
Splunk SPLK-2002 Practice Test - Questions Answers, Page 8

When should multiple search pipelines be enabled?

A. Only if disk IOPS is at 800 or better.

B. Only if there are fewer than twelve concurrent users.

C. Only if running Splunk Enterprise version 6.6 or later.

D. Only if CPU and memory resources are significantly under-utilized.
Suggested answer: D

Explanation:

Multiple search pipelines should be enabled only if CPU and memory resources are significantly under-utilized. A search pipeline is the set of processes that executes search commands and returns results. Enabling multiple search pipelines can improve search performance by running concurrent searches in parallel, but each additional pipeline consumes more CPU and memory, which can degrade overall system performance. Enable them only when the system has ample spare CPU and memory and is not bottlenecked by disk I/O or network bandwidth. The number of concurrent users, the disk IOPS, and the Splunk Enterprise version are not the deciding factors for enabling multiple search pipelines.
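As an illustrative sketch, batch-mode search parallelization is controlled in limits.conf on the indexers; the value shown here is an example for a machine with plenty of idle CPU and memory, not a recommendation:

```ini
# limits.conf (on the indexers) -- illustrative value, not a recommendation
[search]
# Number of pipelines used for batch-mode searches. Raise above 1 only
# when CPU and memory are significantly under-utilized.
batch_search_max_pipeline = 2
```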

Of the following types of files within an index bucket, which file type may consume the most disk?

A. Rawdata

B. Bloom filter

C. Metadata (.data)

D. Inverted index (.tsidx)
Suggested answer: A

Explanation:

Of these file types within an index bucket, the rawdata file typically consumes the most disk. The rawdata journal contains the compressed raw data that Splunk has ingested, and it is usually the largest file in a bucket because it stores the complete original events. The Bloom filter is a probabilistic data structure used to determine quickly whether a bucket might contain events that match a given search; it is very small because it stores only a bit array of hashes. The metadata (.data) files hold bucket-level information such as the hosts, sources, and source types of the events, and are also small. The inverted index (.tsidx) files map indexed terms to the events in the raw data; their size varies with the number and variety of events, but they are usually smaller than the rawdata.
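A small sketch of how you might tally disk usage by file type inside one bucket directory. The file-name patterns assumed here follow the conventional bucket layout (the journal under rawdata/, *.tsidx, *.data, a file named bloomfilter); real buckets can contain additional files:

```python
import os
from collections import defaultdict

def bucket_disk_usage(bucket_dir):
    """Group the files in one index bucket directory by type and sum sizes.

    Assumes the conventional layout: the rawdata journal lives under a
    rawdata/ subdirectory, inverted indexes end in .tsidx, metadata files
    end in .data, and the Bloom filter is a file named 'bloomfilter'.
    """
    usage = defaultdict(int)
    for root, _dirs, files in os.walk(bucket_dir):
        for name in files:
            size = os.path.getsize(os.path.join(root, name))
            if os.path.basename(root) == "rawdata":
                usage["rawdata"] += size
            elif name.endswith(".tsidx"):
                usage["tsidx"] += size
            elif name.endswith(".data"):
                usage["metadata"] += size
            elif name == "bloomfilter":
                usage["bloomfilter"] += size
            else:
                usage["other"] += size
    return dict(usage)
```

Running this against real buckets is a quick way to confirm that the rawdata journal dominates the on-disk footprint.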

When converting from a single-site to a multi-site cluster, what happens to existing single-site clustered buckets?

A. They will continue to replicate within the origin site and age out based on existing policies.

B. They will maintain replication as required according to the single-site policies, but never age out.

C. They will be replicated across all peers in the multi-site cluster and age out based on existing policies.

D. They will stop replicating within the single-site and remain on the indexer they reside on and age out according to existing policies.
Suggested answer: D

Explanation:

When converting from a single-site to a multi-site cluster, existing single-site buckets are not converted to multi-site buckets. The cluster stops replicating them: their copies remain on the peers where they already reside, and they age out (roll to frozen) according to the existing retention policies. Only buckets created after the conversion are replicated according to the multi-site replication and search factors. The existing buckets do not continue to replicate within an origin site, because single-site buckets have no site affinity. They do not stay in the cluster forever, because the normal freezing and retention policies still apply to them. And they are not re-replicated across all peers of the multi-site cluster, because the multi-site replication factor applies only to buckets created after the conversion.

Which of the following should be done when installing Enterprise Security on a Search Head Cluster? (Select all that apply.)

A. Install Enterprise Security on the deployer.

B. Install Enterprise Security on a staging instance.

C. Copy the Enterprise Security configurations to the deployer.

D. Use the deployer to deploy Enterprise Security to the cluster members.
Suggested answer: A, D

Explanation:

When installing Enterprise Security on a Search Head Cluster (SHC), the following steps should be done: Install Enterprise Security on the deployer, and use the deployer to deploy Enterprise Security to the cluster members. Enterprise Security is a premium app that provides security analytics and monitoring capabilities for Splunk. Enterprise Security can be installed on a SHC by using the deployer, which is a standalone instance that distributes apps and other configurations to the SHC members. Enterprise Security should be installed on the deployer first, and then deployed to the cluster members using the splunk apply shcluster-bundle command. Enterprise Security should not be installed on a staging instance, because a staging instance is not part of the SHC deployment process. Enterprise Security configurations should not be copied to the deployer, because they are already included in the Enterprise Security app package.
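A rough sketch of the deployer workflow described above. The package name, paths, member URI, and credentials are placeholders, and Enterprise Security has version-specific installation steps, so treat this as the general shape rather than the exact procedure:

```shell
# On the deployer (names in <angle brackets> are placeholders):
# 1. Place the Enterprise Security app in the configuration bundle.
tar -xzf splunk-enterprise-security_<version>.tgz \
    -C $SPLUNK_HOME/etc/shcluster/apps

# 2. Push the bundle to the search head cluster members.
$SPLUNK_HOME/bin/splunk apply shcluster-bundle \
    -target https://<member>:8089 -auth admin:<password>
```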

Splunk configuration parameter settings can differ between multiple .conf files of the same name contained within different apps. Which of the following directories has the highest precedence?

A. System local directory.

B. System default directory.

C. App local directories, in ASCII order.

D. App default directories, in ASCII order.
Suggested answer: A

Explanation:

The system local directory has the highest precedence among these directories. Splunk configuration files are stored in various directories under SPLUNK_HOME/etc, and directory precedence determines which settings take effect when the same parameter appears in multiple .conf files of the same name. In the global context, the order from highest to lowest is: the system local directory (SPLUNK_HOME/etc/system/local), the app local directories (SPLUNK_HOME/etc/apps/APP_NAME/local), the app default directories (SPLUNK_HOME/etc/apps/APP_NAME/default), and the system default directory (SPLUNK_HOME/etc/system/default). The system local directory wins because it holds instance-specific settings managed by the administrator; the system default directory loses because it holds the defaults shipped with Splunk, which should never be modified. Likewise, an app's local directory outranks its default directory. When multiple apps set the same parameter, the app directories are evaluated in ASCII sort order of the app names, with names earlier in ASCII order taking precedence.
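The layering above can be modeled as a toy merge, assuming (per the documented global-context order) that system/local wins over everything, app directories are consulted in ASCII order of app name, and system/default loses to everything. This is an illustration of the precedence rules, not how Splunk itself is implemented:

```python
def merge_conf(layers):
    """Merge configuration layers; the earliest layer in the list wins.

    `layers` is a list of (layer_name, settings_dict) pairs ordered from
    highest precedence to lowest.
    """
    merged = {}
    for _name, settings in layers:
        for key, value in settings.items():
            merged.setdefault(key, value)  # first (highest-precedence) wins
    return merged

def splunk_global_precedence(system_local, app_local, app_default, system_default):
    """Order the layers the way Splunk does in the global context.

    `app_local` and `app_default` are dicts keyed by app name; apps are
    consulted in ASCII sort order of their names.
    """
    layers = [("system/local", system_local)]
    for app in sorted(app_local):
        layers.append((f"apps/{app}/local", app_local[app]))
    for app in sorted(app_default):
        layers.append((f"apps/{app}/default", app_default[app]))
    layers.append(("system/default", system_default))
    return merge_conf(layers)
```

In practice you would inspect the merged result on a live instance with `splunk btool <conf-name> list --debug` rather than reason it out by hand.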

Which of the following is an indexer clustering requirement?

A. Must use shared storage.

B. Must reside on a dedicated rack.

C. Must have at least three members.

D. Must share the same license pool.
Suggested answer: D

Explanation:

An indexer clustering requirement is that all cluster members share the same license pool and report to the same license master. A license pool is a group of license volume allocated to a set of Splunk instances, and the license master manages the distribution and enforcement of licenses in a pool. All peers in an indexer cluster must draw from the same pool and report to the same license master so that license usage and violations are handled consistently across the cluster. An indexer cluster does not require shared storage, because each peer keeps its index data on its own local storage. It does not have to reside on a dedicated rack, because the members can run on different physical or virtual machines as long as they can communicate with each other. And it does not need at least three members; a cluster can run with as few as two peers, although that is not recommended for high availability.
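A sketch of the relevant setting on each peer; the URI is a placeholder, and in recent Splunk versions the setting is named `manager_uri` rather than `master_uri`:

```ini
# server.conf on every cluster peer -- URI is a placeholder
[license]
# All indexer cluster members must report to the same license master
# (license manager) and draw from the same license pool.
master_uri = https://<license-manager>:8089
```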

What is the algorithm used to determine captaincy in a Splunk search head cluster?

A. Raft distributed consensus.

B. Rapt distributed consensus.

C. Rift distributed consensus.

D. Round-robin distribution consensus.
Suggested answer: A

Explanation:

The algorithm used to determine captaincy in a Splunk search head cluster is Raft distributed consensus. Raft is a consensus algorithm for electing a leader among the nodes of a distributed system. In a search head cluster, the members use Raft to elect a captain: the member responsible for coordinating search activities, replicating configurations and apps, and pushing knowledge bundles to the search peers. An election is held when the cluster starts up, when the current captain fails, or after a network partition; the member that wins a majority of votes becomes captain, so captaincy can move between members over time. Rapt, Rift, and round-robin are not valid algorithms for determining captaincy in a search head cluster.
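The majority-vote rule at the heart of Raft can be sketched in a few lines. This toy ignores election terms, randomized timeouts, and log comparisons; it only shows that a candidate needs votes from a strict majority of all members to become leader (captain):

```python
from collections import Counter

def majority_winner(votes):
    """Raft-style election check.

    `votes` maps each member to the candidate it voted for. Returns the
    winning candidate, or None if no candidate has a strict majority (a
    split vote, which in Raft triggers a new election term).
    """
    tally = Counter(votes.values())
    candidate, count = tally.most_common(1)[0]
    return candidate if count > len(votes) / 2 else None
```

A split vote returning None is why search head clusters with an even number of members (or that lose half their members) can fail to elect a captain.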

Which of the following statements about integrating with third-party systems is true? (Select all that apply.)

A. A Hadoop application can search data in Splunk.

B. Splunk can search data in the Hadoop File System (HDFS).

C. You can use Splunk alerts to provision actions on a third-party system.

D. You can forward data from Splunk forwarder to a third-party system without indexing it first.
Suggested answer: C, D

Explanation:

The true statements are: you can use Splunk alerts to provision actions on a third-party system, and you can forward data from a Splunk forwarder to a third-party system without indexing it first. Splunk alerts are triggered events that can execute custom actions, such as sending an email, running a script, or calling a webhook, which makes them a natural integration point with ticketing systems, notification services, and automation platforms; for example, an alert can create a ticket in ServiceNow, post a message to Slack, or trigger a workflow in Ansible. Splunk forwarders, which normally collect and forward data to indexers or heavy forwarders, can also send data to third-party systems such as syslog receivers or Hadoop without indexing it first, which is useful for feeding other data processing, storage, or monitoring tools. A Hadoop application cannot search data in Splunk, because Splunk does not provide a native interface for Hadoop applications to query Splunk data. And core Splunk cannot search data in HDFS on its own; that requires an add-on such as Hadoop Connect.
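A minimal sketch of a custom alert-action script that relays a triggered alert to a third-party webhook. The stdin-payload mechanism (Splunk runs the script with `--execute` and passes a JSON payload) reflects how custom alert actions receive their data, but the exact field names and the endpoint URL here are illustrative assumptions:

```python
import json
import sys
import urllib.request

WEBHOOK_URL = "https://example.com/hook"  # hypothetical third-party endpoint

def build_body(payload):
    """Shape the outgoing JSON from the alert payload Splunk passes on stdin.

    The fields picked out here (search_name, sid, result) are illustrative;
    consult the custom alert action docs for the full payload schema.
    """
    return json.dumps({
        "alert": payload.get("search_name"),
        "sid": payload.get("sid"),
        "result": payload.get("result", {}),
    }).encode("utf-8")

def forward_alert(payload, url=WEBHOOK_URL):
    """POST the alert to the third-party system (e.g. a ticketing webhook)."""
    req = urllib.request.Request(
        url, data=build_body(payload),
        headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)

if __name__ == "__main__" and "--execute" in sys.argv:
    forward_alert(json.load(sys.stdin))
```

The forwarder side of the integration is configured separately, typically in outputs.conf (for example, sending uncooked data to a non-Splunk TCP or syslog receiver).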

As a best practice, where should the internal licensing logs be stored?

A. Indexing layer.

B. License server.

C. Deployment layer.

D. Search head layer.
Suggested answer: B

Explanation:

As a best practice, the internal licensing logs should be stored on the license server. The license server manages the distribution and enforcement of licenses in a Splunk deployment, and it generates internal licensing logs recording license usage, violations, warnings, and pool activity. Keeping these logs on the license server itself keeps them with the role that produces them and simplifies license monitoring and troubleshooting. Storing them on the indexing, deployment, or search head layers would add network traffic and disk usage without any benefit, since those layers play no part in license management.
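As an illustration, license consumption is commonly reviewed by searching the license server's internal license_usage.log. This is one typical search shape (the `type="Usage"` events and the `b` and `pool` fields are part of that log's format):

```spl
index=_internal source=*license_usage.log* type="Usage"
| stats sum(b) AS bytes BY pool
| eval GB = round(bytes / 1024 / 1024 / 1024, 2)
```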

How does the average run time of all searches relate to the available CPU cores on the indexers?

A. Average run time is independent of the number of CPU cores on the indexers.

B. Average run time decreases as the number of CPU cores on the indexers decreases.

C. Average run time increases as the number of CPU cores on the indexers decreases.

D. Average run time increases as the number of CPU cores on the indexers increases.
Suggested answer: C

Explanation:

The average run time of all searches increases as the number of CPU cores on the indexers decreases. Indexers do the heavy lifting of retrieving and filtering events from the indexes, and each search process needs CPU to do that work. With more cores, the indexers can process search workload in parallel and return results faster; with fewer cores, searches contend for CPU and take longer. Average run time is therefore inversely related to indexer core count: it is not independent of core count, it does not decrease when cores are removed, and it does not increase when cores are added.
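The inverse relationship can be captured in a deliberately simple model, which assumes search work divides evenly across cores and ignores disk I/O, network, and scheduling overhead; it is a reasoning aid, not a sizing formula:

```python
def estimated_run_time(total_core_seconds, cores_per_indexer, indexers=1):
    """Toy model: search run time scales inversely with available cores.

    `total_core_seconds` is the total CPU work a search needs; the work is
    assumed (unrealistically) to parallelize perfectly across all cores.
    """
    return total_core_seconds / (cores_per_indexer * indexers)
```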

Total 160 questions