
Splunk SPLK-2002 Practice Test - Questions Answers, Page 3


Which Splunk Enterprise offering has its own license?

A. Splunk Cloud Forwarder

B. Splunk Heavy Forwarder

C. Splunk Universal Forwarder

D. Splunk Forwarder Management

Suggested answer: C

Explanation:

The Splunk Universal Forwarder is the only Splunk Enterprise offering that has its own license. The Splunk Universal Forwarder license allows the forwarder to send data to any Splunk Enterprise or Splunk Cloud instance without consuming any license quota. The Splunk Heavy Forwarder does not have its own license, but rather consumes the license quota of the Splunk Enterprise or Splunk Cloud instance that it sends data to. The Splunk Cloud Forwarder and the Splunk Forwarder Management are not separate Splunk Enterprise offerings, but rather features of the Splunk Cloud service. For more information, see [About forwarder licensing] in the Splunk documentation.

Which component in the splunkd.log will log information related to bad event breaking?

A. Audittrail

B. EventBreaking

C. IndexingPipeline

D. AggregatorMiningProcessor

Suggested answer: D

Explanation:

The AggregatorMiningProcessor component in the splunkd.log file will log information related to bad event breaking. The AggregatorMiningProcessor is responsible for breaking the incoming data into events and applying the props.conf settings. If there is a problem with the event breaking, such as incorrect timestamps, missing events, or merged events, the AggregatorMiningProcessor will log the error or warning messages in the splunkd.log file. The Audittrail component logs information about the audit events, such as user actions, configuration changes, and search activity. The EventBreaking component logs information about the event breaking rules, such as the LINE_BREAKER and SHOULD_LINEMERGE settings. The IndexingPipeline component logs information about the indexing pipeline, such as the parsing, routing, and indexing phases. For more information, see About Splunk Enterprise logging and [Configure event line breaking] in the Splunk documentation.
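To see such messages in practice, one illustrative approach (exact field values may vary by Splunk version) is to search the internal index for warnings and errors from this component:

    index=_internal sourcetype=splunkd component=AggregatorMiningProcessor (log_level=WARN OR log_level=ERROR)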

Which Splunk server role regulates the functioning of an indexer cluster?

A. Indexer

B. Deployer

C. Master Node

D. Monitoring Console

Suggested answer: C

Explanation:

The master node is the Splunk server role that regulates the functioning of the indexer cluster. The master node coordinates the activities of the peer nodes, such as data replication, data searchability, and data recovery. The master node also manages the cluster configuration bundle and distributes it to the peer nodes. The indexer is the Splunk server role that indexes the incoming data and makes it searchable. The deployer is the Splunk server role that distributes apps and configuration updates to the search head cluster members. The monitoring console is the Splunk server role that monitors the health and performance of the Splunk deployment. For more information, see About indexer clusters and index replication in the Splunk documentation.
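For illustration, a minimal server.conf sketch of enabling the master node role (attribute names as in classic indexer clustering; newer releases use mode = manager, and the shared secret shown is a placeholder):

    [clustering]
    mode = master
    replication_factor = 3
    search_factor = 2
    # placeholder; must match the secret configured on the peer nodes
    pass4SymmKey = <shared cluster secret>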

When adding or rejoining a member to a search head cluster, the following error is displayed:

Error pulling configurations from the search head cluster captain; consider performing a destructive configuration resync on this search head cluster member.

What corrective action should be taken?

A. Restart the search head.

B. Run the splunk apply shcluster-bundle command from the deployer.

C. Run the clean raft command on all members of the search head cluster.

D. Run the splunk resync shcluster-replicated-config command on this member.

Suggested answer: D

Explanation:

When this error appears while adding or rejoining a search head cluster member, the corrective action is to run the splunk resync shcluster-replicated-config command on that member. This performs a destructive configuration resync: it deletes the member's existing replicated configuration and replaces it with the latest configuration from the captain, so that the member matches the rest of the cluster. Restarting the search head, running splunk apply shcluster-bundle from the deployer, or running clean raft on all members of the cluster will not resolve this condition. For more information, see Resolve configuration inconsistencies across cluster members in the Splunk documentation.
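For reference, a sketch of running the command from $SPLUNK_HOME/bin on the affected member:

    ./splunk resync shcluster-replicated-config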

Which of the following commands is used to clear the KV store?

A. splunk clean kvstore

B. splunk clear kvstore

C. splunk delete kvstore

D. splunk reinitialize kvstore

Suggested answer: A

Explanation:

The splunk clean kvstore command is used to clear the KV store. This command will delete all the collections and documents in the KV store and reset it to an empty state. This command can be useful for troubleshooting KV store issues or resetting the KV store data. The splunk clear kvstore, splunk delete kvstore, and splunk reinitialize kvstore commands are not valid Splunk commands. For more information, see Use the CLI to manage the KV store in the Splunk documentation.
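A usage sketch, assuming the instance must be stopped first and that the --local flag (which limits the operation to this instance) is available in your version:

    ./splunk stop
    ./splunk clean kvstore --local
    ./splunk start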

Indexing is slow and real-time search results are delayed in a Splunk environment with two indexers and one search head. There is ample CPU and memory available on the indexers. Which of the following is most likely to improve indexing performance?

A. Increase the maximum number of hot buckets in indexes.conf

B. Increase the number of parallel ingestion pipelines in server.conf

C. Decrease the maximum size of the search pipelines in limits.conf

D. Decrease the maximum concurrent scheduled searches in limits.conf

Suggested answer: B

Explanation:

Increasing the number of parallel ingestion pipelines in server.conf is the change most likely to improve indexing performance in this scenario. Each additional pipeline set lets an indexer process an extra data stream simultaneously, increasing indexing throughput and reducing indexing latency; because each pipeline set consumes additional CPU cores, the ample spare CPU and memory on the indexers is what makes this option viable. Increasing the maximum number of hot buckets in indexes.conf will not improve indexing performance, but rather increase disk space consumption and bucket rolling time. Decreasing the maximum size of the search pipelines in limits.conf would reduce search performance and concurrency, not speed up indexing. Decreasing the maximum concurrent scheduled searches in limits.conf would reduce search capacity and availability. For more information, see Configure parallel ingestion pipelines in the Splunk documentation.
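A minimal server.conf sketch of the relevant setting (it defaults to 1; 2 is the value commonly used when spare cores are available):

    [general]
    parallelIngestionPipelines = 2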

The guidance Splunk gives for estimating size on disk for syslog data is 50% of the original data size. How does this divide between files in the index?

A. rawdata is: 10%, tsidx is: 40%

B. rawdata is: 15%, tsidx is: 35%

C. rawdata is: 35%, tsidx is: 15%

D. rawdata is: 40%, tsidx is: 10%

Suggested answer: B

Explanation:

Splunk's guidance for estimating the size on disk for syslog data is 50% of the original data size, divided between files in the index as follows: rawdata is 15%, tsidx is 35%. The rawdata is the compressed copy of the original data, which typically takes about 15% of the original data size. The tsidx files contain the time-series metadata and the inverted index, which typically take about 35% of the original data size. Together, the rawdata and tsidx files come to about 50% of the original data size. For more information, see [Estimate your storage requirements] in the Splunk documentation.
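As a worked example under these ratios, 100 GB/day of raw syslog would consume roughly:

    rawdata: 100 GB x 0.15 = 15 GB
    tsidx:   100 GB x 0.35 = 35 GB
    total:   ~50 GB of index disk per day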

In an existing Splunk environment, the new index buckets that are created each day are about half the size of the incoming data. Within each bucket, about 30% of the space is used for rawdata and about 70% for index files.

What additional information is needed to calculate the daily disk consumption, per indexer, if indexer clustering is implemented?

A. Total daily indexing volume, number of peer nodes, and number of accelerated searches.

B. Total daily indexing volume, number of peer nodes, replication factor, and search factor.

C. Total daily indexing volume, replication factor, search factor, and number of search heads.

D. Replication factor, search factor, number of accelerated searches, and total disk size across cluster.

Suggested answer: B

Explanation:

The additional information needed to calculate the daily disk consumption per indexer, if indexer clustering is implemented, is the total daily indexing volume, the number of peer nodes, the replication factor, and the search factor. This information is required to estimate how much data is ingested, how many copies of raw data and searchable data the cluster maintains, and how many indexers share the load. The number of accelerated searches, the number of search heads, and the total disk size across the cluster are not relevant to calculating daily disk consumption per indexer. For more information, see [Estimate your storage requirements] in the Splunk documentation.
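A hedged formula sketch using the ratios given in the question (buckets are 50% of incoming data, of which 30% is rawdata, kept replication-factor times, and 70% is index files, kept search-factor times), assuming data is spread evenly across peers:

    per-indexer daily disk = V x 0.5 x (0.3 x RF + 0.7 x SF) / N

    V  = total daily indexing volume
    RF = replication factor
    SF = search factor
    N  = number of peer nodes

For example, with V = 100 GB, RF = 3, SF = 2, and N = 2: 100 x 0.5 x (0.9 + 1.4) / 2 = 57.5 GB per indexer per day.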

A three-node search head cluster is skipping a large number of searches across time. What should be done to increase scheduled search capacity on the search head cluster?

A. Create a job server on the cluster.

B. Add another search head to the cluster.

C. server.conf captain_is_adhoc_searchhead = true.

D. Change limits.conf value for max_searches_per_cpu to a higher value.

Suggested answer: D

Explanation:

Changing the limits.conf value for max_searches_per_cpu to a higher value is the best option to increase scheduled search capacity when the search head cluster is skipping a large number of searches. This value feeds into the calculation of how many concurrent searches each search head can run; raising it allows more scheduled searches to run at the same time, which reduces the number of skipped searches. Creating a job server on the cluster, setting captain_is_adhoc_searchhead = true in server.conf, or adding another search head to the cluster are not the most direct ways to raise scheduled search capacity. For more information, see [Configure limits.conf] in the Splunk documentation.
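A minimal limits.conf sketch (the default is 1; the total concurrent search limit is derived from this value together with the number of CPU cores and base_max_searches):

    [search]
    max_searches_per_cpu = 2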

The frequency in which a deployment client contacts the deployment server is controlled by what?

A. polling_interval attribute in outputs.conf

B. phoneHomeIntervalInSecs attribute in outputs.conf

C. polling_interval attribute in deploymentclient.conf

D. phoneHomeIntervalInSecs attribute in deploymentclient.conf

Suggested answer: D

Explanation:

The frequency with which a deployment client contacts the deployment server is controlled by the phoneHomeIntervalInSecs attribute in deploymentclient.conf. This attribute specifies how often the deployment client checks in with the deployment server for updates to the apps and configurations it should receive. None of the other attribute/file combinations controls this behavior: outputs.conf governs how forwarders send data to indexers or other forwarders, not how deployment clients phone home. For more information, see Configure deployment clients and Configure forwarders with outputs.conf in the Splunk documentation.
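A minimal deploymentclient.conf sketch (the deployment server host shown is a hypothetical placeholder; 60 seconds is the default check-in interval):

    [deployment-client]
    phoneHomeIntervalInSecs = 60

    [target-broker:deploymentServer]
    # hypothetical host:port for the deployment server
    targetUri = deploy-server.example.com:8089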
