
Splunk SPLK-2002 Practice Test - Questions Answers, Page 15

A customer is migrating 500 Universal Forwarders from an old deployment server to a new deployment server, with a different DNS name. The new deployment server is configured and running.

The old deployment server deployed an app containing an updated deploymentclient.conf file to all forwarders, pointing them to the new deployment server. The app was successfully deployed to all 500 forwarders.

Why would all of the forwarders still be phoning home to the old deployment server?

A. There is a version mismatch between the forwarders and the new deployment server.

B. The new deployment server is not accepting connections from the forwarders.

C. The forwarders are configured to use the old deployment server in $SPLUNK_HOME/etc/system/local.

D. The pass4SymmKey is the same on the new deployment server and the forwarders.

Suggested answer: C

Explanation:

All of the forwarders would still be phoning home to the old deployment server because they are configured to use the old deployment server in $SPLUNK_HOME/etc/system/local. This is the local configuration directory, whose settings override the default settings in $SPLUNK_HOME/etc/system/default. The deploymentclient.conf file specifies the targetUri of the deployment server that the forwarder contacts for configuration updates and apps. If the forwarders have the old deployment server's targetUri in the local directory, they will ignore the updated deploymentclient.conf file that was deployed by the old deployment server, because settings in etc/system/local have higher precedence than settings deployed in an app. To fix this issue, either remove the deploymentclient.conf file from the local directory on each forwarder or update it with the new deployment server's targetUri. Option C is the correct answer. Option A is incorrect because a version mismatch between the forwarders and the new deployment server would not prevent the forwarders from phoning home to the new deployment server, as long as the versions are compatible. Option B is incorrect because the new deployment server is configured and running, and there is no indication that it is refusing connections from the forwarders. Option D is incorrect because the pass4SymmKey is the shared secret that the deployment server and the forwarders use to authenticate each other; it does not prevent the forwarders from phoning home to the new deployment server, as long as it is the same on both sides [1][2].

1: https://docs.splunk.com/Documentation/Splunk/9.1.2/Updating/Configuredeploymentclients
2: https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/Wheretofindtheconfigurationfiles
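
For illustration, a leftover deploymentclient.conf in the local directory of an affected forwarder might look roughly like this; the hostnames and port are hypothetical:

# $SPLUNK_HOME/etc/system/local/deploymentclient.conf on a forwarder
# Settings here override the same settings delivered in a deployed app.
[deployment-client]

[target-broker:deploymentServer]
# Still pointing at the old deployment server (hypothetical host:port)
targetUri = old-ds.example.com:8089

Removing this file (or changing targetUri to the new deployment server, for example new-ds.example.com:8089) and restarting the forwarder lets the app-deployed deploymentclient.conf take effect.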

What types of files exist in a bucket within a clustered index? (select all that apply)

A. Inside a replicated bucket, there is only rawdata.

B. Inside a searchable bucket, there is only tsidx.

C. Inside a searchable bucket, there is tsidx and rawdata.

D. Inside a replicated bucket, there is both tsidx and rawdata.

Suggested answer: C, D

Explanation:

According to the Splunk documentation [1], a bucket within a clustered index contains two key types of files: the raw data in compressed form (rawdata) and the index files that point to the raw data (tsidx files). A replicated bucket is a copy of a bucket that one peer node receives from another for the purpose of data replication; it may contain only the rawdata, or both types of files, depending on the search factor. A searchable bucket is a bucket that has both the rawdata and the tsidx files, and can be searched by the search heads. The types of files that exist in a bucket within a clustered index are:

Inside a searchable bucket, there is tsidx and rawdata. This is true because a searchable bucket contains both the data and the index files, and can be searched by the search heads [1].

Inside a replicated bucket, there is both tsidx and rawdata. This is true because a replicated bucket can also be a searchable bucket, if it has both the data and the index files. However, not all replicated buckets are searchable, as some of them might only have the rawdata file, depending on the replication factor and the search factor settings [1].

The other options are false because:

Inside a replicated bucket, there is only rawdata. This is false because a replicated bucket can also have the tsidx file, if it is a searchable bucket. A replicated bucket only has the rawdata file if it is a non-searchable bucket, which means that it cannot be searched by the search heads until it gets the tsidx file from another peer node [1].

Inside a searchable bucket, there is only tsidx. This is false because a searchable bucket always has both the tsidx and the rawdata files, as they are both required for searching the data. A searchable bucket cannot exist without the rawdata file, as it contains the actual data that the tsidx file points to [1].
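
As a rough illustration of the file types involved, a searchable warm bucket on an indexer contains both kinds of files; the bucket name, timestamps, and exact file names below are made up, and details vary by Splunk version:

$SPLUNK_HOME/var/lib/splunk/defaultdb/db/db_1700000000_1690000000_5/
    rawdata/journal.gz                        <- compressed raw event data
    1700000000-1690000000-123456789.tsidx     <- time-series index file(s)
    Hosts.data  Sources.data  SourceTypes.data    <- bucket metadata

A non-searchable replicated copy of the same bucket holds the rawdata directory but no tsidx files until the peer builds or receives them.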

When designing the number and size of indexes, which of the following considerations should be applied?

A. Expected daily ingest volume, access controls, number of concurrent users

B. Number of installed apps, expected daily ingest volume, data retention time policies

C. Data retention time policies, number of installed apps, access controls

D. Expected daily ingest volumes, data retention time policies, access controls

Suggested answer: D

Explanation:

When designing the number and size of indexes, the following considerations should be applied:

Expected daily ingest volumes: This is the amount of data that will be ingested and indexed by the Splunk platform per day. This affects the storage capacity, the indexing performance, and the license usage of the Splunk deployment. The number and size of indexes should be planned according to the expected daily ingest volumes, as well as the peak ingest volumes, to ensure that the Splunk deployment can handle the data load and meet the business requirements [1][2].

Data retention time policies: This is the duration for which the data will be stored and searchable by the Splunk platform. This affects the storage capacity, the data availability, and the data compliance of the Splunk deployment. The number and size of indexes should be planned according to the data retention time policies, as well as the data lifecycle, to ensure that the Splunk deployment can retain the data for the desired period and meet the legal or regulatory obligations [1][3].

Access controls: This is the mechanism for granting or restricting access to the data by the Splunk users or roles. This affects the data security, the data privacy, and the data governance of the Splunk deployment. The number and size of indexes should be planned according to the access controls, as well as the data sensitivity, to ensure that the Splunk deployment can protect the data from unauthorized or inappropriate access and meet the ethical or organizational standards [1][4].

Option D is the correct answer because it reflects the most relevant and important considerations for designing the number and size of indexes. Option A is incorrect because the number of concurrent users is not a direct factor for designing the number and size of indexes, but rather a factor for designing the search head capacity and the search head clustering configuration [5]. Option B is incorrect because the number of installed apps is not a direct factor for designing the number and size of indexes, but rather a factor for app compatibility and app performance. Option C is incorrect because it omits the expected daily ingest volumes, which is a crucial factor for designing the number and size of indexes.

1: Splunk Validated Architectures
2: Indexer capacity planning
3: Set a retirement and archiving policy for your indexes
4: About securing Splunk Enterprise
5: Search head capacity planning
6: App installation and management overview
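
As an illustration of how these considerations map onto configuration (the index and role names below are hypothetical), expected ingest and retention drive per-index settings in indexes.conf, while access controls are expressed per role in authorize.conf:

# indexes.conf on the indexers -- hypothetical index
[firewall_logs]
homePath   = $SPLUNK_DB/firewall_logs/db
coldPath   = $SPLUNK_DB/firewall_logs/colddb
thawedPath = $SPLUNK_DB/firewall_logs/thaweddb
# sized from expected daily ingest volume multiplied by retention
maxTotalDataSizeMB     = 500000
# retention policy: roughly 90 days before data is frozen
frozenTimePeriodInSecs = 7776000

# authorize.conf on the search heads -- hypothetical role
[role_netops]
srchIndexesAllowed = firewall_logs
srchIndexesDefault = firewall_logs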

Which Splunk component is mandatory when implementing a search head cluster?

A. Captain Server

B. Deployer

C. Cluster Manager

D. RAFT Server

Suggested answer: B

Explanation:

The deployer is a mandatory Splunk component when implementing a search head cluster, as it is responsible for distributing configuration updates and app bundles to the cluster members [1]. The deployer is a separate instance, sitting outside the cluster, that pushes these changes to the search heads [1]. The other options are not mandatory components for a search head cluster. Option A, Captain Server, is not a separate component, but a role that is dynamically assigned to one of the search heads in the cluster [2]. The captain coordinates the replication and search activities among the cluster members [2]. Option C, Cluster Manager, is a component of an indexer cluster, not a search head cluster [3]. The cluster manager manages the replication and search factors, and provides a web interface for monitoring and managing the indexer cluster [3]. Option D, RAFT Server, is not a component, but the name of the consensus protocol that the search head cluster uses to elect the captain and maintain cluster state [4]. Therefore, option B is the correct answer, and options A, C, and D are incorrect.

1: Use the deployer to distribute apps and configuration updates
2: About the captain
3: About the cluster manager
4: How a search head cluster works
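
As a sketch of how this fits together (hostnames and credentials are hypothetical), each cluster member is pointed at the deployer in server.conf, and the deployer pushes its staged apps with the apply command:

# server.conf on each search head cluster member
[shclustering]
conf_deploy_fetch_url = https://deployer.example.com:8089
pass4SymmKey = <shared secret>

# On the deployer, after staging apps under $SPLUNK_HOME/etc/shcluster/apps:
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme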

When implementing KV Store Collections in a search head cluster, which of the following considerations is true?

A. The KV Store Primary coordinates with the search head cluster captain when collection content changes.

B. The search head cluster captain is also the KV Store Primary when collection content changes.

C. The KV Store Collection will not allow for changes to content if there are more than 50 search heads in the cluster.

D. Each search head in the cluster independently updates its KV store collection when collection content changes.

Suggested answer: B

Explanation:

According to the Splunk documentation [1], in a search head cluster, the KV Store Primary is the same node as the search head cluster captain. The KV Store Primary is responsible for coordinating the replication of KV Store data across the cluster members. When any node receives a write request, the KV Store delegates the write to the KV Store Primary. The KV Store keeps the reads local, however. This ensures that the KV Store data is consistent and available across the cluster.

About the app key value store

KV Store and search head clusters
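
A quick way to confirm which member currently holds the captain role (and therefore acts as the KV Store primary) is to check the cluster and KV Store status from a member; both are standard CLI commands, and the credentials below are hypothetical:

# Shows the current captain and the status of each cluster member
splunk show shcluster-status -auth admin:changeme

# Shows the local KV Store member's replication status
splunk show kvstore-status -auth admin:changeme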

Which of the following is true for indexer cluster knowledge bundles?

A. Only app-name/local is pushed.

B. app-name/default and app-name/local are merged before pushing.

C. Only app-name/default is pushed.

D. app-name/default and app-name/local are pushed without change.

Suggested answer: B

Explanation:

According to the Splunk documentation [1], indexer cluster knowledge bundles are the configuration files that the cluster master distributes to the peer nodes as part of the cluster configuration bundle. The knowledge bundles contain the knowledge objects, such as event types, tags, lookups, and so on, that are relevant for indexing and searching the data. The cluster master creates the knowledge bundles by merging the app-name/default and app-name/local directories from the apps that reside on the master node. The cluster master then pushes the knowledge bundles to the peer nodes, where they reside under the $SPLUNK_HOME/etc/slave-apps directory (peer-apps in recent versions) [2]. The other options are false because:

Only app-name/local is pushed. This is false because the cluster master pushes both the app-name/default and app-name/local directories, after merging them, to the peer nodes. The app-name/local directory contains the local customizations of the app configuration, while the app-name/default directory contains the default app configuration [3].

Only app-name/default is pushed. This is false because the cluster master pushes both the app-name/default and app-name/local directories, after merging them, to the peer nodes. The app-name/default directory contains the default app configuration, while the app-name/local directory contains the local customizations of the app configuration [3].

app-name/default and app-name/local are pushed without change. This is false because the cluster master merges the app-name/default and app-name/local directories before pushing them to the peer nodes. This ensures that the peer nodes have the latest and consistent configuration of the apps [3].
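
As an illustration (the app name is hypothetical), the bundle is staged on the cluster master and distributed with the apply command:

# On the cluster master, apps are staged under:
#   $SPLUNK_HOME/etc/master-apps/<app-name>/default
#   $SPLUNK_HOME/etc/master-apps/<app-name>/local   (merged into default when pushed)
# Validate, then push the configuration bundle to the peer nodes:
splunk validate cluster-bundle
splunk apply cluster-bundle -auth admin:changeme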

When preparing to ingest a new data source, which of the following is optional in the data source assessment?

A. Data format

B. Data location

C. Data volume

D. Data retention

Suggested answer: D

Explanation:

Data retention is optional in the data source assessment because it is not directly related to the ingestion process. Data retention is determined by the index configuration and the storage capacity of the Splunk platform. Data format, data location, and data volume are all essential information for planning how to collect, parse, and index the data source.

Drive more value through data source and use case optimization - Splunk, page 9

Data source planning for Splunk Enterprise Security
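
For illustration (the path, sourcetype, and index below are hypothetical), the format and location identified during the assessment translate directly into input and parsing configuration, while retention is handled separately in indexes.conf:

# inputs.conf -- where the data lives (location)
[monitor:///var/log/payments/app.log]
sourcetype = payments:app
index = payments

# props.conf -- how the data is structured (format)
[payments:app]
TIME_PREFIX = ^\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)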

Where in the Job Inspector can details be found to help determine where performance is affected?

A. Search Job Properties > runDuration

B. Search Job Properties > runtime

C. Job Details Dashboard > Total Events Matched

D. Execution Costs > Components

Suggested answer: D

Explanation:

The Execution Costs > Components section of the Job Inspector is where details can be found to help determine where performance is affected, as it shows the time and resources spent by each component of the search, such as commands, subsearches, lookups, and post-processing [1]. The Execution Costs > Components section can help identify the most expensive or inefficient parts of the search, and suggest ways to optimize or improve the search performance [1]. The other options are not as useful as the Execution Costs > Components section for finding performance issues. Option A, Search Job Properties > runDuration, shows the total time, in seconds, that the search took to run [2]. This can indicate the overall performance of the search, but it does not provide any details on the specific components or factors that affected the performance. Option B, Search Job Properties > runtime, shows the time, in seconds, that the search took to run on the search head [2]. This can indicate the performance of the search head, but it does not account for the time spent on the indexers or the network. Option C, Job Details Dashboard > Total Events Matched, shows the number of events that matched the search criteria [3]. This can indicate the size and scope of the search, but it does not provide any information on the performance or efficiency of the search. Therefore, option D is the correct answer, and options A, B, and C are incorrect.

1: Execution Costs > Components
2: Search Job Properties
3: Job Details Dashboard
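
As a rough sketch of what this section looks like (the component names are typical, but the exact list depends on the search, so durations are omitted):

Duration (seconds)   Component                 Invocations
...                  command.search            ...
...                  command.search.index      ...
...                  command.search.rawdata    ...
...                  command.search.kv         ...
...                  dispatch.evaluate         ...
...                  dispatch.stream.remote    ...

A disproportionately large duration on command.search.rawdata, for example, points at expensive raw-data scanning, whereas time concentrated in dispatch.stream.remote reflects the portion of the search handled on the remote search peers.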

Which command should be run to re-sync a stale KV Store member in a search head cluster?

A. splunk clean kvstore -local

B. splunk resync kvstore -remote

C. splunk resync kvstore -local

D. splunk clean eventdata -local

Suggested answer: A

Explanation:

To resync a stale KV Store member in a search head cluster, you need to stop the search head that has the stale KV Store member, run the command splunk clean kvstore --local, and then restart the search head. This triggers the initial synchronization from other KV Store members [1][2].

The command splunk resync kvstore [-source sourceId] is used to resync the entire KV Store cluster from one of the members, not a single member. This command can only be invoked from the node that is operating as search head cluster captain [2].

The command splunk clean eventdata -local is used to delete all indexed data from a standalone indexer or a cluster peer node, not to resync the KV Store [3].

1: How to resolve error on a search head member in the search head cluster ...

2: Resync the KV store - Splunk Documentation

3: Delete indexed data - Splunk Documentation
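
Putting the recovery steps described above together, the sequence run on the affected search head looks like this:

# On the search head whose KV Store member is stale:
splunk stop
splunk clean kvstore --local
splunk start
# On restart, the member performs an initial sync from the other KV Store members.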

What is the best method for sizing or scaling a search head cluster?

A. Estimate the maximum daily ingest volume in gigabytes and divide by the number of CPU cores per search head.

B. Estimate the total number of searches per day and divide by the number of CPU cores available on the search heads.

C. Divide the number of indexers by three to achieve the correct number of search heads.

D. Estimate the maximum concurrent number of searches and divide by the number of CPU cores per search head.

Suggested answer: D

Explanation:

According to the Splunk blog [1], the best method for sizing or scaling a search head cluster is to estimate the maximum concurrent number of searches and divide by the number of CPU cores per search head. This gives you an idea of how many search heads you need to handle the peak search load without overloading the CPU resources. The other options are false because:

Estimating the maximum daily ingest volume in gigabytes and dividing by the number of CPU cores per search head is not a good method for sizing or scaling a search head cluster, as it does not account for the complexity and frequency of the searches. The ingest volume is more relevant for sizing or scaling the indexers, not the search heads [2].

Estimating the total number of searches per day and dividing by the number of CPU cores available on the search heads is not a good method for sizing or scaling a search head cluster, as it does not account for the concurrency and duration of the searches. The total number of searches per day is an average metric that does not reflect the peak search load or the search performance [2].

Dividing the number of indexers by three to achieve the correct number of search heads is not a good method for sizing or scaling a search head cluster, as it does not account for the search load or the search head capacity. The number of indexers is not directly proportional to the number of search heads, as different types of data and searches may require different amounts of resources [2].
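
As a hedged worked example with made-up numbers: if the peak load is expected to be about 32 concurrent searches (ad hoc plus scheduled) and each search head has 16 usable CPU cores, then 32 / 16 = 2 search heads cover the search load, and a third member is commonly added so the cluster keeps a majority for captain election and has headroom for failures.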
