Splunk SPLK-2002 Practice Test - Questions Answers, Page 5

A customer plans to ingest 600 GB of data per day into Splunk. They will have six concurrent users, and they also want high data availability and high search performance. The customer is concerned about cost and wants to spend the minimum amount on the hardware for Splunk. How many indexers are recommended for this deployment?

A. Two indexers not in a cluster, assuming users run many long searches.

B. Three indexers not in a cluster, assuming a long data retention period.

C. Two indexers clustered, assuming high availability is the greatest priority.

D. Two indexers clustered, assuming a high volume of saved/scheduled searches.
Suggested answer: C

Explanation:

Two indexers clustered is the recommended deployment for a customer who plans to ingest 600 GB of data per day, has six concurrent users, and wants high data availability and high search performance. Per the Splunk reference hardware guidelines, two indexers provide enough indexing capacity and search concurrency for this load, and clustering adds data replication and failover so the data remains searchable if one indexer goes down. Using only two indexers also keeps hardware cost to a minimum. Two indexers not in a cluster would not provide high availability, because there is no data replication or failover. Three indexers not in a cluster would add indexing capacity and search concurrency, but at higher hardware cost and still without high availability. The assumptions in the other options (many long searches, long data retention, a high volume of saved/scheduled searches) do not match the customer's stated priorities, so they do not drive the indexer count here. For more information, see [Reference hardware] and [About indexer clusters and index replication] in the Splunk documentation.
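As a minimal sketch of what the clustered option looks like in configuration (the hostname and pass4SymmKey value are placeholders; on Splunk versions before 8.1 the equivalent setting names are mode = master/slave and master_uri):

    # server.conf on the cluster manager
    [clustering]
    mode = manager
    replication_factor = 2
    search_factor = 2
    pass4SymmKey = changeme

    # server.conf on each of the two indexers (peers)
    [replication_port://9887]

    [clustering]
    mode = peer
    manager_uri = https://cm.example.com:8089
    pass4SymmKey = changeme

With replication_factor = 2 on a two-indexer cluster, each bucket exists on both peers, so searches can continue if one indexer fails.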

To reduce the captain's workload in a search head cluster, what setting will prevent scheduled searches from running on the captain?

A. adhoc_searchhead = true (on all members)

B. adhoc_searchhead = true (on the current captain)

C. captain_is_adhoc_searchhead = true (on all members)

D. captain_is_adhoc_searchhead = true (on the current captain)
Suggested answer: D

Explanation:

To reduce the captain's workload in a search head cluster, the setting that prevents scheduled searches from running on the captain is captain_is_adhoc_searchhead = true, applied on the current captain. This designates the captain as an ad hoc search head, meaning it runs only ad hoc searches initiated by users and no scheduled searches, which reduces the captain's workload and improves search head cluster performance. The adhoc_searchhead = true (on all members) setting would designate every member as an ad hoc search head, so none of them would run scheduled searches, which is not desirable. The adhoc_searchhead = true (on the current captain) setting does not achieve the goal, because it permanently removes that one member from the scheduler rather than following whichever member currently holds the captain role. The captain_is_adhoc_searchhead = true (on all members) option applies more broadly than needed, since the setting only takes effect on the member that is currently captain. For more information, see Configure the captain as an ad hoc search head in the Splunk documentation.
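A minimal sketch of where this setting lives (the [shclustering] stanza is the standard server.conf location for search head clustering settings; the change takes effect after a restart of that member):

    # server.conf on the member currently acting as captain
    [shclustering]
    captain_is_adhoc_searchhead = true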

Where does the Splunk deployer send apps by default?

A. etc/slave-apps/<app-name>/default

B. etc/deploy-apps/<app-name>/default

C. etc/apps/<app-name>/default

D. etc/shcluster/<app-name>/default
Suggested answer: D

Explanation:

Of the answer choices given, the Splunk deployer sends apps to search head cluster members from etc/shcluster/<app-name>/default. The deployer is the Splunk component that distributes apps and configuration bundles to the members of a search head cluster.

Note that Splunk's documentation has you place the configuration bundle under $SPLUNK_HOME/etc/shcluster/apps on the deployer, which is then distributed to the search head cluster members; within each app's directory, settings can sit under default or local subdirectories, with local taking precedence over default. The path in option D omits the apps level, so the full location from which the deployer actually picks up app configurations is $SPLUNK_HOME/etc/shcluster/apps/<app-name>/default.
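As an illustration, a hypothetical app named my_app staged on the deployer would sit under the apps level like this (my_app and the .conf file names are placeholders):

    $SPLUNK_HOME/etc/shcluster/
    └── apps/
        └── my_app/
            ├── default/
            │   └── savedsearches.conf
            └── local/
                └── props.conf

Running splunk apply shcluster-bundle on the deployer then pushes everything under apps/ to the cluster members.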

If .delta replication fails during knowledge bundle replication, what is the fall-back method for Splunk?

A. Restart splunkd.

B. .delta replication.

C. .bundle replication.

D. Restart mongod.
Suggested answer: C

Explanation:

.bundle replication is the fall-back method if .delta replication fails during knowledge bundle replication. Knowledge bundle replication is the process of distributing knowledge objects, such as lookups, macros, and field extractions, from the search head cluster to the indexer cluster [1]. Splunk uses two methods of knowledge bundle replication: .delta replication and .bundle replication [1]. .delta replication is the default and preferred method, as it replicates only the changes to the knowledge objects, which reduces network traffic and disk space usage [1]. If .delta replication fails for some reason, such as corrupted files or network errors, Splunk automatically falls back to .bundle replication, which replicates the entire knowledge bundle regardless of what changed [1]. This ensures that the knowledge objects stay synchronized between the search head cluster and the indexer cluster, at the cost of more network bandwidth and disk space [1]. The other options are not valid fall-back methods. Option A, restarting splunkd, is not a replication method but a way to restart the Splunk daemon on a node [2]; it may or may not fix the .delta replication failure, and it does not guarantee synchronization of the knowledge objects. Option B, .delta replication, is not a fall-back method but the primary method, which the question assumes has already failed [1]. Option D, restarting mongod, restarts the MongoDB daemon that backs the KV store [3]; it relates to KV store replication, not knowledge bundle replication. Therefore, option C is correct.

1: How knowledge bundle replication works
2: Start and stop Splunk Enterprise
3: Restart the KV store
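To observe bundle replication behavior on a live deployment, one option is to search splunkd's internal logs; the component name below is the one splunkd commonly uses for bundle replication messages, but verify it against your own _internal data:

    index=_internal sourcetype=splunkd component=DistributedBundleReplicationManager
    | stats count by log_level, host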

In splunkd.log events written to the _internal index, which field identifies the specific log channel?

A. component

B. source

C. sourcetype

D. channel
Suggested answer: D

Explanation:

In the context of splunkd.log events written to the _internal index, the field that identifies the specific log channel is the 'channel' field. This information is confirmed by the Splunk Common Information Model (CIM) documentation, where 'channel' is listed as a field name associated with Splunk Audit Logs.
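As a quick check on a live instance, you can group splunkd events by that field (this sketch assumes the channel field is extracted for splunkd events on your deployment; if it is not, compare the results against the component field):

    index=_internal sourcetype=splunkd source=*splunkd.log*
    | stats count by channel
    | sort - count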

At which default interval does metrics.log generate a periodic report regarding license utilization?

A. 10 seconds

B. 30 seconds

C. 60 seconds

D. 300 seconds
Suggested answer: C

Explanation:

The default interval at which metrics.log generates a periodic report regarding license utilization is 60 seconds. This report contains information about license usage and quota for the Splunk instance, as well as the license pool and stack. The report is generated every 60 seconds by default; the other intervals (10 seconds, 30 seconds, and 300 seconds) are not defaults, although an administrator can adjust the reporting interval in the metrics.log logging configuration if needed. For more information, see About metrics.log and Configure metrics.log in the Splunk documentation.
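To see what metrics.log is actually reporting on an instance, a common starting point is to group its events by the standard group field (which report groups appear varies by instance and role):

    index=_internal source=*metrics.log*
    | stats count by group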

Which of the following is a good practice for a search head cluster deployer?

A. The deployer only distributes configurations to search head cluster members when they "phone home".

B. The deployer must be used to distribute non-replicable configurations to search head cluster members.

C. The deployer must distribute configurations to search head cluster members to be valid configurations.

D. The deployer only distributes configurations to search head cluster members with splunk apply shcluster-bundle.
Suggested answer: B

Explanation:

A good practice for a search head cluster deployer is the following: the deployer must be used to distribute non-replicable configurations to search head cluster members. Non-replicable configurations are those that the cluster's internal configuration replication does not propagate, such as apps and settings edited directly in configuration files; the deployer distributes these to all members so that they share a consistent configuration baseline. The deployer does not distribute configurations only when members "phone home", as that would cause configuration inconsistencies and delays. The deployer does not make configurations valid; configurations are not invalid merely because the deployer did not distribute them. Finally, splunk apply shcluster-bundle is the usual way to push a bundle from the deployer, but it is not the only distribution mechanism, since members also fetch the current bundle from the deployer when they join or restart. For more information, see Use the deployer to distribute apps and configuration updates in the Splunk documentation.
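For reference, a typical push from the deployer looks like this (the target URI, port, and credentials are placeholders):

    splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme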

A new Splunk customer is using syslog to collect data from their network devices on port 514. What is the best practice for ingesting this data into Splunk?

A. Configure syslog to send the data to multiple Splunk indexers.

B. Use a Splunk indexer to collect a network input on port 514 directly.

C. Use a Splunk forwarder to collect the input on port 514 and forward the data.

D. Configure syslog to write logs and use a Splunk forwarder to collect the logs.
Suggested answer: D

Explanation:

The best practice for ingesting syslog data from network devices on port 514 is to configure a syslog server (such as rsyslog or syslog-ng) to write the events to files, and then use a Splunk forwarder to monitor and forward those files. This decouples collection from indexing: the syslog server keeps receiving and persisting data even when Splunk components restart, so no data is lost and the indexers are not overloaded. Configuring syslog to send data directly to multiple Splunk indexers does not guarantee reliability, since syslog over UDP provides no acknowledgment or delivery confirmation. Having an indexer listen on port 514 directly ties collection to a single indexer process, with no load balancing, and any restart of that indexer drops incoming events. Likewise, having a forwarder listen on port 514 directly still loses whatever arrives while the forwarder is down, because nothing persists the stream to disk first. For more information, see [Get data from TCP and UDP ports] and [Best practices for syslog data] in the Splunk documentation.
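A minimal sketch of the forwarder side, assuming the syslog server writes one directory per sending device under /var/log/remote (the path, index, and sourcetype are placeholders to adapt):

    # inputs.conf on the universal forwarder
    [monitor:///var/log/remote/*/network.log]
    sourcetype = syslog
    index = network
    # take the fourth path segment (the per-device directory) as the host field
    host_segment = 4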

Which Splunk internal index contains license-related events?

A. _audit

B. _license

C. _internal

D. _introspection
Suggested answer: C

Explanation:

The _internal index contains license-related events, such as license usage, quota, pool, stack, and violation events. These events are logged by the license manager to license_usage.log, which is indexed into _internal. The _audit index contains audit events, such as user actions, configuration changes, and search activity, logged by the audit trail to audit.log. A _license index does not exist in Splunk; license-related events live in _internal. The _introspection index contains platform instrumentation data, such as resource usage, disk objects, search activity, and data ingestion, logged by the introspection generator to files such as resource_usage.log, disk_objects.log, search_activity.log, and data_ingestion.log. For more information, see About Splunk Enterprise logging and [About the _internal index] in the Splunk documentation.
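As an illustration, a common search over those license events sums daily usage per index (type=Usage, b, and idx are the standard fields of license_usage.log):

    index=_internal source=*license_usage.log* type=Usage
    | timechart span=1d sum(b) AS bytes_used BY idx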

Which of the following statements describe a Search Head Cluster (SHC) captain? (Select all that apply.)

A. Is the job scheduler for the entire SHC.

B. Manages alert action suppressions (throttling).

C. Synchronizes the member list with the KV store primary.

D. Replicates the SHC's knowledge bundle to the search peers.
Suggested answer: A, D

Explanation:

The following statements describe a search head cluster captain:

Is the job scheduler for the entire search head cluster. The captain is responsible for scheduling and dispatching the searches that run on the search head cluster, as well as coordinating the search results from the search peers. The captain also ensures that the scheduled searches are balanced across the search head cluster members and that the search concurrency limits are enforced.

Replicates the search head cluster's knowledge bundle to the search peers. The captain is responsible for creating and distributing the knowledge bundle, which contains the knowledge objects required for searches, to the search peers. The captain also ensures that the knowledge bundle is consistent and up to date across the search head cluster and the search peers.

The following statements do not describe a search head cluster captain:

Manages alert action suppressions (throttling). Alert action suppressions are the settings that prevent an alert from triggering too frequently or too many times. These settings are managed by the search head that runs the alert, not by the captain. The captain does not have any special role in managing alert action suppressions.

Synchronizes the member list with the KV store primary. The member list is the set of search head cluster members that are active and available, and the KV store primary is the member responsible for replicating KV store data to the other members. Synchronizing the two is not a captain function: the KV store runs its own replication and elects its own primary, independently of the search head cluster captain, which is chosen by the cluster's RAFT-based captain election. For more information, see [About the captain and the captain election] and [About KV store and search head clusters] in the Splunk documentation.
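To check which member currently holds the captain role, along with the status of every member, the standard CLI command can be run from any cluster member (credentials are placeholders):

    splunk show shcluster-status -auth admin:changeme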
