Splunk SPLK-2002 Practice Test - Questions Answers, Page 5

List of questions
Question 41

A customer plans to ingest 600 GB of data per day into Splunk. They will have six concurrent users, and they also want high data availability and high search performance. The customer is concerned about cost and wants to spend the minimum amount on the hardware for Splunk. How many indexers are recommended for this deployment?
Two indexers not in a cluster, assuming users run many long searches.
Three indexers not in a cluster, assuming a long data retention period.
Two indexers clustered, assuming high availability is the greatest priority.
Two indexers clustered, assuming a high volume of saved/scheduled searches.
Two indexers clustered, assuming high availability is the greatest priority, is the recommended deployment. With 600 GB of ingest per day and six concurrent users, two indexers provide sufficient indexing capacity and search concurrency, and clustering them adds data replication and failover, which satisfies the high-availability requirement while keeping hardware cost to a minimum. Two indexers that are not clustered cannot provide high availability, because there is no replication or failover. Three indexers that are not clustered add capacity, but also add hardware cost and still provide no availability guarantee. The assumptions in the other options (many long searches, a long retention period, a high volume of saved/scheduled searches) would argue for more hardware, not for giving up availability, and do not change the recommendation. For more information, see Reference hardware and About indexer clusters and index replication in the Splunk documentation.
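As a back-of-envelope check on the sizing reasoning above, the following sketch splits the daily ingest across the two clustered indexers. The even split and the replication factor of 2 are assumptions for illustration, not figures from the question:

```python
# Back-of-envelope sizing sketch. The even ingest split and a replication
# factor of 2 are illustrative assumptions, not figures from the question.
daily_ingest_gb = 600
indexers = 2
replication_factor = 2

ingest_per_indexer_gb = daily_ingest_gb / indexers          # raw ingest handled by each node
replicated_write_gb = daily_ingest_gb * replication_factor  # total copies written per day, pre-compression
print(ingest_per_indexer_gb, replicated_write_gb)  # 300.0 1200
```

The point of the arithmetic: clustering doubles the write volume (each event is stored on two peers), which is the cost of the availability guarantee the customer asked for.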
Question 42

To reduce the captain's workload in a search head cluster, which setting will prevent scheduled searches from running on the captain?
adhoc_searchhead = true (on all members)
adhoc_searchhead = true (on the current captain)
captain_is_adhoc_searchhead = true (on all members)
captain_is_adhoc_searchhead = true (on the current captain)
To reduce the captain's workload in a search head cluster, set captain_is_adhoc_searchhead = true on all members. This attribute makes whichever member currently holds the captaincy behave as an ad hoc search head: the captain runs only ad hoc searches initiated by users, and the scheduler assigns scheduled searches to the other members. Because the captaincy can transfer to any member after an election, the attribute must be set identically on all members, not only on the current captain, so the behavior persists across captain changes. The adhoc_searchhead = true attribute is different: it marks a specific member as ad hoc only, so setting it on all members would leave no member able to run scheduled searches, and setting it only on the current captain would stop following the captain when the captaincy moves. For more information, see Configure the captain as an ad hoc search head in the Splunk documentation.
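Per the Splunk documentation, the attribute lives in the [shclustering] stanza of server.conf and should be set the same way on every member, since any member can become captain. A minimal sketch:

```
# server.conf on every search head cluster member (sketch; other
# [shclustering] settings such as mgmt_uri are omitted here).
[shclustering]
captain_is_adhoc_searchhead = true
```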
Question 43

Where does the Splunk deployer send apps by default?
etc/slave-apps/<app-name>/default
etc/deploy-apps/<app-name>/default
etc/apps/<appname>/default
etc/shcluster/<app-name>/default
By default, the Splunk deployer sends apps to the search head cluster members' etc/apps/&lt;appname&gt;/default directories. The deployer is the Splunk component that distributes apps and configuration updates to the members of a search head cluster. The administrator stages the configuration bundle under $SPLUNK_HOME/etc/shcluster/apps on the deployer itself; when the bundle is pushed, the deployer moves any settings in an app's local directory into its default directory on the members, so the distributed configurations land under etc/apps/&lt;appname&gt;/default and do not override the members' runtime changes. The other paths are incorrect: etc/slave-apps is where an indexer cluster manager distributes apps to its peer nodes, etc/shcluster is the staging location on the deployer (not a destination on the members), and etc/deploy-apps is not a Splunk directory. For more information, see Use the deployer to distribute apps and configuration updates in the Splunk documentation.
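The staging-versus-destination relationship can be sketched as follows (the app name my_app is a hypothetical example):

```
# On the deployer (staging area only; nothing runs from here):
$SPLUNK_HOME/etc/shcluster/apps/my_app/default/props.conf

# After the push, on each search head cluster member
# (local settings from the bundle are merged into default):
$SPLUNK_HOME/etc/apps/my_app/default/props.conf
```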
Question 44

If .delta replication fails during knowledge bundle replication, what is the fall-back method for Splunk?
Restart splunkd.
.delta replication.
.bundle replication.
Restart mongod.
If .delta replication fails during knowledge bundle replication, Splunk falls back to .bundle replication. Knowledge bundle replication is the process of distributing knowledge objects, such as lookups, macros, and field extractions, from the search head tier to the search peers (indexers). Splunk uses two methods: .delta replication, the default and preferred method, replicates only the changes since the last bundle, which reduces network traffic and disk usage; .bundle replication replicates the entire knowledge bundle regardless of what changed. If .delta replication fails, for example because of corrupted files or network errors, Splunk automatically switches to .bundle replication, which guarantees that the search peers receive a complete, consistent bundle at the cost of more bandwidth and disk space. The other options are not valid fall-back methods. Restarting splunkd restarts the Splunk daemon; it is not a replication method and does not guarantee the knowledge objects are synchronized. .delta replication is the primary method, which has already failed in this scenario. Restarting mongod restarts the KV store daemon, which relates to KV store replication, a separate process from knowledge bundle replication. For more information, see How knowledge bundle replication works, Start and stop Splunk Enterprise, and Restart the KV store in the Splunk documentation.
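Conceptually, the delta-then-full fall-back described above resembles the sketch below. This is purely illustrative logic, not Splunk's implementation; all function and peer names are invented:

```python
# Illustrative sketch of delta-then-full fall-back replication.
# All names here are invented; this is not Splunk's actual code.
class ReplicationError(Exception):
    pass

def apply_delta(peer, delta):
    # Stand-in for shipping only the changed files; fails for one peer
    # to demonstrate the fall-back path.
    if peer == "peer2":
        raise ReplicationError("corrupted delta")
    return f"{peer}: delta applied ({delta})"

def apply_full(peer, bundle):
    # Stand-in for shipping the entire knowledge bundle.
    return f"{peer}: full bundle applied ({bundle})"

def replicate_knowledge_bundle(peers, full_bundle, delta):
    results = {}
    for peer in peers:
        try:
            results[peer] = apply_delta(peer, delta)       # preferred: .delta
        except ReplicationError:
            results[peer] = apply_full(peer, full_bundle)  # fall-back: .bundle
    return results

results = replicate_knowledge_bundle(["peer1", "peer2"], "bundle-v2", "delta-v1-to-v2")
print(results["peer2"])  # peer2: full bundle applied (bundle-v2)
```

The key property the sketch captures: a failed delta never leaves a peer without a usable bundle, because the full bundle is always a valid substitute.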
Question 45

In splunkd.log events written to the _internal index, which field identifies the specific log channel?
component
source
sourcetype
channel
In splunkd.log events written to the _internal index, the field that identifies the specific log channel is component. Each splunkd.log line records a timestamp, a log level, and a component name (for example, Metrics, TcpOutputProc, or TailReader) identifying the internal subsystem, or log channel, that emitted the event; Splunk extracts this value into the component field at search time, so internal events can be filtered by channel. The same channel names are used in $SPLUNK_HOME/etc/log.cfg to set per-channel logging levels. The source and sourcetype fields identify the log file and its format (splunkd), not the individual channel, and channel is not a field extracted from splunkd.log events.
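The extraction can be sketched with a regular expression over a splunkd.log-style line. The sample line below is made up for illustration; the pattern only approximates Splunk's own field extraction:

```python
import re

# Illustrative splunkd.log-style line (timestamp, level, component, message).
# The line is a made-up example, not from a real deployment.
line = "08-15-2024 12:00:01.234 +0000 INFO  Metrics - group=thruput, name=index_thruput, instantaneous_kbps=42.0"

# Roughly mirrors the extraction on _internal events: the token after the
# log level becomes the `component` field.
pattern = re.compile(
    r"^(?P<timestamp>\S+ \S+ \S+)\s+"
    r"(?P<log_level>[A-Z]+)\s+"
    r"(?P<component>\S+)\s+-\s+"
    r"(?P<message>.*)$"
)

match = pattern.match(line)
print(match.group("component"))  # Metrics
```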
Question 46

At which default interval does metrics.log generate a periodic report regarding license utilization?
10 seconds
30 seconds
60 seconds
300 seconds
The default interval at which metrics.log generates a periodic report regarding license utilization is 60 seconds. metrics.log, which is indexed into _internal, contains periodic reports on indexing throughput, queue sizes, license utilization, and other internal metrics; per the Splunk documentation, the license utilization report is produced every 60 seconds by default. The other intervals (10 seconds, 30 seconds, and 300 seconds) are not the default for this report. For more information, see About metrics.log in the Splunk documentation.
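metrics.log lines carry their payload as comma-separated key=value pairs, which makes them easy to parse outside Splunk. The sample line below is illustrative, not taken from a real deployment:

```python
# Parse the key=value payload of a metrics.log-style line into a dict.
# The sample line is illustrative, not from a real deployment.
sample = ("08-15-2024 12:00:01.234 +0000 INFO  Metrics - "
          "group=thruput, name=index_thruput, instantaneous_kbps=12.5, average_kbps=10.0")

payload = sample.split(" - ", 1)[1]                               # drop the log preamble
fields = dict(pair.split("=", 1) for pair in payload.split(", ")) # key=value pairs
print(fields["group"], fields["instantaneous_kbps"])  # thruput 12.5
```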
Question 47

Which of the following is a good practice for a search head cluster deployer?
The deployer only distributes configurations to search head cluster members when they "phone home".
The deployer must be used to distribute non-replicable configurations to search head cluster members.
The deployer must distribute configurations to search head cluster members to be valid configurations.
The deployer only distributes configurations to search head cluster members with splunk apply shcluster-bundle.
A good practice for a search head cluster deployer: the deployer must be used to distribute non-replicable configurations to search head cluster members. Non-replicable configurations are those that the cluster's own configuration replication does not synchronize among members, such as apps and settings in default directories; the deployer is the component responsible for pushing these to all members so that the configuration stays identical across the cluster. The deployer does not wait for members to "phone home"; that is how a deployment server distributes apps to deployment clients, not how the deployer works. Configurations distributed outside the deployer are not inherently invalid, so the third statement is wrong. And although splunk apply shcluster-bundle is the command that triggers a push, saying the deployer "only" distributes configurations that way overstates it: a member also pulls the latest bundle from the deployer when it joins or rejoins the cluster. For more information, see Use the deployer to distribute apps and configuration updates in the Splunk documentation.
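The push itself is triggered from the deployer with splunk apply shcluster-bundle; the hostname and credentials below are placeholders:

```shell
# Run on the deployer. The -target URI points at any one member,
# which then coordinates distribution to the rest of the cluster.
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme
```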
Question 48

A new Splunk customer is using syslog to collect data from their network devices on port 514. What is the best practice for ingesting this data into Splunk?
Configure syslog to send the data to multiple Splunk indexers.
Use a Splunk indexer to collect a network input on port 514 directly.
Use a Splunk forwarder to collect the input on port 514 and forward the data.
Configure syslog to write logs and use a Splunk forwarder to collect the logs.
The best practice for ingesting syslog data from network devices on port 514 into Splunk is to configure a syslog server (such as syslog-ng or rsyslog) to write the events to files, and then use a Splunk forwarder to monitor those files and forward the data. This decouples collection from ingestion: the syslog server keeps receiving and persisting events even while Splunk is restarting or temporarily unreachable, and the forwarder provides buffering and load balancing toward the indexers. Configuring syslog to send the data directly to multiple Splunk indexers does not guarantee reliability, because syslog over UDP provides no acknowledgment or delivery confirmation, and events sent while an indexer is down are lost. Having an indexer listen on port 514 directly ties collection to a single instance and drops data whenever that instance restarts. Having a forwarder listen on port 514 is better, but still loses events whenever the forwarder restarts, since nothing upstream persists them. For more information, see Get data from TCP and UDP ports and the Splunk best practices for syslog data.
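On the forwarder side, this pattern typically comes down to a monitor input over the files the syslog server writes. The paths below are a hedged sketch, assuming the syslog server writes one directory per sending device:

```
# inputs.conf on the forwarder (paths are illustrative assumptions).
# Assumes the syslog server writes /var/log/remote-syslog/<device-ip>/messages.log
[monitor:///var/log/remote-syslog/*/messages.log]
sourcetype = syslog
host_segment = 4
disabled = false
```

host_segment = 4 tells Splunk to take the fourth path segment (the device directory) as the host field, so events are attributed to the originating device rather than to the syslog server.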
Question 49

Which Splunk internal index contains license-related events?
_audit
_license
_internal
_introspection
The _internal index contains license-related events, such as license usage, quota, pool, stack, and violation events. These events are logged by the license manager in the license_usage.log file, which is indexed into _internal. The _audit index contains audit events, such as user actions, configuration changes, and search activity; these are logged by the audit trail in audit.log. The _license index does not exist in Splunk, as license-related events are stored in the _internal index. The _introspection index contains platform instrumentation data, such as resource usage, disk objects, search activity, and data ingestion, logged by the introspection generator in files such as resource_usage.log and disk_objects.log. For more information, see About Splunk Enterprise logging and About the _internal index in the Splunk documentation.
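As a quick illustration, per-index license consumption can be summarized from _internal with a search along these lines (field names b, idx, and type=Usage are how license_usage.log records usage events):

```
index=_internal source=*license_usage.log type=Usage
| stats sum(b) AS bytes BY idx
```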
Question 50

Which of the following statements describe a Search Head Cluster (SHC) captain? (Select all that apply.)
Is the job scheduler for the entire SHC.
Manages alert action suppressions (throttling).
Synchronizes the member list with the KV store primary.
Replicates the SHC's knowledge bundle to the search peers.
The following statements describe a search head cluster captain:
Is the job scheduler for the entire search head cluster. The captain is responsible for scheduling and dispatching the searches that run on the search head cluster, as well as coordinating the search results from the search peers. The captain also ensures that the scheduled searches are balanced across the search head cluster members and that the search concurrency limits are enforced.
Replicates the search head cluster's knowledge bundle to the search peers. The captain is responsible for creating and distributing the knowledge bundle to the search peers, which contains the knowledge objects that are required for the searches. The captain also ensures that the knowledge bundle is consistent and up-to-date across the search head cluster and the search peers. The following statements do not describe a search head cluster captain:
Manages alert action suppressions (throttling). Alert action suppressions are the settings that prevent an alert from triggering too frequently or too many times. These settings are managed by the search head that runs the alert, not by the captain. The captain does not have any special role in managing alert action suppressions.
Synchronizes the member list with the KV store primary. This is not a captain function. The KV store runs its own replication among the search head cluster members, with its own primary, and electing that primary and replicating KV store data are handled by the KV store's internal mechanism, separately from the captain's duties. (The captain itself is chosen through a RAFT-based election among the members, but that election concerns the captaincy, not the KV store.) For more information, see About the captain and the captain election and About KV store and search head clusters in the Splunk documentation.
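To see which member currently holds the captaincy, the cluster status command can be run from any member; the credentials below are placeholders:

```shell
# Run on any search head cluster member; -auth values are placeholders.
splunk show shcluster-status -auth admin:changeme
```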