
Splunk SPLK-2002 Practice Test - Questions Answers, Page 6

Before users can use a KV store, an admin must create a collection. Where is a collection defined?

A. kvstore.conf
B. collection.conf
C. collections.conf
D. kvcollections.conf

Suggested answer: C

Explanation:

A collection is defined in collections.conf, which specifies the collection name and, optionally, its field data types and accelerated fields. KV store server settings, such as the port, are configured in the [kvstore] stanza of server.conf rather than in a kvstore.conf file, and none of the other files listed (kvstore.conf, collection.conf, kvcollections.conf) is a standard Splunk configuration file.
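
For illustration, a minimal collections.conf stanza might look like the following; the collection and field names here are hypothetical:

[asset_inventory]
enforceTypes = true
field.hostname = string
field.last_seen = time
field.cpu_count = number

The collection can then be exposed to searches as a KV store lookup by referencing it from a transforms.conf stanza with external_type = kvstore.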

Which search will show all deployment client messages from the client (UF)?

A. index=_audit component=DC* host=<ds> | stats count by message
B. index=_audit component=DC* host=<uf> | stats count by message
C. index=_internal component=DC* host=<uf> | stats count by message
D. index=_internal component=DS* host=<ds> | stats count by message

Suggested answer: C

Explanation:

The index=_internal component=DC* host=<uf> search shows the deployment client messages from the universal forwarder. The component field indicates the type of Splunk component that generated the message, and the host field indicates the host that sent it. The two index=_audit searches (options A and B) will not return these results, because deployment client activity is logged to the _internal index, not the _audit index. The index=_internal component=DS* host=<ds> search shows deployment server messages from the deployment server itself, not messages from the client.
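
As a concrete sketch (the forwarder host name is a placeholder), the winning search might be run as:

index=_internal component=DC* host=uf01.example.com | stats count by message

This counts the deployment client log messages, such as phone-home and handshake events, reported by that forwarder.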

To optimize the distribution of primary buckets, when does primary rebalancing automatically occur? (Select all that apply.)

A. Rolling restart completes.
B. Master node rejoins the cluster.
C. Captain joins or rejoins cluster.
D. A peer node joins or rejoins the cluster.

Suggested answer: A, B, D

Explanation:

Primary rebalancing automatically occurs when a rolling restart completes, when the master node rejoins the cluster, or when a peer node joins or rejoins the cluster. Each of these events can leave the distribution of primary buckets unbalanced, so the master initiates a rebalancing process to give each peer node roughly the same number of primary buckets. Primary rebalancing does not occur when a captain joins or rejoins a cluster, because the captain is a search head cluster component and plays no role in indexer clustering.
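
A quick way to confirm the cluster's state after one of these events is the cluster status CLI on the master (manager) node, for example:

splunk show cluster-status

The output lists the peers and whether the replication and search factors are met; it is a sanity check on overall cluster health rather than a per-peer count of primary buckets.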

Which search head cluster component is responsible for pushing knowledge bundles to search peers, replicating configuration changes to search head cluster members, and scheduling jobs across the search head cluster?

A. Master
B. Captain
C. Deployer
D. Deployment server

Suggested answer: B

Explanation:

The captain is the search head cluster component that pushes knowledge bundles to search peers, replicates configuration changes to search head cluster members, and schedules jobs across the search head cluster. The captain is elected from among the cluster members and performs these tasks in addition to serving search requests. The master is the indexer cluster component that manages the replication and availability of data across the peer nodes. The deployer is a standalone instance that distributes apps and other configurations to the search head cluster members, and the deployment server distributes apps and configurations to deployment clients such as forwarders.
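
To see which member currently holds captaincy, you can run the following on any search head cluster member:

splunk show shcluster-status

Among other details, the output identifies the current captain and lists the cluster members and their status.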

Configurations from the deployer are merged into which location on the search head cluster member?

A. SPLUNK_HOME/etc/system/local
B. SPLUNK_HOME/etc/apps/APP_HOME/local
C. SPLUNK_HOME/etc/apps/search/default
D. SPLUNK_HOME/etc/apps/APP_HOME/default

Suggested answer: D

Explanation:

Configurations from the deployer are merged into the SPLUNK_HOME/etc/apps/APP_HOME/default directory on each search head cluster member. The deployer distributes apps and other configurations to the members as a configuration bundle built from the contents of the SPLUNK_HOME/etc/shcluster/apps directory on the deployer. When the deployer pushes an app, it merges the app's local and default directories and places the merged result in the app's default directory on the members, so that runtime changes made on the members themselves (which land in the app's local directory) are not overwritten by later pushes. The SPLUNK_HOME/etc/system/local directory holds system-level settings rather than deployed app configurations, and SPLUNK_HOME/etc/apps/search/default holds the default configuration of the search app.
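
A minimal sketch of the push workflow on the deployer; the app name, member URI, and credentials are placeholders:

# Stage the app in the deployer's bundle location
cp -r my_shc_app $SPLUNK_HOME/etc/shcluster/apps/

# Push the configuration bundle to the search head cluster
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme

The -target argument points at any one cluster member; the bundle is then applied across all members of the cluster.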

When Splunk indexes data in a non-clustered environment, what kind of files does it create by default?

A. Index and .tsidx files.
B. Rawdata and index files.
C. Compressed and .tsidx files.
D. Compressed and meta data files.

Suggested answer: B

Explanation:

When Splunk indexes data in a non-clustered environment, it creates rawdata and index files by default. Each bucket contains a rawdata journal, which stores the compressed raw events, and index (.tsidx) files, which store the time-series index that maps terms and timestamps to those events. "Compressed files" and "metadata files" are not the names Splunk uses for these artifacts, and the .tsidx files do not themselves contain the raw data.
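
For illustration, a warm bucket directory for a default index might look roughly like this; the bucket name is made up and the exact file set varies by Splunk version:

$SPLUNK_HOME/var/lib/splunk/defaultdb/db/db_1693526400_1693440000_42/
    rawdata/journal.gz      # the compressed rawdata journal
    *.tsidx                 # one or more time-series index files
    Hosts.data, Sources.data, SourceTypes.data   # per-bucket metadata
    bloomfilter             # lets searches skip buckets that cannot match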

How does IT Service Intelligence (ITSI) impact the planning of a Splunk deployment?

A. ITSI requires a dedicated deployment server.
B. The amount of users using ITSI will not impact performance.
C. ITSI in a Splunk deployment does not require additional hardware resources.
D. Depending on the Key Performance Indicators that are being tracked, additional infrastructure may be needed.

Suggested answer: D

Explanation:

ITSI can impact the planning of a Splunk deployment depending on the Key Performance Indicators (KPIs) that are being tracked. KPIs are metrics that measure the health and performance of IT services and business processes, and ITSI collects, analyzes, and displays KPI data from various data sources in Splunk. Depending on the number, frequency, and complexity of the KPIs, additional infrastructure may be needed to support data ingestion, processing, and visualization. ITSI does not require a dedicated deployment server, the number of concurrent ITSI users does affect performance, and ITSI does require additional hardware resources, such as CPU, memory, and disk space, to run its components and searches.

In the deployment planning process, when should a person identify who gets to see network data?

A. Deployment schedule
B. Topology diagramming
C. Data source inventory
D. Data policy definition

Suggested answer: D

Explanation:

In the deployment planning process, a person should identify who gets to see network data during the data policy definition step. This step defines the data access policies and permissions for different users and roles in Splunk. The deployment schedule step defines the timeline and milestones for the deployment project, the topology diagramming step produces a visual representation of the Splunk architecture and components, and the data source inventory step identifies and documents the data sources and types that will be ingested by Splunk.

The KV store forms its own cluster within a SHC. What is the maximum number of SHC members the KV store cluster can have?

A. 25
B. 50
C. 100
D. Unlimited

Suggested answer: B

Explanation:

The KV store forms its own cluster among the members of a search head cluster, and that cluster supports a maximum of 50 members. The KV store cluster replicates and stores the KV store data across the participating members; because the KV store is backed by MongoDB, whose replica sets are limited to 50 members, 50 is the upper bound. The other values (25, 100, or unlimited) are simply not the documented limit.
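
To inspect the KV store cluster on a search head cluster member, you can run:

splunk show kvstore-status

The output reports the local KV store instance and the other members of the KV store cluster, along with their replication status.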

In search head clustering, which of the following methods can you use to transfer captaincy to a different member? (Select all that apply.)

A. Use the Monitoring Console.
B. Use the Search Head Clustering settings menu from Splunk Web on any member.
C. Run the splunk transfer shcluster-captain command from the current captain.
D. Run the splunk transfer shcluster-captain command from the member you would like to become the captain.

Suggested answer: B, D

Explanation:

In search head clustering, there are two methods to transfer captaincy to a different member. One is to use the Search Head Clustering settings menu in Splunk Web on any member, which lets the user choose the member that should become the new captain. The other is to run the splunk transfer shcluster-captain command from the member that should become the new captain, which requires CLI access to that member. The Monitoring Console is not a method to transfer captaincy, because it does not offer an option to change the captain, and running splunk transfer shcluster-captain from the current captain is not the documented procedure, which is to run the command on the intended new captain.
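
A sketch of the CLI method, run on the member that should take over captaincy; the URI and credentials are placeholders, and -mgmt_uri is assumed here to refer to that same intended captain:

splunk transfer shcluster-captain -mgmt_uri https://sh2.example.com:8089 -auth admin:changeme

After the transfer completes, splunk show shcluster-status on any member should report the new captain.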
