
SPLK-2002: Splunk Enterprise Certified Architect

Vendor: Splunk

Exam Questions: 160

Exam Number: SPLK-2002

Exam Name: Splunk Enterprise Certified Architect

Length of test: 90 mins

Exam Format: Multiple-choice questions.

Exam Language: English

Number of questions in the actual exam: 85 questions

Passing Score: 70%

Topics Covered:

  1. Splunk Deployment Methodology: Best practices for planning, data collection, and sizing a distributed deployment.

  2. Indexer and Search Head Clustering: Managing and troubleshooting standard deployments.

  3. Data Collection and Indexing: Handling data sources and ensuring efficient data collection and indexing.

  4. Search and Reporting: Performing searches, utilizing field transformations, and creating knowledge objects.

  5. Troubleshooting: Identifying and resolving issues in a Splunk Enterprise deployment.

This study guide should help you understand what to expect on the SPLK-2002 exam. It summarizes the topics the exam might cover and links to additional resources, so you can focus your studies as you prepare.

Related questions

Which of the following statements describe search head clustering? (Select all that apply.)

A. A deployer is required.

B. At least three search heads are needed.

C. Search heads must meet the high-performance reference server requirements.

D. The deployer must have sufficient CPU and network resources to process service requests and push configurations.

Suggested answer: A, B, D

Explanation:

Search head clustering is a Splunk feature that allows a group of search heads to share configurations, apps, and knowledge objects, and to provide high availability and scalability for searching. Search head clustering has the following characteristics:

A deployer is required. A deployer is a Splunk instance that distributes the configurations and apps to the members of the search head cluster. The deployer is not a member of the cluster, but a separate instance that pushes configuration bundles to the cluster members.

At least three search heads are needed. A search head cluster must have at least three search heads so that a majority of members (a quorum) can elect a captain and provide high availability. With fewer than three search heads, the cluster cannot maintain a quorum after a member failure and enters a degraded mode.

The deployer must have sufficient CPU and network resources to process service requests and push configurations. The deployer handles service requests from the cluster members and pushes configurations and apps to them, so it must have enough CPU and network resources to perform these tasks efficiently and reliably.

Search heads do not need to meet the high-performance reference server requirements, as this is not a mandatory condition for search head clustering. The high-performance reference server requirements are only recommended for optimal performance and scalability of Splunk deployments, but they are not enforced by Splunk.
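To make the setup concrete, here is a minimal sketch of initializing cluster members against a deployer, assuming placeholder host names, port numbers, and secret (the flags shown are standard splunk CLI options):

    # On each search head cluster member (placeholder hosts and credentials):
    splunk init shcluster-config -auth admin:changeme \
        -mgmt_uri https://sh1.example.com:8089 \
        -replication_port 9887 \
        -conf_deploy_fetch_url https://deployer.example.com:8089 \
        -secret shcluster_secret \
        -shcluster_label shcluster1

    # On one member only, after at least three members are initialized:
    splunk bootstrap shcluster-captain \
        -servers_list "https://sh1.example.com:8089,https://sh2.example.com:8089,https://sh3.example.com:8089" \
        -auth admin:changeme

Note how the three-member minimum from the explanation above maps onto the bootstrap step, where servers_list names all the initial members.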


Which component in the splunkd.log will log information related to bad event breaking?

A. Audittrail

B. EventBreaking

C. IndexingPipeline

D. AggregatorMiningProcessor

Suggested answer: D

Explanation:

The AggregatorMiningProcessor component in the splunkd.log file will log information related to bad event breaking. The AggregatorMiningProcessor is responsible for breaking the incoming data into events and applying the props.conf settings. If there is a problem with the event breaking, such as incorrect timestamps, missing events, or merged events, the AggregatorMiningProcessor will log the error or warning messages in the splunkd.log file. The Audittrail component logs information about audit events, such as user actions, configuration changes, and search activity. The EventBreaking component logs information about the event breaking rules, such as the LINE_BREAKER and SHOULD_LINEMERGE settings. The IndexingPipeline component logs information about the indexing pipeline, such as the parsing, routing, and indexing phases. For more information, see About Splunk Enterprise logging and Configure event line breaking in the Splunk documentation.
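To see such messages in practice, a search along these lines over the internal index surfaces AggregatorMiningProcessor warnings and errors (a sketch; adjust the time range and filters to your environment):

    index=_internal sourcetype=splunkd component=AggregatorMiningProcessor (log_level=WARN OR log_level=ERROR)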


What does the deployer do in a Search Head Cluster (SHC)? (Select all that apply.)

A. Distributes apps to SHC members.

B. Bootstraps a clean Splunk install for a SHC.

C. Distributes non-search-related and manual configuration file changes.

D. Distributes runtime knowledge object changes made by users across the SHC.

Suggested answer: A, C

Explanation:

The deployer distributes apps and non-search-related, manual configuration file changes to the search head cluster members. The deployer does not bootstrap a clean Splunk install for a search head cluster; members are initialized and the captain is bootstrapped separately through the CLI. The deployer also does not distribute runtime knowledge object changes made by users; those changes are replicated automatically among the members, coordinated by the captain. For more information, see Use the deployer to distribute apps and configuration updates in the Splunk documentation.
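As a sketch of the deployer workflow (the host name and credentials below are placeholders), apps are staged on the deployer and then pushed to the cluster:

    # On the deployer: stage each app under
    #   $SPLUNK_HOME/etc/shcluster/apps/<app_name>
    # then push the configuration bundle to any one cluster member:
    splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme

The target can be any member; the captain then coordinates distribution of the bundle to the rest of the cluster.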


Which of the following strongly impacts storage sizing requirements for Enterprise Security?

A. The number of scheduled (correlation) searches.

B. The number of Splunk users configured.

C. The number of source types used in the environment.

D. The number of Data Models accelerated.

Suggested answer: D

Explanation:

Data Model acceleration is a feature that enables faster searches over large data sets by summarizing the raw data into a more efficient format. Data Model acceleration consumes additional disk space, as it stores the summarized data alongside the raw data. The amount of disk space required depends on the size and complexity of the Data Model, the retention period of the summarized data, and the compression ratio of the data. According to the Splunk Enterprise Security Planning and Installation Manual, Data Model acceleration is one of the factors that strongly impacts storage sizing requirements for Enterprise Security; the other factors are the volume and type of data sources, the retention policy of the data, and the replication factor and search factor of the index cluster. The number of scheduled (correlation) searches, the number of Splunk users configured, and the number of source types used in the environment are not directly related to storage sizing requirements for Enterprise Security. [1]

1: https://docs.splunk.com/Documentation/ES/6.6.0/Install/Plan#Storage_sizing_requirements
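For illustration, data model acceleration is controlled per data model in datamodels.conf, and the summary range bounds how much summarized data is retained (the data model name and range below are illustrative, not a recommendation):

    # datamodels.conf on the search head (illustrative values):
    [Network_Traffic]
    acceleration = true
    # Only summarize events from the last 30 days, which bounds
    # the tsidx summary footprint on the indexers:
    acceleration.earliest_time = -30d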


When should a Universal Forwarder be used instead of a Heavy Forwarder?


The master node distributes configuration bundles to peer nodes. In which directory do peer nodes receive the bundles?

A. apps

B. deployment-apps

C. slave-apps

D. master-apps

Suggested answer: C

Explanation:

The master node distributes configuration bundles to peer nodes in the slave-apps directory under $SPLUNK_HOME/etc. The configuration bundle method is the only supported method for managing common configurations and app deployment across the set of peers, and it ensures that all peers use the same versions of these files. Bundles typically contain a subset of files (configuration files and assets) from $SPLUNK_HOME/etc/system, $SPLUNK_HOME/etc/apps, and $SPLUNK_HOME/etc/users. By comparison, when a search head distributes its knowledge bundle, peers by default receive nearly the entire contents of the search head's apps.
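As a sketch of that workflow (the paths shown are the defaults), the master stages the bundle under master-apps and pushes it out; peers receive it under slave-apps:

    # On the master node: stage configurations under
    #   $SPLUNK_HOME/etc/master-apps/<app_name>
    # then validate and distribute the bundle:
    splunk validate cluster-bundle
    splunk apply cluster-bundle --answer-yes
    # Peers receive the bundle under $SPLUNK_HOME/etc/slave-apps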


Which of the following server.conf stanzas indicates the Indexer Discovery feature has not been fully configured (restart pending) on the Master Node?

(Options A through D were presented as server.conf screenshots and are not reproduced here.)

A. Option A

B. Option B

C. Option C

D. Option D

Suggested answer: A

Explanation:

The Indexer Discovery feature enables forwarders to dynamically connect to the available peer nodes in an indexer cluster. To use this feature, the manager node must be configured with the [indexer_discovery] stanza and a pass4SymmKey value, and the forwarders must be configured with the same pass4SymmKey value and the master_uri of the manager node. Splunk encrypts the pass4SymmKey value in server.conf automatically when the instance restarts, so a value that is still in plaintext indicates that the required restart has not yet happened. Option A therefore indicates that the Indexer Discovery feature has not been fully configured (restart pending) on the manager node, because its pass4SymmKey value is not yet encrypted. The other options do not show this state: option B shows the configuration of a forwarder that is part of an indexer cluster, option C shows the configuration of a manager node that is part of an indexer cluster, and option D shows an invalid [indexer_discovery] stanza whose pass4SymmKey value does not match the forwarders' pass4SymmKey value. [1] [2]

1: https://docs.splunk.com/Documentation/Splunk/9.1.2/Indexer/indexerdiscovery
2: https://docs.splunk.com/Documentation/Splunk/9.1.2/Security/Secureyourconfigurationfiles#Encrypt_the_pass4SymmKey_setting_in_server.conf
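A minimal configuration sketch, assuming placeholder host names and secret, looks like the following; the manager-side pass4SymmKey stays in plaintext until splunkd restarts and encrypts it:

    # server.conf on the manager node:
    [indexer_discovery]
    # Plaintext until the pending restart, after which splunkd encrypts it:
    pass4SymmKey = my_secret

    # outputs.conf on each forwarder:
    [indexer_discovery:manager1]
    pass4SymmKey = my_secret
    master_uri = https://manager.example.com:8089

    [tcpout:peer_nodes]
    indexerDiscovery = manager1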


A customer plans to ingest 600 GB of data per day into Splunk. They will have six concurrent users, and they also want high data availability and high search performance. The customer is concerned about cost and wants to spend the minimum amount on the hardware for Splunk. How many indexers are recommended for this deployment?

A. Two indexers not in a cluster, assuming users run many long searches.

B. Three indexers not in a cluster, assuming a long data retention period.

C. Two indexers clustered, assuming high availability is the greatest priority.

D. Two indexers clustered, assuming a high volume of saved/scheduled searches.

Suggested answer: C

Explanation:

Two clustered indexers are the recommended deployment for a customer who plans to ingest 600 GB of data per day into Splunk, has six concurrent users, and wants high data availability and high search performance. This deployment provides enough indexing capacity and search concurrency for the customer's needs, while replication across the cluster keeps the data available and searchable if one indexer fails. The customer also keeps hardware costs down by using only two indexers. Two indexers not in a cluster will not provide high availability, as there is no data replication or failover. Three indexers not in a cluster will provide more indexing capacity and search concurrency, but at a higher hardware cost and still without high availability. The customer's data retention period, number of long searches, and volume of saved/scheduled searches are not the deciding factors for the number of indexers here. For more information, see Reference hardware and About indexer clusters and index replication in the Splunk documentation.
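As an illustration of the clustered option (the secret below is a placeholder), a two-peer cluster that keeps a searchable copy of every bucket on both indexers uses a replication factor and search factor of 2 on the manager:

    # server.conf on the cluster manager (illustrative):
    [clustering]
    mode = master
    replication_factor = 2
    search_factor = 2
    pass4SymmKey = cluster_secret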


Search dashboards in the Monitoring Console indicate that the distributed deployment is approaching its capacity. Which of the following options will provide the most search performance improvement?

A. Replace the indexer storage with solid state drives (SSD).

B. Add more search heads and redistribute users based on the search type.

C. Look for slow searches and reschedule them to run during an off-peak time.

D. Add more search peers and make sure forwarders distribute data evenly across all indexers.

Suggested answer: D

Explanation:

Adding more search peers and making sure forwarders distribute data evenly across all indexers will provide the most search performance improvement when the distributed deployment is approaching its capacity. Adding more search peers increases search parallelism and reduces the load on each indexer. Distributing data evenly across all indexers ensures that the search workload is balanced and no indexer becomes a bottleneck. Replacing the indexer storage with SSDs would improve search performance, but it is a costly and time-consuming option. Adding more search heads will not improve search performance if the indexers are the bottleneck. Rescheduling slow searches to run during an off-peak time reduces search contention, but it does not make each individual search faster. For more information, see Scale your indexer cluster and Distribute data across your indexers in the Splunk documentation.
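On the forwarder side, even data distribution comes from listing every indexer in one output group so that auto load balancing rotates across them. A sketch with placeholder hosts (autoLBFrequency is shown at its default of 30 seconds):

    # outputs.conf on each forwarder:
    [tcpout:indexer_group]
    server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
    # How often (in seconds) the forwarder switches to another indexer:
    autoLBFrequency = 30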


Which of the following will cause the greatest reduction in disk size requirements for a cluster of N indexers running Splunk Enterprise Security?

A. Setting the cluster search factor to N-1.

B. Increasing the number of buckets per index.

C. Decreasing the data model acceleration range.

D. Setting the cluster replication factor to N-1.

Suggested answer: C

Explanation:

Decreasing the data model acceleration range will reduce the disk size requirements for a cluster of indexers running Splunk Enterprise Security. Data model acceleration creates tsidx files that consume disk space on the indexers, so reducing the acceleration range limits the amount of data that is summarized and thus saves disk space. Setting the cluster search factor or replication factor to N-1 will not reduce the disk size requirements, but rather increase the risk of data loss. Increasing the number of buckets per index will also increase the disk size requirements, as each bucket has a minimum size. For more information, see Data model acceleration and Bucket size in the Splunk documentation.
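Concretely, the acceleration range corresponds to the acceleration.earliest_time setting in datamodels.conf; shrinking it reduces how much summarized data the indexers retain. A sketch with illustrative values:

    # datamodels.conf (illustrative): cutting the summary range from 90 to 30 days
    [Authentication]
    acceleration = true
    # was: acceleration.earliest_time = -90d
    acceleration.earliest_time = -30d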
