Splunk SPLK-2002 Practice Test - Questions Answers, Page 10

Which of the following statements describe search head clustering? (Select all that apply.)

A. A deployer is required.

B. At least three search heads are needed.

C. Search heads must meet the high-performance reference server requirements.

D. The deployer must have sufficient CPU and network resources to process service requests and push configurations.
Suggested answer: A, B, D

Explanation:

Search head clustering is a Splunk feature that allows a group of search heads to share configurations, apps, and knowledge objects, and to provide high availability and scalability for searching. Search head clustering has the following characteristics:

A deployer is required. A deployer is a Splunk instance that distributes configurations and apps to the members of the search head cluster. The deployer is not a member of the cluster; it is a separate instance that pushes configuration bundles to the cluster members.

At least three search heads are needed. A search head cluster must have at least three search heads to form a quorum and to ensure high availability. If the cluster has fewer than three search heads, it cannot function properly and will enter a degraded mode.

The deployer must have sufficient CPU and network resources to process service requests and push configurations. The deployer is responsible for handling requests from the cluster members and for pushing configurations and apps to them. Therefore, the deployer must have enough CPU and network resources to perform these tasks efficiently and reliably.

Search heads do not need to meet the high-performance reference server requirements, as this is not a mandatory condition for search head clustering. The high-performance reference server requirements are only recommended for optimal performance and scalability of Splunk deployments, but they are not enforced by Splunk.
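As a minimal sketch of the pieces described above, a deployer plus three members might be set up as follows; the host names, ports, label, and secret are assumptions, not values from the question:

# Deployer server.conf (the deployer is a separate instance, not a cluster member)
[shclustering]
pass4SymmKey = <shared_secret>
shcluster_label = shcluster1

# Initialize each of the three (or more) cluster members from the CLI:
splunk init shcluster-config -mgmt_uri https://sh1.example.com:8089 -replication_port 34567 -secret <shared_secret> -shcluster_label shcluster1

# Push the configuration bundle from the deployer to the members:
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:<password>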

Which of the following tasks should the architect perform when building a deployment plan? (Select all that apply.)

A. Use case checklist.

B. Install Splunk apps.

C. Inventory data sources.

D. Review network topology.
Suggested answer: A, C, D

Explanation:

When building a deployment plan, the architect should perform the following tasks:

Use case checklist. A use case checklist is a document that lists the use cases that the deployment will support, along with the data sources, data volume, data retention, data model, dashboards, reports, alerts, and roles and permissions for each use case. A use case checklist helps to define the scope and functionality of the deployment, and to identify the dependencies and requirements of each use case [1].

Inventory data sources. An inventory of data sources is a document that lists the data sources that the deployment will ingest, along with the data type, format, location, collection method, volume, frequency, and owner for each source. An inventory of data sources helps to determine the data ingestion strategy, data parsing and enrichment, data storage and retention, and data security and compliance for the deployment [1].

Review network topology. A review of network topology is a process that examines the network infrastructure and connectivity of the deployment, along with the network bandwidth, latency, security, and monitoring. A review of network topology helps to optimize network performance and reliability, and to identify network risks and mitigations for the deployment [1].

Installing Splunk apps is not a task that the architect should perform when building a deployment plan; it is a task that the administrator performs when implementing the plan. Installing Splunk apps is a technical activity that requires access to the Splunk instances and configurations, which are not available at the planning stage.

Because Splunk indexing is read/write intensive, it is important to select the appropriate disk storage solution for each deployment. Which of the following statements is accurate about disk storage?

A. High performance SAN should never be used.

B. Enable NFS for storing hot and warm buckets.

C. The recommended RAID setup is RAID 10 (1 + 0).

D. Virtualized environments are usually preferred over bare metal for Splunk indexers.
Suggested answer: C

Explanation:

Splunk indexing is read/write intensive, as it involves reading data from various sources, writing data to disk, and reading data from disk for searching and reporting. Therefore, it is important to select the appropriate disk storage solution for each deployment, based on the performance, reliability, and cost requirements. The recommended RAID setup for Splunk indexers is RAID 10 (1 + 0), as it provides the best balance of performance and reliability. RAID 10 combines the advantages of RAID 1 (mirroring) and RAID 0 (striping), which means that it offers both data redundancy and data distribution. RAID 10 can tolerate multiple disk failures, as long as they are not in the same mirrored pair, and it can improve read and write speed because it accesses multiple disks in parallel [2].

High performance SAN (Storage Area Network) can be used for Splunk indexers, but it is not recommended, as it is more expensive and complex than local disks. SAN also introduces additional network latency and dependency, which can affect the performance and availability of Splunk indexers. SAN is more suitable for Splunk search heads, as they are less read/write intensive and more CPU intensive [2].

NFS (Network File System) should not be used for storing hot and warm buckets, as it can cause data corruption, data loss, and performance degradation. NFS is a network-based file system that allows multiple clients to access the same files on a remote server. NFS is not compatible with Splunk index replication and search head clustering, as it can cause conflicts and inconsistencies among the Splunk instances. NFS is also slower and less reliable than local disks, as it depends on network bandwidth and availability. NFS can be used for storing cold and frozen buckets, as they are less frequently accessed and less critical for Splunk operations [2].

Virtualized environments are not usually preferred over bare metal for Splunk indexers, as they can introduce additional overhead and complexity. Virtualized environments can affect the performance and reliability of Splunk indexers, because the virtual machines share physical resources and the network with other guests. They can also complicate monitoring and troubleshooting, as they add another layer of abstraction and configuration. Virtualized environments can be used for Splunk indexers, but they require careful planning and tuning to ensure optimal performance and availability [2].
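As a hedged illustration of the bucket-placement guidance above, an indexes.conf along the following lines keeps hot/warm buckets on fast local storage and moves cold buckets to slower network storage; the index name and paths are assumptions:

# indexes.conf sketch: hot/warm on local RAID 10, cold on NFS-backed storage
[web]
homePath   = /opt/splunk_fast/web/db          # hot/warm buckets on local RAID 10
coldPath   = /mnt/nfs_archive/web/colddb      # cold buckets on slower NFS storage
thawedPath = /mnt/nfs_archive/web/thaweddb    # thawed (restored frozen) buckets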

Which of the following are possible causes of a crash in Splunk? (Select all that apply.)

A. Incorrect ulimit settings.

B. Insufficient disk IOPS.

C. Insufficient memory.

D. Running out of disk space.
Suggested answer: A, B, C, D

Explanation:

All of the options are possible causes of a crash in Splunk. According to the Splunk documentation [1], incorrect ulimit settings can lead to file descriptor exhaustion, which can cause Splunk to crash or hang. Insufficient disk IOPS can also cause Splunk to crash or become unresponsive, as Splunk relies heavily on disk performance [2]. Insufficient memory can cause Splunk to run out of memory and crash, especially when running complex searches or handling large volumes of data [3]. Running out of disk space can cause Splunk to stop indexing data and crash, as Splunk needs enough disk space to store its data and logs [4].

[1] Configure ulimit settings for Splunk Enterprise
[2] Troubleshoot Splunk performance issues
[3] Troubleshoot memory usage
[4] Troubleshoot disk space issues
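For example, on Linux the open-file and process limits for the account running Splunk can be raised in /etc/security/limits.conf; the user name and values below follow commonly cited Splunk guidance and are assumptions to verify against the documentation for your version:

# /etc/security/limits.conf sketch for a user named "splunk"
splunk soft nofile 64000
splunk hard nofile 64000
splunk soft nproc  16000
splunk hard nproc  16000

# Verify the effective limits as the splunk user:
ulimit -n    # open file descriptors
ulimit -u    # max user processes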

Which of the following strongly impacts storage sizing requirements for Enterprise Security?

A. The number of scheduled (correlation) searches.

B. The number of Splunk users configured.

C. The number of source types used in the environment.

D. The number of Data Models accelerated.
Suggested answer: D

Explanation:

Data Model acceleration is a feature that enables faster searches over large data sets by summarizing the raw data into a more efficient format. Data Model acceleration consumes additional disk space, as it stores both the raw data and the summarized data. The amount of disk space required depends on the size and complexity of the Data Model, the retention period of the summarized data, and the compression ratio of the data. According to the Splunk Enterprise Security Planning and Installation Manual, Data Model acceleration is one of the factors that strongly impacts storage sizing requirements for Enterprise Security. The other factors are the volume and type of data sources, the retention policy of the data, and the replication factor and search factor of the index cluster. The number of scheduled (correlation) searches, the number of Splunk users configured, and the number of source types used in the environment are not directly related to storage sizing requirements for Enterprise Security [1].

[1] https://docs.splunk.com/Documentation/ES/6.6.0/Install/Plan#Storage_sizing_requirements
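For reference, acceleration is enabled per data model in datamodels.conf, and the summary range directly affects how much summary data is kept on disk. A minimal sketch, with a hypothetical model name and an assumed retention value:

# datamodels.conf sketch
[Network_Traffic]
acceleration = true
acceleration.earliest_time = -30d   # keep 30 days of summaries; longer ranges consume more disk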

Which of the following is true regarding the migration of an index cluster from single-site to multi-site?

A. Multi-site policies will apply to all data in the indexer cluster.

B. All peer nodes must be running the same version of Splunk.

C. Existing single-site attributes must be removed.

D. Single-site buckets cannot be converted to multi-site buckets.
Suggested answer: C

Explanation:

According to the Splunk documentation [1], when migrating an indexer cluster from single-site to multi-site, you must remove the existing single-site attributes, replication_factor and search_factor, from the manager node's server.conf file and replace them with their multi-site equivalents, site_replication_factor and site_search_factor. You must also restart the affected nodes after changing the attributes. The other options are false because:

Multi-site policies will apply only to data created after the migration, unless you configure the manager node to convert legacy buckets to multi-site [1].

Peer nodes do not all need to run the same version of Splunk, as long as they are compatible with the manager node [2].

Single-site buckets can be converted to multi-site buckets by setting constrain_singlesite_buckets in the manager node's server.conf file to false [1].
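A hedged sketch of what the manager node's server.conf might look like after such a migration; the site names, factor values, and mode spelling are assumptions (older versions use mode = master):

# Manager node server.conf sketch: single-site to multi-site migration
[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:2, total:3
site_search_factor = origin:1, total:2
constrain_singlesite_buckets = false   # allow legacy single-site buckets to replicate across sites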

What information is written to the __introspection log file?

A. File monitor input configurations.

B. File monitor checkpoint offset.

C. User activities and knowledge objects.

D. KV store performance.
Suggested answer: D

Explanation:

The __introspection log file contains data about the impact of the Splunk software on the host system, such as CPU, memory, disk, and network usage, as well as KV store performance [1]. This log file is monitored by default, and its contents are sent to the _introspection index [1]. The other options are not related to the __introspection log file: file monitor input configurations are stored in inputs.conf [2], the file monitor checkpoint offset is stored in the fishbucket [3], and user activities and knowledge objects are stored in the _audit and _internal indexes, respectively [4].
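For example, a search along these lines can chart KV store resource usage from the _introspection index; the component and field names are assumptions based on typical introspection data and may differ by version:

index=_introspection component=KVStoreServerStats
| timechart avg(data.mem.resident) AS avg_resident_memory_mb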

A customer has a four-site indexer cluster. The customer requires five copies of searchable data, with one searchable copy at the origin site and one searchable copy at the disaster recovery site (site4).

Which configuration meets these requirements?

A. site_replication_factor = origin:2, site4:1, total:3

B. site_replication_factor = origin:1, site4:1, total:5

C. site_search_factor = origin:2, site4:1, total:3

D. site_search_factor = origin:1, site4:1, total:5
Suggested answer: B

Explanation:

The correct configuration to meet the customer's requirements is site_replication_factor = origin:1, site4:1, total:5. This means that each bucket will have one copy at the origin site, one copy at the disaster recovery site (site4), and the remaining three copies distributed across the other sites, for a total of five copies, as required. The site_replication_factor determines how many copies of each bucket are stored across the sites in a multisite indexer cluster [1]. The site_search_factor determines how many copies of each bucket are searchable across the sites [2]. Therefore, option B is the correct answer, and options A, C, and D are incorrect.

[1] Configure the site replication factor
[2] Configure the site search factor
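Put into context, the chosen answer would appear on the manager node roughly as follows; the site list and the accompanying site_search_factor line are assumptions added for completeness, not part of the answer itself:

# Manager node server.conf sketch for the four-site cluster in the question
[clustering]
mode = manager
multisite = true
available_sites = site1,site2,site3,site4
site_replication_factor = origin:1, site4:1, total:5   # answer B: 1 copy at origin, 1 at site4, 5 total
site_search_factor = origin:1, site4:1, total:5        # assumed: makes all five copies searchable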

Which of the following server.conf stanzas indicates that the Indexer Discovery feature has not been fully configured (restart pending) on the Master Node?

[Options A through D show server.conf stanza screenshots that are not reproduced here.]

A. Option A

B. Option B

C. Option C

D. Option D
Suggested answer: A

Explanation:

The Indexer Discovery feature enables forwarders to connect dynamically to the available peer nodes in an indexer cluster. To use this feature, the manager node must be configured with the [indexer_discovery] stanza and a pass4SymmKey value, and the forwarders must be configured with the same pass4SymmKey value and the master_uri of the manager node. Splunk encrypts the pass4SymmKey value in server.conf automatically when the instance restarts. Therefore, option A indicates that the Indexer Discovery feature has not been fully configured on the manager node: its pass4SymmKey value is still in plain text, which means a restart is pending. The other options do not show this condition. Option B shows the configuration of a forwarder that is part of an indexer cluster. Option C shows the configuration of a manager node that is part of an indexer cluster. Option D shows an invalid configuration of the [indexer_discovery] stanza, because the pass4SymmKey value is not encrypted and does not match the forwarders' pass4SymmKey value [1][2].

[1] https://docs.splunk.com/Documentation/Splunk/9.1.2/Indexer/indexerdiscovery
[2] https://docs.splunk.com/Documentation/Splunk/9.1.2/Security/Secureyourconfigurationfiles#Encrypt_the_pass4SymmKey_setting_in_server.conf
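A minimal sketch of a working Indexer Discovery configuration, assuming hypothetical host names; the manager side goes in server.conf and the forwarder side in outputs.conf:

# Manager node: server.conf
[indexer_discovery]
pass4SymmKey = <shared_secret>   # stored in plain text until the next restart encrypts it

# Forwarder: outputs.conf
[indexer_discovery:cluster1]
pass4SymmKey = <shared_secret>
master_uri = https://manager.example.com:8089

[tcpout:group1]
indexerDiscovery = cluster1

[tcpout]
defaultGroup = group1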

A customer currently has many deployment clients being managed by a single, dedicated deployment server. The customer plans to double the number of clients.

What could be done to minimize performance issues?

A. Modify deploymentclient.conf to change from a Pull to Push mechanism.

B. Reduce the number of apps in the Manager Node repository.

C. Increase the current deployment client phone home interval.

D. Decrease the current deployment client phone home interval.
Suggested answer: C

Explanation:

According to the Splunk documentation [1], increasing the current deployment client phone home interval minimizes performance issues by reducing the frequency of communication between the clients and the deployment server, which also reduces network traffic and load on the deployment server. The other options are false because:

Modifying deploymentclient.conf to change from a Pull to a Push mechanism is not possible, as Splunk does not support a Push mechanism for the deployment server [2].

Reducing the number of apps in the Manager Node repository will not affect the performance of the deployment server, as apps are only downloaded when the configuration changes or a new app is added [3].

Decreasing the current deployment client phone home interval would worsen performance, as it increases the frequency of communication between the clients and the deployment server, resulting in more network traffic and load [1].
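For instance, the phone-home interval is controlled by phoneHomeIntervalInSecs in deploymentclient.conf; the target URI and the specific interval below are assumptions:

# deploymentclient.conf sketch: raise the phone-home interval to reduce deployment server load
[deployment-client]
phoneHomeIntervalInSecs = 600   # default is 60; a higher value means fewer check-ins

[target-broker:deploymentServer]
targetUri = deploy.example.com:8089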
