Splunk SPLK-2002 Practice Test - Questions Answers, Page 10

Question 91

Which of the following statements describe search head clustering? (Select all that apply.)
A deployer is required.
At least three search heads are needed.
Search heads must meet the high-performance reference server requirements.
The deployer must have sufficient CPU and network resources to process service requests and push configurations.
Search head clustering is a Splunk feature that allows a group of search heads to share configurations, apps, and knowledge objects, and to provide high availability and scalability for searching. Search head clustering has the following characteristics:
A deployer is required. The deployer is a Splunk instance that distributes configuration bundles and apps to the members of the search head cluster. The deployer is not a member of the cluster itself; it is a separate instance that communicates directly with the cluster members.
At least three search heads are needed. A search head cluster must have at least three members to form a quorum and ensure high availability: captain election requires a majority of members, so a three-member cluster can survive the loss of one search head. With fewer than three search heads, the cluster cannot function properly and enters a degraded mode.
The deployer must have sufficient CPU and network resources to process service requests and push configurations. The deployer handles requests from the cluster members and pushes the configuration bundles and apps out to them, so it must have enough CPU and network capacity to perform these tasks efficiently and reliably.
Search heads do not need to meet the high-performance reference server requirements; this is not a mandatory condition for search head clustering. The reference server specifications are recommended for optimal performance and scalability, but they are not enforced by Splunk.
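For context, a minimal sketch of the pieces involved follows; the hostnames, ports, label, and shared secret are placeholders, and the exact flags should be verified against the documentation for your Splunk version.

    # server.conf on the deployer (the deployer is not a cluster member)
    [shclustering]
    pass4SymmKey = <shared_secret>
    shcluster_label = shcluster1

    # Run once on each search head to make it a cluster member
    splunk init shcluster-config -mgmt_uri https://sh1.example.com:8089 \
        -replication_port 9200 \
        -conf_deploy_fetch_url https://deployer.example.com:8089 \
        -secret <shared_secret> -shcluster_label shcluster1

    # Elect the first captain from any one member
    splunk bootstrap shcluster-captain \
        -servers_list "https://sh1.example.com:8089,https://sh2.example.com:8089,https://sh3.example.com:8089"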
Question 92

Which of the following tasks should the architect perform when building a deployment plan? (Select all that apply.)
Use case checklist.
Install Splunk apps.
Inventory data sources.
Review network topology.
When building a deployment plan, the architect should perform the following tasks:
Use case checklist. A use case checklist is a document that lists the use cases the deployment will support, along with the data sources, data volume, data retention, data models, dashboards, reports, alerts, and roles and permissions for each use case. A use case checklist helps to define the scope and functionality of the deployment and to identify the dependencies and requirements of each use case. [1]
Inventory data sources. An inventory of data sources is a document that lists the data sources the deployment will ingest, along with the data type, format, location, collection method, volume, frequency, and owner of each data source. An inventory of data sources helps to determine the data ingestion strategy, the parsing and enrichment, the storage and retention, and the security and compliance requirements for the deployment (an example of how an inventory entry maps to a collection configuration follows below). [1]
Review network topology. A review of network topology examines the network infrastructure and connectivity of the deployment, including bandwidth, latency, security, and monitoring. It helps to optimize network performance and reliability and to identify network risks and mitigations. [1]
Installing Splunk apps is not a task the architect performs when building a deployment plan; it is a task the administrator performs when implementing the plan. Installing Splunk apps is a technical activity that requires access to the Splunk instances and configurations, which are not available at the planning stage.
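As a concrete illustration of the inventory-to-configuration step mentioned above, a single inventoried file source might eventually become a stanza like the following in inputs.conf on a universal forwarder; the path, index, and sourcetype are hypothetical values, not part of the exam question.

    # inputs.conf on a universal forwarder (illustrative values)
    [monitor:///var/log/nginx/access.log]
    index = web
    sourcetype = nginx:access
    disabled = false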
Question 93

Because Splunk indexing is read/write intensive, it is important to select the appropriate disk storage solution for each deployment. Which of the following statements is accurate about disk storage?
High performance SAN should never be used.
Enable NFS for storing hot and warm buckets.
The recommended RAID setup is RAID 10 (1 + 0).
Virtualized environments are usually preferred over bare metal for Splunk indexers.
Splunk indexing is read/write intensive: it involves reading data from various sources, writing data to disk, and reading data back from disk for searching and reporting. It is therefore important to select the appropriate disk storage solution for each deployment, based on performance, reliability, and cost requirements. The recommended RAID setup for Splunk indexers is RAID 10 (1 + 0), as it provides the best balance of performance and reliability. RAID 10 combines the advantages of RAID 1 (mirroring) and RAID 0 (striping), offering both data redundancy and data distribution. It can tolerate multiple disk failures, as long as they are not in the same mirrored pair, and it improves read and write speed by accessing multiple disks in parallel. [2]
A high-performance SAN (Storage Area Network) can be used for Splunk indexers, but it is not recommended, as it is more expensive and complex than local disks. A SAN also introduces additional network latency and dependencies, which can affect the performance and availability of the indexers. A SAN is more suitable for search heads, which are less read/write intensive and more CPU intensive. [2]
NFS (Network File System) should not be used for storing hot and warm buckets, as it can cause data corruption, data loss, and performance degradation. NFS is a network-based file system that allows multiple clients to access the same files on a remote server. It is not compatible with Splunk index replication and search head clustering, as it can cause conflicts and inconsistencies among Splunk instances, and it is slower and less reliable than local disk because it depends on network bandwidth and availability. NFS can be used for storing cold and frozen buckets, which are accessed less frequently and are less critical to Splunk operations. [2]
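Bucket placement across storage tiers is controlled in indexes.conf. A hedged sketch, with placeholder paths, that keeps hot and warm buckets on fast local RAID 10 storage while relegating cold and frozen buckets to an NFS mount might look like this:

    # indexes.conf (illustrative paths only)
    [main]
    homePath   = /fast_local_raid10/splunk/main/db        # hot and warm buckets
    coldPath   = /mnt/nfs_archive/splunk/main/colddb      # cold buckets on NFS
    thawedPath = /fast_local_raid10/splunk/main/thaweddb
    coldToFrozenDir = /mnt/nfs_archive/splunk/main/frozen # frozen archive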
Virtualized environments are not usually preferred over bare metal for Splunk indexers, as they introduce additional overhead and complexity. Virtualization can affect indexer performance and reliability, because virtual machines share physical resources and the network with other virtual machines, and it complicates monitoring and troubleshooting by adding another layer of abstraction and configuration. Virtualized environments can be used for indexers, but they require careful planning and tuning to ensure optimal performance and availability. [2]
Question 94

Which of the following are possible causes of a crash in Splunk? (Select all that apply.)
Incorrect ulimit settings.
Insufficient disk IOPS.
Insufficient memory.
Running out of disk space.
All of the options are possible causes of a crash in Splunk. According to the Splunk documentation [1], incorrect ulimit settings can lead to file descriptor exhaustion, which can cause Splunk to crash or hang. Insufficient disk IOPS can also cause Splunk to crash or become unresponsive, as Splunk relies heavily on disk performance [2]. Insufficient memory can cause Splunk to run out of memory and crash, especially when running complex searches or handling large volumes of data [3]. Running out of disk space can cause Splunk to stop indexing data and crash, as Splunk needs enough disk space to store its data and logs [4].
[1] Configure ulimit settings for Splunk Enterprise. [2] Troubleshoot Splunk performance issues. [3] Troubleshoot memory usage. [4] Troubleshoot disk space issues.
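To make the ulimit point concrete, the limits for the account running Splunk can be inspected and raised roughly as follows; the values shown are the commonly cited minimums and should be checked against the Splunk documentation for your version.

    # Check the current limits for the user running Splunk
    ulimit -n    # open file descriptors
    ulimit -u    # max user processes

    # /etc/security/limits.conf (illustrative entries for a 'splunk' user)
    splunk soft nofile 64000
    splunk hard nofile 64000
    splunk soft nproc  16000
    splunk hard nproc  16000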
Question 95

Which of the following strongly impacts storage sizing requirements for Enterprise Security?
The number of scheduled (correlation) searches.
The number of Splunk users configured.
The number of source types used in the environment.
The number of Data Models accelerated.
Data Model acceleration is a feature that enables faster searches over large data sets by summarizing the raw data into a more efficient format. Data Model acceleration consumes additional disk space, as it stores both the raw data and the summarized data. The amount of disk space required depends on the size and complexity of the Data Model, the retention period of the summarized data, and the compression ratio of the data. According to the Splunk Enterprise Security Planning and Installation Manual, Data Model acceleration is one of the factors that strongly impacts storage sizing requirements for Enterprise Security; the other factors are the volume and type of data sources, the retention policy of the data, and the replication factor and search factor of the index cluster. The number of scheduled (correlation) searches, the number of Splunk users configured, and the number of source types used in the environment are not directly related to storage sizing requirements for Enterprise Security. [1]
[1] https://docs.splunk.com/Documentation/ES/6.6.0/Install/Plan#Storage_sizing_requirements
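As a rough illustration of why accelerated Data Models dominate sizing (the percentage below is an assumption made for the arithmetic, not a published Splunk figure): suppose a deployment ingests 100 GB/day and each accelerated Data Model's summaries consume about 10% of daily ingest across its summary range. One model with a one-year summary range then adds roughly 100 GB x 0.10 x 365 = 3.65 TB of summary storage before replication, and accelerating five such models would add about 18 TB. Actual ratios vary widely by data model and data mix, so real sizing should follow the Splunk ES sizing guidance cited above.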
Question 96

Which of the following is true regarding the migration of an index cluster from single-site to multi-site?
Question 97

What information is written to the _introspection log file?
Question 98

A customer has a four-site indexer cluster. The customer has requirements to store five copies of searchable data, with one searchable copy of data at the origin site and one searchable copy at the disaster recovery site (site4).
Which configuration meets these requirements?
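The answer options are not reproduced here, but the attributes being tested are the multisite settings in server.conf on the cluster manager. Purely as a syntax sketch under the stated requirements (illustrative, not a confirmed answer key), five searchable copies with one pinned to the origin site and one to site4 would be expressed along these lines, with the replication factor at least as large as the search factor:

    # server.conf on the cluster manager (illustrative)
    [general]
    site = site1

    [clustering]
    mode = master
    multisite = true
    available_sites = site1,site2,site3,site4
    site_replication_factor = origin:1, site4:1, total:5
    site_search_factor = origin:1, site4:1, total:5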
Question 99

Which of the following server.conf stanzas indicates the Indexer Discovery feature has not been fully configured (restart pending) on the Master Node?
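Although the original answer options are not shown here, the feature in question is configured on the Master Node in server.conf, and Splunk rewrites pass4SymmKey into a hashed form when the instance restarts; a stanza whose secret still appears in cleartext therefore suggests a restart is pending. A hedged sketch of the two states (the key values are placeholders):

    # server.conf on the Master Node -- restart pending (secret still in cleartext)
    [indexer_discovery]
    pass4SymmKey = my_secret_key

    # after a restart, Splunk stores the secret in hashed form, e.g.:
    [indexer_discovery]
    pass4SymmKey = $7$ak3x...==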
Question 100

A customer currently has many deployment clients being managed by a single, dedicated deployment server. The customer plans to double the number of clients.
What could be done to minimize performance issues?
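The answer is not reproduced here, but one commonly cited mitigation, alongside options such as adding deployment servers behind a load balancer, is to lengthen the clients' phone-home interval so each client polls the deployment server less often. A sketch of that setting in deploymentclient.conf on the clients (the interval and hostname are illustrative):

    # deploymentclient.conf on each deployment client
    [deployment-client]
    phoneHomeIntervalInSecs = 600    # default is 60; a longer interval reduces server load

    [target-broker:deploymentServer]
    targetUri = deploy.example.com:8089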