
Splunk SPLK-2002 Practice Test - Questions Answers, Page 9


In a distributed environment, knowledge object bundles are replicated from the search head to which location on the search peer(s)?

A. SPLUNK_HOME/var/lib/searchpeers
B. SPLUNK_HOME/var/log/searchpeers
C. SPLUNK_HOME/var/run/searchpeers
D. SPLUNK_HOME/var/spool/searchpeers

Suggested answer: C

Explanation:

In a distributed environment, knowledge object bundles are replicated from the search head to the SPLUNK_HOME/var/run/searchpeers directory on the search peer(s). A knowledge object bundle is a compressed file that contains the knowledge objects, such as fields, lookups, macros, and tags, that are required for a search. A search peer is a Splunk instance that provides data to a search head in a distributed search, and a search head is a Splunk instance that coordinates and executes a search across multiple search peers. When a search head initiates a search, it creates a knowledge object bundle and replicates it to the search peers involved in the search. The search peers store the bundle in the SPLUNK_HOME/var/run/searchpeers directory, a temporary directory that is cleared when the Splunk service restarts, and use it to apply the knowledge objects to the data before returning results to the search head.

The SPLUNK_HOME/var/lib/searchpeers, SPLUNK_HOME/var/log/searchpeers, and SPLUNK_HOME/var/spool/searchpeers directories are not the locations where knowledge object bundles are replicated, because they do not exist in the Splunk file system.
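
As a quick way to confirm this on a search peer, the minimal Python sketch below lists whatever has been replicated into that directory. It assumes SPLUNK_HOME is set in the environment (with /opt/splunk as a hypothetical fallback); it is an illustration, not part of Splunk.

```python
import os
from pathlib import Path

# Resolve SPLUNK_HOME from the environment; /opt/splunk is an assumed fallback path.
splunk_home = Path(os.environ.get("SPLUNK_HOME", "/opt/splunk"))
bundle_dir = splunk_home / "var" / "run" / "searchpeers"

if bundle_dir.is_dir():
    # Each entry corresponds to a knowledge bundle replicated from a search head.
    for bundle in sorted(bundle_dir.iterdir()):
        size_mb = bundle.stat().st_size / (1024 * 1024)
        print(f"{bundle.name}\t{size_mb:.1f} MB")
else:
    print(f"No replicated bundles found at {bundle_dir}")
```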

Consider a use case involving firewall data. There is no Splunk-supported Technical Add-On, but the vendor has built one. What are the items that must be evaluated before installing the add-on? (Select all that apply.)

A. Identify number of scheduled or real-time searches.
B. Validate if this Technical Add-On enables event data for a data model.
C. Identify the maximum number of forwarders Technical Add-On can support.
D. Verify if Technical Add-On needs to be installed onto both a search head or indexer.

Suggested answer: A, B

Explanation:

A Technical Add-On (TA) is a Splunk app that contains configurations for data collection, parsing, and enrichment. It can also enable event data for a data model, which is useful for creating dashboards and reports. Therefore, before installing a TA, it is important to identify the number of scheduled or real-time searches that will use the data model, and to validate whether the TA enables event data for a data model. The maximum number of forwarders the TA can support is not a meaningful evaluation criterion, as add-ons do not impose a forwarder limit. The installation location of the TA depends on the type of data and the use case, so it is not a fixed requirement.
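
As a rough aid for the data model check, the sketch below scans an add-on directory for configuration files that typically indicate data model or CIM enablement. The add-on path and file list are assumptions for illustration, not an official validation procedure.

```python
from pathlib import Path

# Hypothetical location of the vendor add-on; adjust to where the TA is unpacked.
ta_dir = Path("/opt/splunk/etc/apps/TA-vendor-firewall")

# Configuration files that commonly signal data model / CIM enablement in a TA.
indicators = ["tags.conf", "eventtypes.conf", "datamodels.conf"]

for conf in indicators:
    hits = list(ta_dir.glob(f"**/{conf}"))
    found = ", ".join(str(h.relative_to(ta_dir)) for h in hits) or "not present"
    print(f"{conf}: {found}")
```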

When configuring a Splunk indexer cluster, what are the default values for replication and search factor?

A. replication_factor = 2, search_factor = 2
B. replication_factor = 2, search_factor = 3
C. replication_factor = 3, search_factor = 2
D. replication_factor = 3, search_factor = 3

Suggested answer: C

Explanation:

The replication factor and the search factor are two important settings for a Splunk indexer cluster. The replication factor determines how many copies of each bucket are maintained across the set of peer nodes, and the search factor determines how many of those copies are searchable. The default values are replication_factor = 3 and search_factor = 2, which means that each bucket has three copies, two of which are searchable.
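
These settings live in the [clustering] stanza of server.conf on the cluster manager. The sketch below reads them with Python's configparser, assuming an INI-compatible server.conf at a hypothetical default install path; unset values fall back to the shipped defaults.

```python
import configparser
from pathlib import Path

# Assumed location of the cluster manager's local server.conf.
conf_path = Path("/opt/splunk/etc/system/local/server.conf")

parser = configparser.ConfigParser(strict=False, interpolation=None)
parser.read(conf_path)

clustering = parser["clustering"] if parser.has_section("clustering") else {}
# Splunk's shipped defaults apply when the settings are not overridden locally.
print("replication_factor =", clustering.get("replication_factor", "3 (default)"))
print("search_factor      =", clustering.get("search_factor", "2 (default)"))
```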

A Splunk user successfully extracted an ip address into a field called src_ip. Their colleague cannot see that field in their search results with events known to have src_ip. Which of the following may explain the problem? (Select all that apply.)

A. The field was extracted as a private knowledge object.
B. The events are tagged as communicate, but are missing the network tag.
C. The Typing Queue, which does regular expression replacements, is blocked.
D. The colleague did not explicitly use the field in the search and the search was set to Fast Mode.

Suggested answer: A, D

Explanation:

Two of the options may explain why the colleague cannot see the src_ip field: the field was extracted as a private knowledge object, and the colleague did not explicitly use the field in a search that was set to Fast Mode. A knowledge object is a Splunk entity that applies some knowledge or intelligence to the data, such as a field extraction, a lookup, or a macro, and it can have private, app, or global permissions. A private knowledge object is only visible to the user who created it and is not shared with other users, so if a field extraction is created as a private knowledge object, only that user sees the extracted field in search results. The search mode (Fast, Smart, or Verbose) determines how Splunk processes and displays results. Fast mode is the fastest and most efficient mode, but it only shows the default fields, such as _time, host, source, sourcetype, and _raw, plus any fields explicitly used in the search; a field that is neither a default field nor referenced in the search will not be shown in Fast mode.

The other two options do not explain the problem. Tags are labels applied to fields or field values to make them easier to search, and they do not affect the visibility of fields unless they are used as filters in the search. The Typing Queue is a component of the Splunk data pipeline that performs regular expression replacements on the data, such as replacing IP addresses with host names, and it does not affect the field extraction process unless it is configured to do so.
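
One way to confirm the permissions issue is to check where the extraction was written on disk: private knowledge objects live under etc/users, while shared ones live under etc/apps. A rough sketch, assuming a hypothetical install path and extraction name:

```python
from pathlib import Path

splunk_home = Path("/opt/splunk")      # assumed install location
extraction_name = "EXTRACT-src_ip"     # hypothetical name of the field extraction

# Private knowledge objects are written under etc/users/<user>/<app>/local/,
# while shared (app- or global-level) objects live under etc/apps/<app>/.
for scope, root in [("private", splunk_home / "etc" / "users"),
                    ("shared", splunk_home / "etc" / "apps")]:
    for props in root.glob("**/props.conf"):
        if extraction_name in props.read_text(errors="ignore"):
            print(f"[{scope}] {props}")
```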

Which two sections can be expanded using the Search Job Inspector?

A. Execution costs.
B. Saved search history.
C. Search job properties.
D. Optimization suggestions.

Suggested answer: A, C

Explanation:

The Search Job Inspector is a tool that provides detailed information about a search job, such as the search parameters, the search statistics, and the search log. It can be accessed by clicking the Job menu in the Search bar and selecting Inspect Job. Besides the header, the Job Inspector has two sections that can be expanded or collapsed by clicking the arrow icon next to the section name: Execution costs and Search job properties. The Execution costs section shows the relative cost of each component of the search, such as commands, lookups, or subsearches. The Search job properties section shows detailed information about the job, such as the SID, the dispatch state, the run duration, the disk usage, and the scan count. Saved search history and optimization suggestions are not expandable sections of the Search Job Inspector.
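
The values shown in the Search job properties section can also be pulled over the REST API on the management port. A minimal sketch, assuming a local instance on the default port 8089, placeholder credentials, and a placeholder search job SID (certificate verification is disabled because the management port often uses a self-signed certificate):

```python
import base64
import json
import ssl
import urllib.request

sid = "1700000000.123"   # placeholder search job SID, visible in the Job Inspector header
url = f"https://localhost:8089/services/search/jobs/{sid}?output_mode=json"

# Placeholder credentials; the management port uses HTTPS with basic authentication.
token = base64.b64encode(b"admin:changeme").decode()
request = urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})

# Skip certificate verification for the typical self-signed management certificate.
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

with urllib.request.urlopen(request, context=context) as response:
    content = json.load(response)["entry"][0]["content"]

for key in ("dispatchState", "runDuration", "scanCount", "diskUsage"):
    print(key, "=", content.get(key))
```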

What is the default log size for Splunk internal logs?

A. 10 MB
B. 20 MB
C. 25 MB
D. 30 MB

Suggested answer: C

Explanation:

Splunk internal logs are stored in the SPLUNK_HOME/var/log/splunk directory by default. The default log size for Splunk internal logs is 25 MB, which means that when a log file reaches 25 MB, Splunk rolls it to a backup file and creates a new log file. The default number of backup files is 5, which means that Splunk keeps up to 5 backup files for each log file.
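
The rotation behaviour of the internal logs is controlled by SPLUNK_HOME/etc/log.cfg. The sketch below simply greps that file for the size and backup-count settings; the install path is an assumption, and the key names reflect the stock log.cfg layout.

```python
from pathlib import Path

# Assumed install location; log.cfg defines rotation for splunkd's internal logs.
log_cfg = Path("/opt/splunk/etc/log.cfg")

for line in log_cfg.read_text(errors="ignore").splitlines():
    line = line.strip()
    # maxFileSize is expressed in bytes (25 MB by default); maxBackupIndex is
    # the number of rolled files that are kept.
    if "maxFileSize" in line or "maxBackupIndex" in line:
        print(line)
```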

What is a Splunk Job? (Select all that apply.)

A. A user-defined Splunk capability.
B. Searches that are subjected to some usage quota.
C. A search process kicked off via a report or an alert.
D. A child OS process manifested from the splunkd process.

Suggested answer: B, C, D

Explanation:

A Splunk job is a search process that is kicked off via a report, an alert, or a user action. It runs as a child OS process manifested from the splunkd process, which is the main Splunk daemon, and it is subjected to usage quotas, such as memory, CPU, and disk space, which can be configured in the limits.conf and authorize.conf files. A Splunk job is not a user-defined Splunk capability, as it is a core feature of the Splunk platform.
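
Each job also leaves an artifact directory, named after its SID, in the dispatch directory, which makes the process-plus-artifacts nature of a job easy to observe. A minimal sketch, assuming the default dispatch location under an assumed install path:

```python
import time
from pathlib import Path

# Default location of search job artifacts: one directory per search job SID.
dispatch = Path("/opt/splunk/var/run/splunk/dispatch")

if dispatch.is_dir():
    for job_dir in sorted(p for p in dispatch.iterdir() if p.is_dir()):
        age_min = (time.time() - job_dir.stat().st_mtime) / 60
        print(f"{job_dir.name}\tlast modified {age_min:.0f} min ago")
else:
    print(f"No dispatch directory found at {dispatch}")
```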

When Splunk is installed, where are the internal indexes stored by default?

A. SPLUNK_HOME/bin
B. SPLUNK_HOME/var/lib
C. SPLUNK_HOME/var/run
D. SPLUNK_HOME/etc/system/default

Suggested answer: B

Explanation:

Splunk internal indexes are the indexes that store Splunk's own data, such as internal logs, metrics, audit events, and configuration snapshots. By default, Splunk internal indexes are stored in the SPLUNK_HOME/var/lib/splunk directory, along with other user-defined indexes. The SPLUNK_HOME/bin directory contains the Splunk executable files and scripts. The SPLUNK_HOME/var/run directory contains the Splunk process ID files and lock files. The SPLUNK_HOME/etc/system/default directory contains the default Splunk configuration files.
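
The actual bucket paths come from indexes.conf: the shipped defaults point at $SPLUNK_DB, which resolves to SPLUNK_HOME/var/lib/splunk. The sketch below does a naive scan of the default indexes.conf for home and cold paths; the install path is an assumption.

```python
from pathlib import Path

# Assumed install location; the shipped defaults live in etc/system/default/indexes.conf.
conf_path = Path("/opt/splunk/etc/system/default/indexes.conf")

stanza = None
for line in conf_path.read_text(errors="ignore").splitlines():
    line = line.strip()
    if line.startswith("[") and line.endswith("]"):
        stanza = line[1:-1]
    elif line.startswith(("homePath", "coldPath")) and "=" in line:
        key, value = (part.strip() for part in line.split("=", 1))
        # $SPLUNK_DB resolves to SPLUNK_HOME/var/lib/splunk by default.
        print(f"{stanza}: {key} = {value}")
```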

Which of the following options can improve reliability of syslog delivery to Splunk? (Select all that apply.)

A. Use TCP syslog.
B. Configure UDP inputs on each Splunk indexer to receive data directly.
C. Use a network load balancer to direct syslog traffic to active backend syslog listeners.
D. Use one or more syslog servers to persist data with a Universal Forwarder to send the data to Splunk indexers.

Suggested answer: A, D

Explanation:

Syslog is a standard protocol for sending log messages from various devices and applications to a central server. Syslog can use either UDP or TCP as the transport layer protocol. UDP is faster but less reliable, as it does not guarantee delivery or order of the messages. TCP is slower but more reliable, as it ensures delivery and order of the messages. Therefore, to improve the reliability of syslog delivery to Splunk, it is recommended to use TCP syslog.

Another option to improve the reliability of syslog delivery to Splunk is to use one or more syslog servers to persist data with a Universal Forwarder to send the data to Splunk indexers. This way, the syslog servers can act as a buffer and store the data in case of network or Splunk outages. The Universal Forwarder can then forward the data to Splunk indexers when they are available.

Using a network load balancer to direct syslog traffic to active backend syslog listeners is not a reliable option, as it does not address the possibility of data loss or duplication due to network failures or Splunk outages. Configuring UDP inputs on each Splunk indexer to receive data directly is also not a reliable option, as it exposes the indexers to the network and increases the risk of data loss or duplication due to UDP limitations.
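
For reference, TCP syslog is just a line-oriented stream of messages over a reliable connection. The sketch below sends a single RFC 3164-style message to a hypothetical syslog server; the host, port, and message content are placeholders.

```python
import socket
from datetime import datetime

SYSLOG_HOST = "syslog.example.com"   # hypothetical syslog server
SYSLOG_PORT = 514                    # conventional syslog port

# PRI <134> = facility local0 (16) * 8 + severity informational (6).
timestamp = datetime.now().strftime("%b %d %H:%M:%S")
message = f"<134>{timestamp} fw01 firewall: test message\n"

# TCP provides the delivery and ordering guarantees that plain UDP syslog lacks.
with socket.create_connection((SYSLOG_HOST, SYSLOG_PORT), timeout=5) as sock:
    sock.sendall(message.encode())
```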

What is the logical first step when starting a deployment plan?

A. Inventory the currently deployed logging infrastructure.
B. Determine what apps and use cases will be implemented.
C. Gather statistics on the expected adoption of Splunk for sizing.
D. Collect the initial requirements for the deployment from all stakeholders.

Suggested answer: D

Explanation:

The logical first step when starting a deployment plan is to collect the initial requirements for the deployment from all stakeholders. This includes identifying the business objectives, the data sources, the use cases, the security and compliance needs, the scalability and availability expectations, and the budget and timeline constraints. Collecting the initial requirements helps to define the scope and the goals of the deployment, and to align the expectations of all the parties involved.

Inventorying the currently deployed logging infrastructure, determining what apps and use cases will be implemented, and gathering statistics on the expected adoption of Splunk for sizing are all important steps in the deployment planning process, but they are not the logical first step. These steps can be done after collecting the initial requirements, as they depend on the information gathered from the stakeholders.
