
Splunk SPLK-1005 Practice Test - Questions Answers


A monitor has been created in inputs.conf for a directory that contains a mix of file types.

How would a Cloud Admin fine-tune assigned sourcetypes for different files in the directory during the input phase?

A.

On the Indexer parsing the data, leave sourcetype as automatic for the directory monitor. Then create a props.conf that assigns a specific sourcetype by source stanza.

B.

On the forwarder collecting the data, leave sourcetype as automatic for the directory monitor. Then create a props.conf that assigns a specific sourcetype by source stanza.

C.

On the Indexer parsing the data, set multiple sourcetype_source attributes for the directory monitor collecting the files. Then create a props.conf that filters out unwanted files.

D.

On the forwarder collecting the data, set multiple sourcetype_source attributes for the directory monitor collecting the files. Then create a props.conf that filters out unwanted files.
Suggested answer: B

Explanation:

When dealing with a directory containing a mix of file types, it's essential to fine-tune the sourcetypes for different files to ensure accurate data parsing and indexing.

B. On the forwarder collecting the data, leave sourcetype as automatic for the directory monitor. Then create a props.conf that assigns a specific sourcetype by source stanza: This is the correct answer. In this approach, the Universal Forwarder is set up with a directory monitor where the sourcetype is initially left as automatic. Then, a props.conf file is configured to specify different sourcetypes based on the source (filename or path). This ensures that as the data is collected, it is appropriately categorized by sourcetype according to the file type.
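A minimal sketch of such a props.conf, using source-based stanzas — the directory path, file patterns, and sourcetype names here are hypothetical:

```ini
# props.conf - assign sourcetypes by source (file path) within the monitored directory
# Paths and sourcetype names are illustrative examples.
[source::/var/log/mixed/*.json]
sourcetype = app_json

[source::/var/log/mixed/access_*.log]
sourcetype = access_combined
```

Each `[source::...]` stanza matches files by path pattern, so different file types in the same monitored directory receive different sourcetypes.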

Splunk Documentation

Reference:

Configuring Inputs and Sourcetypes

Fine-tuning sourcetypes

Windows input types are collected in Splunk via a script that is configurable using the GUI. What is this type of input called?

A.

Batch

B.

Scripted

C.

Modular

D.

Front-end
Suggested answer: C

Explanation:

Windows inputs in Splunk, particularly those that involve more advanced data collection capabilities beyond simple file monitoring, can utilize scripts or custom inputs. These are typically referred to as Modular Inputs.

C. Modular: This is the correct answer. Modular Inputs are designed to be configurable via the Splunk Web UI and can collect data using custom or predefined scripts, handling more complex data collection tasks. This is the type of input that is used for collecting Windows-specific data such as Event Logs, Performance Monitoring, and other similar inputs.
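As an illustration, Windows Event Log collection is enabled with inputs.conf stanzas like the following; the Security channel is just one common example:

```ini
# inputs.conf on a Windows host or forwarder
# Collect the Security event log channel
[WinEventLog://Security]
disabled = 0
```

The same stanzas can be created and managed through the GUI under Settings > Data inputs.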

Splunk Documentation

Reference:

Modular Inputs

Windows Data Collection

The following sample log event shows evidence of credit card numbers being present in the transactions.log file.

Which of these SEDCMD settings will mask this and other suspected credit card numbers with an x character for each character being masked? The indexed event should be formatted as follows:

(The sample event, the target indexed format, and options A-D appear as images in the original.)

A.

Option A

B.

Option B

C.

Option C

D.

Option D
Suggested answer: A

Explanation:

The correct SEDCMD setting to mask the credit card numbers, ensuring that the masked version replaces each digit with an 'x' character, is Option A.

The SEDCMD syntax works as follows:

s/ starts the substitute command.

(cc_num=\d{7})\d{9}/ matches the specific pattern of the credit card number in the logs, capturing cc_num= and the first 7 digits as group 1.

\1xxxxxxxxx replaces the matched portion with the first captured group (the first 7 digits of the cc_num), followed by 9 'x' characters to mask the remaining digits.

/g ensures that the substitution is applied globally, throughout the string.

Thus, Option A correctly implements this requirement.
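A sketch of how this could appear in props.conf — the stanza name and the SEDCMD class name (mask_cc) are hypothetical, and the regex follows the pattern described above:

```ini
# props.conf - mask the last 9 digits of a 16-digit card number at index time
# Sourcetype name and SEDCMD class name are illustrative.
[transactions]
SEDCMD-mask_cc = s/(cc_num=\d{7})\d{9}/\1xxxxxxxxx/g
```

SEDCMD runs during parsing, so the masked form is what gets written to the index; the original digits are never stored.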

Splunk Documentation

Reference: SEDCMD for Masking Data

Which of the following is a correct statement about Universal Forwarders?

A.

The Universal Forwarder must be able to contact the license master.

B.

A Universal Forwarder must connect to Splunk Cloud via a Heavy Forwarder.

C.

A Universal Forwarder can be an Intermediate Forwarder.

D.

The default output bandwidth is 500KBps.
Suggested answer: C

Explanation:

A Universal Forwarder (UF) can indeed be configured as an Intermediate Forwarder. This means that the UF can receive data from other forwarders and then forward that data on to indexers or Splunk Cloud, effectively acting as a relay point in the data forwarding chain.

Option A is incorrect because a Universal Forwarder does not need to contact the license master; only indexers and search heads require this.

Option B is incorrect as Universal Forwarders can connect directly to Splunk Cloud or via other forwarders.

Option D is also incorrect because the default output bandwidth limit for a UF is 256KBps (set by maxKBps in limits.conf), not 500KBps, and it can be changed by configuration.
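The throughput limit in question is controlled in limits.conf on the forwarder; a sketch of the default setting:

```ini
# limits.conf on the Universal Forwarder
[thruput]
# Default is 256 KBps; set to 0 to remove the limit
maxKBps = 256
```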

Splunk Documentation

Reference: Universal Forwarder

Which of the following is true when integrating LDAP authentication?

A.

Splunk stores LDAP end user names and passwords on search heads.

B.

The mapping of LDAP groups to Splunk roles happens automatically.

C.

Splunk Cloud only supports Active Directory LDAP servers.

D.

New user data is cached the first time a user logs in.
Suggested answer: D

Explanation:

When integrating LDAP authentication with Splunk, new user data is cached the first time a user logs in. This means that Splunk does not store LDAP usernames and passwords; instead, it relies on the LDAP server for authentication. The mapping of LDAP groups to Splunk roles must be configured manually; it does not happen automatically. Additionally, Splunk Cloud supports various LDAP servers, not just Active Directory.
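A sketch of an LDAP strategy in authentication.conf, showing the manual group-to-role mapping; all hostnames, DNs, and group names below are placeholders:

```ini
# authentication.conf - LDAP strategy (all values are illustrative placeholders)
[authentication]
authType = LDAP
authSettings = corp_ldap

[corp_ldap]
host = ldap.example.com
port = 636
SSLEnabled = 1
bindDN = cn=splunk-bind,ou=service,dc=example,dc=com
userBaseDN = ou=people,dc=example,dc=com
groupBaseDN = ou=groups,dc=example,dc=com

# Group-to-role mapping is explicit, not automatic
[roleMap_corp_ldap]
user = marketing-users
```

Note there is no stanza storing user passwords: authentication is delegated to the LDAP server at login.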

Splunk Documentation

Reference: LDAP Authentication

A Splunk Cloud administrator is looking to allow a new group of Splunk users in the marketing department to access the Splunk environment and view a dashboard with relevant data. These users need to access marketing data (stored in the marketing_data index), but shouldn't be able to access other data, such as events related to security or operations.

Which approach would be the best way to accomplish these requirements?

A.

Create a new user with access to the marketing_data index assigned.

B.

Create a new role that inherits the user role and remove the capability to search indexes other than marketing_data.

C.

Create a new role that inherits the admin role and assign access to the marketing_data index.

D.

Create a new role that does not inherit from any other role, turn on the same capabilities as the user role, and assign access to the marketing_data index.
Suggested answer: B

Explanation:

The best approach to meet the requirements of the marketing department is to create a new role that inherits the user role but with restricted access to only the marketing_data index. This setup allows users to perform searches and view dashboards while ensuring they cannot access other indexes such as those containing security or operations data.
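A sketch of this role in authorize.conf — the role name is hypothetical:

```ini
# authorize.conf - role inheriting from 'user', restricted to one index
# Role name is illustrative.
[role_marketing]
importRoles = user
srchIndexesAllowed = marketing_data
srchIndexesDefault = marketing_data
```

Because srchIndexesAllowed lists only marketing_data, members of this role cannot search security or operations indexes even though they inherit the user role's capabilities. In Splunk Cloud this is configured via Settings > Roles rather than by editing the file directly.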

Splunk Documentation

Reference: Splunk Role-based Access Control

Files from multiple systems are being stored on a centralized log server. The files are organized into directories based on the original server they came from. Which of the following is a recommended approach for correctly setting the host values based on their origin?

A.

Use the host_segment setting.

B.

Set host = * in the monitor stanza.

C.

The host value cannot be dynamically set.

D.

Manually create a separate monitor stanza for each host, with the host = value set.
Suggested answer: A

Explanation:

The recommended approach for setting the host values based on their origin when files from multiple systems are stored on a centralized log server is to use the host_segment setting. This setting allows you to dynamically set the host value based on a specific segment of the file path, which can be particularly useful when organizing logs from different servers into directories.
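A sketch of a monitor stanza using host_segment, assuming a hypothetical directory layout of /logs/&lt;original_host&gt;/...:

```ini
# inputs.conf on the centralized log server's forwarder
# For a path like /logs/server01/app.log, segment 1 is "logs"
# and segment 2 is "server01", so host_segment = 2 sets host=server01.
[monitor:///logs]
host_segment = 2
```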

Splunk Documentation

Reference: Inputs.conf - host_segment

In which file can the SHOULD_LINEMERGE setting be modified?

A.

transforms.conf

B.

inputs.conf

C.

props.conf

D.

outputs.conf
Suggested answer: C

Explanation:

The SHOULD_LINEMERGE setting is used in Splunk to control whether or not multiple lines of an event should be combined into a single event. This setting is configured in the props.conf file, where Splunk handles data parsing and field extraction. Setting SHOULD_LINEMERGE = true merges lines together based on specific rules.
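A sketch of the setting in props.conf — the sourcetype name and the line-breaking pattern are hypothetical examples:

```ini
# props.conf - multiline event handling (sourcetype and regex are illustrative)
[my_multiline_sourcetype]
SHOULD_LINEMERGE = true
# Merge lines until one that begins with a date like 2024-01-31
BREAK_ONLY_BEFORE = ^\d{4}-\d{2}-\d{2}
```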

Splunk Documentation

Reference: props.conf - SHOULD_LINEMERGE

What is the recommended approach to collect data from network devices?

A.

TCP/UDP Feed > Heavy Forwarder > Intermediate Forwarder > Splunk Cloud

B.

TCP/UDP Feed > Syslog Server with Universal Forwarder > Splunk Cloud

C.

TCP/UDP Feed > Universal Forwarder > Intermediate Forwarder > Splunk Cloud

D.

TCP/UDP Feed > Intermediate Forwarder > Heavy Forwarder > Splunk Cloud
Suggested answer: B

Explanation:

The recommended approach to collect data from network devices is to use a Syslog server with a Universal Forwarder (UF) installed. The network devices send data to the Syslog server, which then forwards the data to Splunk Cloud using the Universal Forwarder. This method ensures reliable data ingestion and processing while maintaining flexibility in handling different types of network device data.
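As a sketch, the UF on the syslog server would monitor the files the syslog daemon writes; the directory layout below (per-device subdirectories) is a hypothetical but common rsyslog/syslog-ng arrangement:

```ini
# inputs.conf on the UF installed on the syslog server
# Assumes the syslog daemon writes to /var/log/remote/<device>/...
# Segments: 1=var, 2=log, 3=remote, 4=<device>, so host_segment = 4
[monitor:///var/log/remote]
sourcetype = syslog
host_segment = 4
```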

Splunk Documentation

Reference: Best practices for getting data in

When a forwarder phones home to a Deployment Server, it compares the checksum value of the forwarder's app to the Deployment Server's app. What happens to the app if the checksum values do not match?

A.

The app on the forwarder is always deleted and re-downloaded from the Deployment Server.

B.

The app on the forwarder is only deleted and re-downloaded from the Deployment Server if the forwarder's app has a smaller check-sum value.

C.

The app is downloaded from the Deployment Server and the changes are merged.

D.

A warning is generated on the Deployment Server stating the apps are out of sync. An Admin will need to confirm which version of the app should be used.
Suggested answer: A

Explanation:

When a forwarder phones home to a Deployment Server, it compares the checksum of its apps with those on the Deployment Server. If the checksums do not match, the app on the forwarder is always deleted and re-downloaded from the Deployment Server. This ensures that the forwarder has the most current and correct version of the app as dictated by the Deployment Server.
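For context, the Deployment Server decides which apps a phoning-home forwarder should have via serverclass.conf; a sketch with illustrative class, app, and host names:

```ini
# serverclass.conf on the Deployment Server (names are illustrative)
[serverClass:marketing_uf]
whitelist.0 = mkt-*.example.com

[serverClass:marketing_uf:app:marketing_inputs]
restartSplunkd = true
```

Any forwarder matching the whitelist that reports a different checksum for marketing_inputs will delete its copy and re-download the app on its next phone-home.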

Splunk Documentation

Reference: Deployment Server Overview

Total 80 questions