Splunk SPLK-1005 Practice Test - Questions Answers, Page 6

Which monitor statement will retrieve only files that start with 'access' in the directory /opt/log/www2/?

A. [monitor:///opt/log/.../access]

B. [monitor:///opt/log/www2/access*]

C. [monitor:///opt/log/www2/]

D. [monitor:///opt/log/.../]
Suggested answer: B

Explanation:

The correct monitor statement to retrieve only files that start with 'access' in the directory /opt/log/www2/ is [monitor:///opt/log/www2/access*]. This configuration specifically targets files that begin with the name 'access' and will match any such files within that directory, such as 'access.log'.

Splunk Documentation

Reference: Monitor files and directories
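
As a concrete sketch, this stanza would live in inputs.conf on the forwarder. The sourcetype and index shown here are hypothetical placeholders, not values from the question:

[monitor:///opt/log/www2/access*]
# Hypothetical metadata; adjust for your environment.
sourcetype = access_combined
index = web
disabled = false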

Li was asked to create a Splunk configuration to monitor syslog files stored on Linux servers at their organization. This configuration will be pushed out to multiple systems via a Splunk app using the on-prem deployment server.

The system administrators have provided Li with a directory listing for the logging locations on three syslog hosts, which are representative of the file structure for all systems collecting this data. An example from each system is shown below:

[The directory listings and the answer options A) through D) appeared as images in the original and are not reproduced here.]

A. Option A

B. Option B

C. Option C

D. Option D
Suggested answer: A

Explanation:

The correct monitor statement that will capture all variations of the syslog file paths across different systems is [monitor:///var/log/network/syslog*/linux_secure/*].

This configuration works because:

syslog* matches directories that start with 'syslog' (like syslog01, syslog02, etc.).

The wildcard * after linux_secure/ will capture all files within that directory, including different filenames like syslog.log and syslog.log.2020090801.

This setup will ensure that all the necessary files from the different syslog hosts are monitored.

Splunk Documentation

Reference: Monitor files and directories
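
A minimal inputs.conf sketch of that stanza, with a hypothetical sourcetype and index added for illustration:

[monitor:///var/log/network/syslog*/linux_secure/*]
# Hypothetical metadata for illustration only.
sourcetype = linux_secure
index = network
disabled = false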

By default, which of the following capabilities are granted to the sc_admin role?

A. indexes_edit, edit_token, admin_all_objects, delete_by_keyword

B. indexes_edit, fsh_manage, acs_conf, list_indexerdiscovery

C. indexes_edit, fsh_manage, admin_all_objects, can_delete

D. indexes_edit, edit_token_http, admin_all_objects, edit_limits_conf
Suggested answer: C

Explanation:

By default, the sc_admin role in Splunk Cloud is granted several important capabilities, including:

indexes_edit: The ability to create, edit, and manage indexes.

fsh_manage: Manage federated search configurations.

admin_all_objects: Full administrative control over all objects in Splunk.

can_delete: The ability to delete events using the delete command.

Option C correctly lists these default capabilities for the sc_admin role.

Splunk Documentation

Reference: User roles and capabilities
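
For context, capabilities are attached to roles in authorize.conf. A sketch of how a role carrying these four capabilities would be declared (illustrative only, not the actual sc_admin definition shipped with Splunk Cloud):

[role_sc_admin]
# Each capability is switched on individually for the role.
indexes_edit = enabled
fsh_manage = enabled
admin_all_objects = enabled
can_delete = enabled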

Where does the regex replacement processor run?

A. Merging pipeline

B. Typing pipeline

C. Index pipeline

D. Parsing pipeline
Suggested answer: D

Explanation:

The regex replacement processor is part of the parsing stage in Splunk's data ingestion pipeline. This stage is responsible for handling data transformations, which include applying regex replacements.

D. Parsing pipeline is the correct answer. The parsing pipeline is where initial data transformations, including regex replacement, occur before the data is indexed. This stage processes events as they are parsed from raw data, including applying any regex-based modifications.

Splunk Documentation

Reference: Data Processing Pipelines in Splunk
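
For example, the regex replacement processor is what applies SEDCMD and TRANSFORMS rules during event processing. A sketch of a masking pair, using a hypothetical sourcetype and pattern:

props.conf:

[my_sourcetype]
TRANSFORMS-mask-ssn = mask_ssn

transforms.conf:

[mask_ssn]
# Rewrite _raw, replacing an SSN-like token with a fixed mask.
REGEX = (^.*?)\d{3}-\d{2}-\d{4}(.*)$
FORMAT = $1XXX-XX-XXXX$2
DEST_KEY = _raw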

What is the correct syntax to monitor /apache/foo/logs, /apache/bar/logs, and /apache/bar/1/logs?

A) [monitor:///apache/*/logs]

B) [monitor:///apache/foo/logs, /apache/bar/logs, /apache/bar/1/logs]

C) [monitor:///apache/.../logs]

D) [monitor:///apache/foo/logs, /apache/bar/logs, and /apache/bar/1/logs]

A. Option A

B. Option B

C. Option C

D. Option D
Suggested answer: B

Explanation:

In the context of Splunk, when configuring data inputs to monitor specific directories, the correct syntax must match the directory paths accurately and adhere to the format recognized by Splunk.

Option A: [monitor:///apache/*/logs] - The single wildcard * matches exactly one directory level, so this stanza would monitor /apache/foo/logs and /apache/bar/logs but miss /apache/bar/1/logs. It is incorrect for the paths given in the question.

Option B: [monitor:///apache/foo/logs, /apache/bar/logs, /apache/bar/1/logs] - This syntax correctly lists the specific paths /apache/foo/logs, /apache/bar/logs, and /apache/bar/1/logs separately. This is the correct answer as it precisely matches the paths given in the question.

Option C: [monitor:///apache/.../logs] - The triple-dot wildcard (...) recurses through any number of subdirectory levels under /apache/, so it would match all three paths but also any other logs directory in the tree. It is broader than what the question asks for.

Option D: [monitor:///apache/foo/logs, /apache/bar/logs, and /apache/bar/1/logs] - This syntax includes the word 'and', which is not valid in the Splunk monitor stanza. The syntax should list the paths separated by commas, without additional words.

Thus, Option B is the correct syntax to monitor the specified paths in Splunk.

For additional reference, you can check the official Splunk documentation on monitoring inputs which provides guidelines on how to configure monitoring of files and directories.
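
For comparison, many deployments express the same coverage as one monitor stanza per path in inputs.conf, which avoids any ambiguity about list separators:

[monitor:///apache/foo/logs]
[monitor:///apache/bar/logs]
[monitor:///apache/bar/1/logs]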

In Splunk terminology, what is an index?

A. A data repository that contains raw, compressed data along with psidx files.

B. A data repository that contains raw, compressed data along with tsidx files.

C. A data repository that contains raw, uncompressed data along with psidx files.

D. A data repository that contains raw, uncompressed data along with tsidx files.
Suggested answer: B

Explanation:

In Splunk, an index is a data repository that stores both raw data and associated indexing information. Specifically, the raw data is stored in a compressed format, and the indexing information is stored in tsidx files (time series index files). These tsidx files enable fast searching and retrieval of data based on time. The correct terminology and structure make option B accurate.

Splunk Documentation

Reference: Splunk Indexes
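
On disk, this structure is visible inside each bucket directory of an index. A simplified sketch, using the main index and an invented bucket name:

$SPLUNK_DB/main/db/db_1693526400_1693440000_12/
    rawdata/journal.gz   <- the compressed raw event data
    *.tsidx              <- time series index files used to locate events quickly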

When adding a directory monitor and specifying a sourcetype explicitly, it applies to all files in the directory and subdirectories. If automatic sourcetyping is used, a user can selectively override it in which file on the forwarder?

A. transforms.conf

B. props.conf

C. inputs.conf

D. outputs.conf
Suggested answer: B

Explanation:

When a directory monitor is set up with automatic sourcetyping, a user can selectively override the sourcetype assignment by configuring the props.conf file on the forwarder. The props.conf file allows you to define how data should be parsed and processed, including assigning or overriding sourcetypes for specific data inputs.

Splunk Documentation

Reference: props.conf configuration
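
A sketch of such an override in props.conf on the forwarder; the path pattern and sourcetype name are hypothetical:

[source::/var/log/custom/app*.log]
# Files matching this source pattern get this sourcetype instead of the automatic one.
sourcetype = custom_app_log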

Which of the following methods is valid for creating index-time field extractions?

A. Use the UI to create a sourcetype, specify the field name and corresponding regular expression with a capture statement.

B. Create a configuration app with the index-time props.conf and/or transforms.conf, and upload the app via the UI.

C. Use the CU app to define settings in fields.conf, and restart Splunk Cloud.

D. Use the rex command to extract the desired field, and then save it as a calculated field.
Suggested answer: B

Explanation:

The valid method for creating index-time field extractions is to create a configuration app that includes the necessary props.conf and/or transforms.conf configurations. This app can then be uploaded via the UI. Index-time field extractions must be defined in these configuration files to ensure that fields are extracted correctly during indexing.

Splunk Documentation

Reference: Index-time field extractions
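
A minimal sketch of what such an app could contain; the stanza names, regex, and field name are all hypothetical:

props.conf:

[my_sourcetype]
TRANSFORMS-extract-user = user_indexed

transforms.conf:

[user_indexed]
# Write the captured value into the index-time metadata as user::<value>.
REGEX = user=(\w+)
FORMAT = user::$1
WRITE_META = true

fields.conf:

[user]
INDEXED = true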

Which of the following is the default bandwidth limit in the Splunk Universal Forwarder credentials package?

A. 0 KBps

B. 256 KBps

C. 512 KBps

D. 1024 KBps
Suggested answer: B

Explanation:

The default bandwidth limit in the Splunk Universal Forwarder is set to 256 KBps. This setting is in place to prevent the forwarder from overwhelming network resources, and it can be adjusted as necessary based on the deployment's specific needs.

Splunk Documentation

Reference: Universal Forwarder Configuration
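
The limit is controlled by the [thruput] stanza in limits.conf; the packaged default is equivalent to:

[thruput]
# Default forwarder throughput cap; setting 0 removes the limit entirely.
maxKBps = 256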

A customer wants to mask unstructured data before sending it to Splunk Cloud. Where should SEDCMD be configured for this?

A. props.conf on a Splunk Cloud search head.

B. props.conf on a Heavy Forwarder.

C. transforms.conf on a Splunk Cloud indexer.

D. props.conf on a Universal Forwarder.
Suggested answer: B

Explanation:

To mask unstructured data before sending it to Splunk Cloud, the SEDCMD should be configured in the props.conf file on a Heavy Forwarder. The Heavy Forwarder is responsible for data parsing and transformation before forwarding the data to Splunk Cloud. This ensures that sensitive data is masked before it reaches the indexing stage.

Splunk Documentation

Reference: Using SEDCMD to Mask Data
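
A sketch of such a rule in props.conf on the Heavy Forwarder; the sourcetype and pattern are hypothetical:

[my_syslog]
# sed-style substitution: mask anything shaped like an SSN before indexing.
SEDCMD-mask-ssn = s/\d{3}-\d{2}-\d{4}/XXX-XX-XXXX/g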

