
Splunk SPLK-1003 Practice Test - Questions Answers, Page 9


Which option accurately describes the purpose of the HTTP Event Collector (HEC)?

A. A token-based HTTP input that is secure and scalable and that requires the use of forwarders.
B. A token-based HTTP input that is secure and scalable and that does not require the use of forwarders.
C. An agent-based HTTP input that is secure and scalable and that does not require the use of forwarders.
D. A token-based HTTP input that is insecure and non-scalable and that does not require the use of forwarders.
Suggested answer: B

Explanation:

https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/UsetheHTTPEventCollector

"The HTTP Event Collector (HEC) lets you send data and application events to a Splunk deployment over the HTTP and Secure HTTP (HTTPS) protocols. HEC uses a token-based authentication model.

You can generate a token and then configure a logging library or HTTP client with the token to send data to HEC in a specific format. This process eliminates the need for a Splunk forwarder when you send application events."
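For a concrete sense of how this works, here is a minimal cURL sketch that sends one event directly to HEC with no forwarder involved; the hostname, token value, sourcetype, and index are placeholders, and 8088 is the default HEC port:

    # Send a single JSON event to the HEC endpoint (host and token are examples).
    curl -k https://splunk.example.com:8088/services/collector/event \
      -H "Authorization: Splunk 12345678-1234-1234-1234-123456789012" \
      -d '{"event": "hello from HEC", "sourcetype": "my_app", "index": "main"}'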

How is a remote monitor input distributed to forwarders?

A. As an app.
B. As a forward.conf file.
C. As a monitor.conf file.
D. As a forwarder monitor profile.
Suggested answer: A

Explanation:

https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/Usingforwardingagents

See the section titled "How to configure forwarder inputs". Under "Here are the main ways that you can configure data inputs on a forwarder", the first method listed is to install the app or add-on that contains the inputs you want.
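As a sketch of what such an app might contain (the app and log names are hypothetical), a remote monitor input distributed by the deployment server is simply an app directory holding an inputs.conf:

    # $SPLUNK_HOME/etc/deployment-apps/my_monitor_inputs/local/inputs.conf
    # Hypothetical app the deployment server pushes to forwarders.
    [monitor:///var/log/app/app.log]
    sourcetype = app_log
    index = main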

How is data handled by Splunk during the input phase of the data ingestion process?

A. Data is treated as streams.
B. Data is broken up into events.
C. Data is initially written to disk.
D. Data is measured by the license meter.
Suggested answer: A

Explanation:

https://docs.splunk.com/Documentation/Splunk/8.0.5/Deploy/Datapipeline

"In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks in into 64K blocks, and annotates each block with some metadata keys."

Reference: https://docs.splunk.com/Documentation/Splunk/8.0.5/Deploy/Datapipeline

Which option on the Add Data menu is most useful for testing data ingestion without creating inputs.conf?

A. Upload option
B. Forward option
C. Monitor option
D. Download option
Suggested answer: A

An organization wants to collect Windows performance data from a set of clients; however, installing Splunk software on these clients is not allowed. What option is available to collect this data in Splunk Enterprise?

A. Use Local Windows host monitoring.
B. Use Windows Remote Inputs with WMI.
C. Use Local Windows network monitoring.
D. Use an index with an Index Data Type of Metrics.
Suggested answer: B

Explanation:

https://docs.splunk.com/Documentation/Splunk/8.1.0/Data/ConsiderationsfordecidinghowtomonitorWindowsdata

"The Splunk platform collects remote Windows data for indexing in one of two ways: From Splunk forwarders, Using Windows Management Instrumentation (WMI). For Splunk Cloud deployments, you must use the Splunk Universal Forwarder on a Windows machines to montior remote Windows data."

Which of the following must be done to define user permissions when integrating Splunk with LDAP?

A. Map Users
B. Map Groups
C. Map LDAP Inheritance
D. Map LDAP to Active Directory
Suggested answer: B

Explanation:

https://docs.splunk.com/Documentation/Splunk/8.1.3/Security/ConfigureLDAPwithSplunkWeb

"You can map either users or groups, but not both. If you are using groups, all users must be members of an appropriate group. Groups inherit capabilities form the highest level role they're a member of." "If your LDAP environment does not have group entries, you can treat each user as its own group."

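For illustration, the group mapping configured in Splunk Web is stored in authentication.conf; the strategy name and group names below are hypothetical:

    # authentication.conf: map LDAP groups to Splunk roles (names are examples).
    [authentication]
    authType = LDAP
    authSettings = corpLDAP

    [roleMap_corpLDAP]
    admin = Splunk-Admins
    user = Splunk-Users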

In which phase do indexed extractions in props.conf occur?

A. Inputs phase
B. Parsing phase
C. Indexing phase
D. Searching phase
Suggested answer: B

Explanation:

The following items, grouped by phase, are listed in the order Splunk applies them (i.e., LINE_BREAKER occurs before TRUNCATE).

Input phase:

- inputs.conf
- props.conf:
  - CHARSET
  - NO_BINARY_CHECK
  - CHECK_METHOD
  - CHECK_FOR_HEADER (deprecated)
  - PREFIX_SOURCETYPE
  - sourcetype
- wmi.conf
- regmon-filters.conf

Structured parsing phase:

- props.conf:
  - INDEXED_EXTRACTIONS, and all other structured data header extractions

Parsing phase:

- props.conf:
  - LINE_BREAKER, TRUNCATE, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE_DATE, and all other line-merging settings
  - TIME_PREFIX, TIME_FORMAT, DATETIME_CONFIG (datetime.xml), TZ, and all other time extraction settings and rules
  - TRANSFORMS, which includes per-event queue filtering, per-event index assignment, and per-event routing
  - SEDCMD
  - MORE_THAN, LESS_THAN
- transforms.conf:
  - stanzas referenced by a TRANSFORMS clause in props.conf
  - LOOKAHEAD, DEST_KEY, WRITE_META, DEFAULT_VALUE, REPEAT_MATCH

Reference: https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Configurationparametersandthedatapipeline
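As a minimal sketch of where indexed extractions are declared, a structured sourcetype is configured per stanza in props.conf; the sourcetype name is hypothetical:

    # props.conf: structured-data (indexed) extraction for a CSV sourcetype,
    # applied in the structured parsing phase listed above.
    [my_csv_data]
    INDEXED_EXTRACTIONS = csv
    FIELD_DELIMITER = ,
    HEADER_FIELD_LINE_NUMBER = 1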

Which of the following accurately describes HTTP Event Collector indexer acknowledgement?

A. It requires a separate channel provided by the client.
B. It is configured the same as indexer acknowledgement used to protect in-flight data.
C. It can be enabled at the global setting level.
D. It stores status information on the Splunk server.
Suggested answer: A

Explanation:

https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/AboutHECIDXAck

- Section: About channels and sending data

Sending events to HEC with indexer acknowledgment active is similar to sending them with the setting off. There is one crucial difference: when you have indexer acknowledgment turned on, you must specify a channel when you send events. The concept of a channel was introduced in HEC primarily to prevent a fast client from impeding the performance of a slow client. When you assign one channel per client, because channels are treated equally on Splunk Enterprise, one client can't affect another. You must include a matching channel identifier both when sending data to HEC in an HTTP request and when requesting acknowledgment that events contained in the request have been indexed. If you don't, you will receive the error message, "Data channel is missing." Each request that includes a token for which indexer acknowledgment has been enabled must include a channel identifier, as shown in the documentation's example cURL statement, where <data> represents the event data portion of the request.
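To make the channel requirement concrete, here is a hedged cURL sketch (not the documentation's own example); the host, token, and channel GUID are placeholders:

    # 1) Send an event, supplying a client-generated channel GUID.
    curl -k "https://splunk.example.com:8088/services/collector/event" \
      -H "Authorization: Splunk 12345678-1234-1234-1234-123456789012" \
      -H "X-Splunk-Request-Channel: FE0ECFAD-13D5-401B-847D-77833BD77131" \
      -d '{"event": "hello with ack"}'
    # The response returns an ackId, e.g. {"text":"Success","code":0,"ackId":0}

    # 2) Ask whether that ackId has been indexed, on the same channel.
    curl -k "https://splunk.example.com:8088/services/collector/ack?channel=FE0ECFAD-13D5-401B-847D-77833BD77131" \
      -H "Authorization: Splunk 12345678-1234-1234-1234-123456789012" \
      -d '{"acks": [0]}'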

What action is required to enable forwarder management in Splunk Web?

A. Navigate to Settings > Server Settings > General Settings, and set an App server port.
B. Navigate to Settings > Forwarding and receiving, and click on Enable Forwarding.
C. Create a server class and map it to a client in SPLUNK_HOME/etc/system/local/serverclass.conf.
D. Place an app in the SPLUNK_HOME/etc/deployment-apps directory of the deployment server.
Suggested answer: C

Explanation:

Reference:

https://docs.splunk.com/Documentation/Splunk/8.2.1/Updating/Forwardermanagementoverview

https://docs.splunk.com/Documentation/MSApp/2.0.3/MSInfra/Setupadeploymentserver

"To activate deployment server, you must place at least one app into%SPLUNK_HOME%\etc\deployment-apps on the host you want to act as deployment server. In this case, the app is the "send to indexer" app you created earlier, and the host is the indexer you set up initially.

Which of the following is accurate regarding the input phase?

A. Breaks data into events with timestamps.
B. Applies event-level transformations.
C. Fine-tunes metadata.
D. Performs character encoding.
Suggested answer: D

Explanation:

https://docs.splunk.com/Documentation/Splunk/latest/Deploy/Datapipeline

"The data pipeline segments in depth:

Input: In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with some metadata keys. The keys can also include values that are used internally, such as the character encoding of the data stream, and values that control later processing of the data, such as the index into which the events should be stored.

Parsing: Annotating individual events with metadata copied from the source-wide keys. Transforming event data and metadata according to regex transform rules."
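For illustration, character encoding is one of the input-phase settings applied per sourcetype in props.conf; the sourcetype name and encoding below are placeholders:

    # props.conf: input-phase character encoding override for a sourcetype.
    [my_legacy_logs]
    CHARSET = ISO-8859-1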
