
Splunk SPLK-3002 Practice Test - Questions Answers, Page 4


Which of the following is an advantage of using adaptive time thresholds?

A. Automatically update thresholds daily to manage dynamic changes to KPI values.
B. Automatically adjust KPI calculation to manage dynamic event data.
C. Automatically adjust aggregation policy grouping to manage escalating severity.
D. Automatically adjust correlation search thresholds to adjust sensitivity over time.
Suggested answer: A

Explanation:

Adaptive thresholds are thresholds calculated by machine learning algorithms that dynamically adapt and change based on the KPI's observed behavior. Adaptive thresholds are useful for monitoring KPIs that have unpredictable or seasonal patterns that are difficult to capture with static thresholds. For example, you might use adaptive thresholds for a KPI that measures web traffic volume, which can vary depending on factors such as holidays, promotions, events, and so on. The advantage of using adaptive thresholds is:

A) Automatically update thresholds daily to manage dynamic changes to KPI values. This is true because adaptive thresholds use historical data from a training window to generate threshold values for each time block in a threshold template. Each night at midnight, ITSI recalculates adaptive threshold values for a KPI by organizing the data from the training window into distinct buckets and then analyzing each bucket separately. This way, the thresholds reflect the most recent changes in the KPI data and account for any anomalies or trends.

The other options are not advantages of using adaptive thresholds because:

B) Automatically adjust KPI calculation to manage dynamic event data. This is not true because adaptive thresholds do not affect the KPI calculation, which is based on the base search and the aggregation method. Adaptive thresholds only affect the threshold values that are used to determine the KPI severity level.

C) Automatically adjust aggregation policy grouping to manage escalating severity. This is not true because adaptive thresholds do not affect the aggregation policy, which is a set of rules that determines how to group notable events into episodes. Adaptive thresholds only affect the threshold values that are used to generate notable events based on KPI severity level.

D) Automatically adjust correlation search thresholds to adjust sensitivity over time. This is not true because adaptive thresholds do not affect the correlation search, which is a search that looks for relationships between data points and generates notable events. Adaptive thresholds only affect the threshold values that are used by KPIs, which can be used as inputs for correlation searches.

Which of the following applies when configuring time policies for KPI thresholds?

A. A person can only configure 24 policies, one for each hour of the day.
B. They are great if you expect normal behavior at 1:00 to be different than normal behavior at 5:00.
C. If a person expects a KPI to change significantly through a cycle on a daily basis, don't use it.
D. It is possible for multiple time policies to overlap.
Suggested answer: B

Explanation:

Time policies are user-defined threshold values to be used at different times of the day or week to account for changing KPI workloads. Time policies accommodate normal variations in usage across your services and improve the accuracy of KPI and service health scores. For example, if your organization's peak activity is during the standard work week, you might create a KPI threshold time policy that accounts for higher levels of usage during work hours, and lower levels of usage during off-hours and weekends. The statement that applies when configuring time policies for KPI thresholds is:

B) They are great if you expect normal behavior at 1:00 to be different than normal behavior at 5:00. This is true because time policies allow you to define different threshold values for different time blocks, such as AM/PM, work hours/off hours, weekdays/weekends, and so on. This way, you can account for the expected variations in your KPI data based on the time of day or week.

The other statements do not apply because:

A) A person can only configure 24 policies, one for each hour of the day. This is not true because you can configure more than 24 policies by combining different time block sizes, such as 3-hour, 2-hour, and 1-hour blocks.

C) If a person expects a KPI to change significantly through a cycle on a daily basis, don't use it. This is not true because time policies are designed to handle KPIs that change significantly through a cycle on a daily basis, such as web traffic volume or CPU load percent.

D) It is possible for multiple time policies to overlap. This is not true because you can only have one active time policy at any given time. When you create a new time policy, the previous time policy is overwritten and cannot be recovered.

What is the main purpose of the service analyzer?

A. Display a list of All Services and Entities.
B. Trigger external alerts based on threshold violations.
C. Allow Analysts to add comments to Alerts.
D. Monitor overall Service and KPI status.
Suggested answer: D

Explanation:

The service analyzer is a dashboard that allows you to monitor the overall service and KPI status in ITSI. The service analyzer displays a list of all services and their health scores, which indicate how well each service is performing based on its KPIs. You can also view the status and values of each KPI within a service, as well as drill down into deep dives or glass tables for further analysis. The service analyzer helps you identify issues affecting your services and prioritize them based on their impact and urgency. The main purpose of the service analyzer is:

D) Monitor overall service and KPI status. This is true because the service analyzer provides a comprehensive view of the health and performance of your services and KPIs in real time.

The other options are not the main purpose of the service analyzer because:

A) Display a list of all services and entities. This is not true because the service analyzer does not display entities, which are IT components that require management to deliver an IT service. Entities are displayed in other dashboards, such as entity management or entity health overview.

B) Trigger external alerts based on threshold violations. This is not true because the service analyzer does not trigger alerts, which are notifications sent to external systems or users when certain conditions are met. Alerts are triggered by correlation searches or alert actions configured in ITSI.

C) Allow analysts to add comments to alerts. This is not true because the service analyzer does not provide a way to add comments to alerts; analysts add comments to episodes in Episode Review.

What is the default importance value for dependent services' health scores?

A. 11
B. 1
C. Unassigned
D. 10
Suggested answer: A

Explanation:

By default, impacting service health scores have an importance value of 11.

A dependent service is a service that another service relies on to function properly. For example, a web server service might depend on a database service and a network service. When you add a dependent service to a service, the dependent service's health score contributes to the health score of the parent service according to its importance value. The default importance value for dependent services' health scores is:

A) 11. This is true because ITSI assigns an importance value of 11 to dependent service health scores by default. Importance values range from 0 to 11, and the special value 11 acts as a minimum health indicator: the parent service's health score can never be better than the health score of its worst-performing dependent service.

The other options are not correct because:

B) 1. This is the lowest non-zero importance value, meaning the dependent service would have minimal impact on the parent service's health score. It is not the default.

C) Unassigned. Every dependent service receives an importance value when it is added to a service, so it is never unassigned.

D) 10. This is the highest standard importance value, but it is not the default for dependent service health scores; the default is the special value 11.

What should be considered when onboarding data into a Splunk index, assuming that ITSI will need to use this data?

A. Use | stats functions in custom fields to prepare the data for KPI calculations.
B. Check if the data could leverage pre-built KPIs from modules, then use the correct TA to onboard the data.
C. Make sure that all fields conform to CIM, then use the corresponding module to import related services.
D. Plan to build as many data models as possible for ITSI to leverage.
Suggested answer: B

Explanation:

When onboarding data into a Splunk index, assuming that ITSI will need to use this data, you should consider the following:

B) Check if the data could leverage pre-built KPIs from modules, then use the correct TA to onboard the data. This is true because modules are pre-packaged sets of services, KPIs, and dashboards that are designed for specific types of data sources, such as operating systems, databases, web servers, and so on. Modules help you quickly set up and monitor your IT services using best practices and industry standards. To use modules, you need to install and configure the correct technical add-ons (TAs) that extract and normalize the data fields required by the modules.

The other options are not things you should consider because:

A) Use | stats functions in custom fields to prepare the data for KPI calculations. This is not true because using | stats functions in custom fields can cause performance issues and inaccurate results when calculating KPIs. You should use | stats functions only in base searches or ad hoc searches, not in custom fields.
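By way of contrast, the following sketch shows where a | stats aggregation does belong — in the search itself rather than in a custom field. The index, sourcetype, and field names here are hypothetical placeholders:

```spl
index=web sourcetype=access_combined
| stats avg(response_time) AS avg_response_time
```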

C) Make sure that all fields conform to CIM, then use the corresponding module to import related services. This is not true because not all modules require CIM-compliant data sources. Some modules have their own data models and field extractions that are specific to their data sources. You should check the documentation of each module to see what data requirements and dependencies they have.

D) Plan to build as many data models as possible for ITSI to leverage. This is not true because building too many data models can cause performance issues and resource consumption in your Splunk environment. You should only build data models that are necessary and relevant for your ITSI use cases.

When changing a service template, which of the following will be added to linked services by default?

A. Thresholds.
B. Entity Rules.
C. New KPIs.
D. Health score.
Suggested answer: C

Explanation:

C) New KPIs. This is true because when you add new KPIs to a service template, they will be automatically added to all the services that are linked to that template. This helps you keep your services consistent and up-to-date with the latest KPI definitions.

The other options will not be added to linked services by default because:

A) Thresholds. This is not true because when you change thresholds in a service template, they will not affect the existing thresholds in the linked services. You need to manually apply the threshold changes to each linked service if you want them to inherit the new thresholds from the template.

B) Entity rules. This is not true because when you change entity rules in a service template, they will not affect the existing entity rules in the linked services. You need to manually apply the entity rule changes to each linked service if you want them to inherit the new entity rules from the template.

D) Health score. This is not true because when you change health score settings in a service template, they will not affect the existing health score settings in the linked services. You need to manually apply the health score changes to each linked service if you want them to inherit the new health score settings from the template.

Which of the following items describe ITSI Deep Dive capabilities? (Choose all that apply.)

A. Comparing a service's notable events over a time period.
B. Visualizing one or more Service KPIs values by time.
C. Examining and comparing alert levels for KPIs in a service over time.
D. Comparing swim lane values for a slice of time.
Suggested answer: B, C, D

Explanation:

A deep dive is a dashboard that allows you to analyze the historical trends and anomalies of your KPIs and metrics in ITSI. A deep dive displays a timeline of events and swim lanes of data that you can customize and filter to investigate issues and perform root cause analysis. Some of the capabilities of deep dives are:

B) Visualizing one or more service KPIs values by time. This is true because you can add KPI swim lanes to a deep dive to show the values and severity levels of one or more KPIs over time. You can also compare KPIs from different services or entities using service swapping or entity splitting.

C) Examining and comparing alert levels for KPIs in a service over time. This is true because you can add alert swim lanes to a deep dive to show the alert levels and counts for one or more KPIs over time. You can also drill down into the alert details and view the notable events associated with each alert.

D) Comparing swim lane values for a slice of time. This is true because you can use the time range selector to zoom in or out of a specific time range in a deep dive. You can also use the time brush to select a slice of time and compare the swim lane values for that time period.

The other option is not a capability of deep dives because:

A) Comparing a service's notable events over a time period. This is not true because deep dives do not display notable events, which are alerts generated by ITSI based on certain conditions or correlations. Notable events are displayed in other dashboards, such as episode review or glass tables.

What is an episode?

A. A workflow task.
B. A deep dive.
C. A notable event group.
D. A notable event.
Suggested answer: C

Explanation:

An episode is a deduplicated group of notable events occurring as part of a larger sequence, or an incident or period considered in isolation. An episode helps you reduce alert noise and focus on the most important issues affecting your IT services. An episode is created by an aggregation policy, which is a set of rules that determines how to group notable events based on certain criteria, such as severity, source, title, and so on. You can use episode review to view, manage, and resolve episodes in ITSI. The statement that defines an episode is:

C) A notable event group. This is true because an episode is composed of one or more notable events that are related by some common factor.

The other options are not definitions of an episode because:

A) A workflow task. This is not true because a workflow task is an action that you can perform on an episode, such as assigning an owner, changing the status, adding comments, and so on.

B) A deep dive. This is not true because a deep dive is a dashboard that allows you to analyze the historical trends and anomalies of your KPIs and metrics in ITSI.

D) A notable event. This is not true because a notable event is an alert generated by ITSI based on certain conditions or correlations, not a group of alerts.

Which index will contain useful error messages when troubleshooting ITSI issues?

A. _introspection
B. _internal
C. itsi_summary
D. itsi_notable_audit
Suggested answer: B

Explanation:

The index that will contain useful error messages when troubleshooting ITSI issues is:

B) _internal. This is true because the _internal index contains logs and metrics generated by Splunk processes, such as splunkd and metrics.log. These logs can help you diagnose problems with your Splunk environment, including ITSI components and features.
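For example, a search along these lines surfaces ITSI-related errors from the _internal index. The source wildcard reflects a common convention for ITSI log file names, which can vary by version:

```spl
index=_internal source=*itsi* log_level=ERROR
| stats count BY source, component
```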

The other indexes will not contain useful error messages because:

A) _introspection. This is not true because the _introspection index contains data about Splunk resource usage, such as CPU, memory, disk space, and so on. These data can help you monitor the performance and health of your Splunk environment, but not the error messages.

C) itsi_summary. This is not true because the itsi_summary index contains summarized data for your KPIs and services, such as health scores, severity levels, threshold values, and so on. These data can help you analyze the trends and anomalies of your IT services, but not the error messages.
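As an illustration of what itsi_summary holds, a search like the following counts recent critical KPI results. Field names such as alert_severity and kpi are standard ITSI summary fields, but verify them against your version:

```spl
index=itsi_summary alert_severity=critical
| stats count BY kpi
```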

D) itsi_notable_audit. This is not true because the itsi_notable_audit index contains audit data for notable event and episode activity, such as creation time and owner changes, not Splunk error messages.

Which index is used to store KPI values?

A. itsi_summary_metrics
B. itsi_metrics
C. itsi_service_health
D. itsi_summary
Suggested answer: A

Explanation:

The IT Service Intelligence (ITSI) metrics summary index, itsi_summary_metrics, is a metrics-based summary index that stores KPI data.

A is the correct answer because the itsi_summary_metrics index is used to store KPI values in ITSI. This index improves the performance of the searches dispatched by ITSI, particularly for very large environments. Every KPI is summarized in both the itsi_summary events index and the itsi_summary_metrics metrics index.
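Because itsi_summary_metrics is a metrics index, you query it with the mstats command rather than a plain event search. A hedged example follows — alert_value and itsi_kpi_id are the conventional ITSI metric and dimension names, but confirm them in your environment:

```spl
| mstats avg(alert_value) AS avg_value WHERE index=itsi_summary_metrics span=5m BY itsi_kpi_id
```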

Reference: Overview of ITSI indexes

Total 90 questions