Microsoft DP-203 Practice Test - Questions Answers, Page 4

You have an Azure data factory.

You need to examine the pipeline failures from the last 60 days. What should you use?

A. the Activity log blade for the Data Factory resource
B. the Monitor & Manage app in Data Factory
C. the Resource health blade for the Data Factory resource
D. Azure Monitor
Suggested answer: D

Explanation:

Data Factory stores pipeline-run data for only 45 days. Use Azure Monitor if you want to keep that data for a longer time.

Reference: https://docs.microsoft.com/en-us/azure/data-factory/monitor-using-azure-monitor

You are monitoring an Azure Stream Analytics job.

The Backlogged Input Events count has been 20 for the last hour.

You need to reduce the Backlogged Input Events count.

What should you do?

A. Drop late arriving events from the job.
B. Add an Azure Storage account to the job.
C. Increase the streaming units for the job.
D. Stop the job.
Suggested answer: C

Explanation:

General symptoms of the job hitting system resource limits include:

If the backlog event metric keeps increasing, it's an indicator that the system resource is constrained (either because of output sink throttling or high CPU).

Note: Backlogged Input Events is the number of input events that are backlogged. A non-zero value for this metric implies that your job isn't able to keep up with the number of incoming events. If this value is slowly increasing or consistently non-zero, you should scale out your job by adjusting the Streaming Units.

Reference: https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-scale-jobs https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-monitoring

You are designing an Azure Databricks interactive cluster. The cluster will be used infrequently and will be configured for auto-termination. You need to ensure that the cluster configuration is retained indefinitely after the cluster is terminated. The solution must minimize costs. What should you do?

A. Pin the cluster.
B. Create an Azure runbook that starts the cluster every 90 days.
C. Terminate the cluster manually when processing completes.
D. Clone the cluster after it is terminated.
Suggested answer: A

Explanation:

Azure Databricks retains cluster configuration information for up to 70 all-purpose clusters terminated in the last 30 days and up to 30 job clusters recently terminated by the job scheduler. To keep an all-purpose cluster configuration even after it has been terminated for more than 30 days, an administrator can pin a cluster to the cluster list.

Reference:

https://docs.microsoft.com/en-us/azure/databricks/clusters/

You have an Azure data solution that contains an enterprise data warehouse in Azure Synapse Analytics named DW1. Several users execute ad hoc queries to DW1 concurrently.

You regularly perform automated data loads to DW1.

You need to ensure that the automated data loads have enough memory available to complete quickly and successfully when the ad hoc queries run. What should you do?

A. Hash distribute the large fact tables in DW1 before performing the automated data loads.
B. Assign a smaller resource class to the automated data load queries.
C. Assign a larger resource class to the automated data load queries.
D. Create sampled statistics for every column in each table of DW1.
Suggested answer: C

Explanation:

The performance capacity of a query is determined by the user's resource class. Resource classes are pre-determined resource limits in Synapse SQL pool that govern compute resources and concurrency for query execution. Resource classes can help you configure resources for your queries by setting limits on the number of queries that run concurrently and on the compute-resources assigned to each query. There's a trade-off between memory and concurrency. Smaller resource classes reduce the maximum memory per query, but increase concurrency. Larger resource classes increase the maximum memory per query, but reduce concurrency.
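For example (a sketch; 'LoadUser' is an illustrative user name), the user that runs the automated loads can be added to a larger static resource class with sp_addrolemember, the documented way to change a user's resource class in a dedicated SQL pool:

-- Give the load user more memory per query by adding it to a larger
-- static resource class (run in the dedicated SQL pool, DW1).
EXEC sp_addrolemember 'largerc', 'LoadUser';

-- To move the user back to the default class later:
EXEC sp_droprolemember 'largerc', 'LoadUser';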

Reference:

https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/resource-classes-for-workload-management

You have an Azure Synapse Analytics dedicated SQL pool named Pool1 and a database named DB1. DB1 contains a fact table named Table1. You need to identify the extent of the data skew in Table1.

What should you do in Synapse Studio?

A. Connect to the built-in pool and run DBCC PDW_SHOWSPACEUSED.
B. Connect to the built-in pool and run DBCC CHECKALLOC.
C. Connect to Pool1 and query sys.dm_pdw_node_status.
D. Connect to Pool1 and query sys.dm_pdw_nodes_db_partition_stats.
Suggested answer: D

Explanation:

Microsoft recommends use of sys.dm_pdw_nodes_db_partition_stats to analyze any skewness in the data.
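As an illustration (a sketch that assumes Table1 is in the dbo schema; the join pattern mirrors the table-size examples in the Synapse documentation), the following query returns the row count per distribution so the spread, and therefore the skew, is visible:

-- Rows per distribution for Table1; a large gap between the highest and
-- lowest counts indicates data skew.
SELECT ps.distribution_id,
       SUM(ps.row_count) AS row_count
FROM sys.tables AS t
JOIN sys.pdw_table_mappings AS tm
    ON t.object_id = tm.object_id
JOIN sys.pdw_nodes_tables AS nt
    ON tm.physical_name = nt.name
JOIN sys.dm_pdw_nodes_db_partition_stats AS ps
    ON nt.object_id = ps.object_id
   AND nt.pdw_node_id = ps.pdw_node_id
   AND nt.distribution_id = ps.distribution_id
WHERE t.name = 'Table1'
GROUP BY ps.distribution_id
ORDER BY SUM(ps.row_count) DESC;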

Reference:

https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/cheat-sheet

You have a SQL pool in Azure Synapse.

You discover that some queries fail or take a long time to complete. You need to monitor for transactions that have rolled back.

Which dynamic management view should you query?

A. sys.dm_pdw_request_steps
B. sys.dm_pdw_nodes_tran_database_transactions
C. sys.dm_pdw_waits
D. sys.dm_pdw_exec_sessions
Suggested answer: B

Explanation:

You can use Dynamic Management Views (DMVs) to monitor your workload including investigating query execution in SQL pool. If your queries are failing or taking a long time to proceed, you can check and monitor if you have any transactions rolling back. Example:

-- Monitor rollback
SELECT SUM(CASE WHEN t.database_transaction_next_undo_lsn IS NOT NULL THEN 1 ELSE 0 END) AS undo_count,
       t.pdw_node_id,
       nod.[type]
FROM sys.dm_pdw_nodes_tran_database_transactions AS t
JOIN sys.dm_pdw_nodes AS nod
    ON t.pdw_node_id = nod.pdw_node_id
GROUP BY t.pdw_node_id, nod.[type];

Reference:

https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor#monitor-transaction-log-rollback

You are monitoring an Azure Stream Analytics job.

You discover that the Backlogged Input Events metric is increasing slowly and is consistently non-zero. You need to ensure that the job can handle all the events.

What should you do?

A. Change the compatibility level of the Stream Analytics job.
B. Increase the number of streaming units (SUs).
C. Remove any named consumer groups from the connection and use $default.
D. Create an additional output stream for the existing input stream.
Suggested answer: B

Explanation:

Backlogged Input Events: Number of input events that are backlogged. A non-zero value for this metric implies that your job isn't able to keep up with the number of incoming events. If this value is slowly increasing or consistently non-zero, you should scale out your job. You should increase the Streaming Units. Note: Streaming Units (SUs) represents the computing resources that are allocated to execute a Stream Analytics job. The higher the number of SUs, the more CPU and memory resources are allocated for your job.

Reference:

https://docs.microsoft.com/bs-cyrl-ba/azure/stream-analytics/stream-analytics-monitoring

You are designing an inventory updates table in an Azure Synapse Analytics dedicated SQL pool. The table will have a clustered columnstore index and will include the following columns:

You identify the following usage patterns:

Analysts will most commonly analyze transactions for a warehouse. Queries will summarize by product category type, date, and/or inventory event type.

You need to recommend a partition strategy for the table to minimize query times.

On which column should you partition the table?

A. EventTypeID
B. ProductCategoryTypeID
C. EventDate
D. WarehouseID
Suggested answer: D

Explanation:

The number of records for each warehouse is large enough for effective partitioning.

Note: Table partitions enable you to divide your data into smaller groups of data. In most cases, table partitions are created on a date column. When creating partitions on clustered columnstore tables, it is important to consider how many rows belong to each partition. For optimal compression and performance of clustered columnstore tables, a minimum of 1 million rows per distribution and partition is needed. Before partitions are created, dedicated SQL pool already divides each table into 60 distributed databases.
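A hedged sketch of such a table definition (the column list, data types, boundary values, and distribution column are illustrative, since the original column table is not reproduced above):

-- Partition on WarehouseID so queries that analyze a single warehouse can
-- eliminate the other partitions; keep each partition large enough for
-- healthy columnstore compression (about 1 million rows per distribution
-- and partition).
CREATE TABLE dbo.InventoryUpdates
(
    WarehouseID           int  NOT NULL,
    EventDate             date NOT NULL,
    EventTypeID           int  NOT NULL,
    ProductCategoryTypeID int  NOT NULL,
    Quantity              int  NOT NULL
)
WITH
(
    CLUSTERED COLUMNSTORE INDEX,
    DISTRIBUTION = HASH(ProductCategoryTypeID),  -- illustrative distribution column
    PARTITION (WarehouseID RANGE RIGHT FOR VALUES (100, 200, 300, 400))
);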

You are designing a star schema for a dataset that contains records of online orders. Each record includes an order date, an order due date, and an order ship date. You need to ensure that the design provides the fastest query times of the records when querying for arbitrary date ranges and aggregating by fiscal calendar attributes. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

A. Create a date dimension table that has a DateTime key.
B. Use built-in SQL functions to extract date attributes.
C. Create a date dimension table that has an integer key in the format of YYYYMMDD.
D. In the fact table, use integer columns for the date fields.
E. Use DateTime columns for the date fields.
Suggested answer: C, D

Explanation:

An integer surrogate key in the form YYYYMMDD keeps the date dimension and the fact table's date columns compact and efficient to join, and it lets queries filter arbitrary date ranges while picking up fiscal calendar attributes (fiscal year, quarter, period) from the dimension instead of computing them with functions at query time.
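A minimal sketch of this design, with illustrative table and column names:

-- Date dimension keyed by an integer in YYYYMMDD form; fiscal calendar
-- attributes live on the dimension.
CREATE TABLE dbo.DimDate
(
    DateKey       int      NOT NULL,  -- e.g. 20240301
    CalendarDate  date     NOT NULL,
    FiscalYear    smallint NOT NULL,
    FiscalQuarter tinyint  NOT NULL
);

-- Fact table stores each date role as an integer key into DimDate.
CREATE TABLE dbo.FactOnlineOrders
(
    OrderDateKey int NOT NULL,
    DueDateKey   int NOT NULL,
    ShipDateKey  int NOT NULL,
    SalesAmount  decimal(18, 2) NOT NULL
);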


A company purchases IoT devices to monitor manufacturing machinery. The company uses an Azure IoT Hub to communicate with the IoT devices. The company must be able to monitor the devices in real-time. You need to design the solution.

What should you recommend?

A. Azure Analysis Services using Azure Portal
B. Azure Analysis Services using Azure PowerShell
C. Azure Stream Analytics cloud job using Azure Portal
D. Azure Data Factory instance using Azure Portal
Suggested answer: C

Explanation:

Stream Analytics is a cost-effective event processing engine that helps uncover real-time insights from devices, sensors, infrastructure, applications, and data quickly and easily. Monitor and manage Stream Analytics resources with Azure PowerShell cmdlets and PowerShell scripting that execute basic Stream Analytics tasks.

Reference:

https://cloudblogs.microsoft.com/sqlserver/2014/10/29/microsoft-adds-iot-streaming-analytics-data-production-and-workflow-services-to-azure/
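As an illustration of the real-time processing layer, a Stream Analytics job query might look like the following sketch (the input, output, and field names are hypothetical):

-- Read telemetry from an IoT Hub input and emit a per-device average
-- every minute to a real-time monitoring output.
SELECT
    deviceId,
    AVG(temperature) AS avgTemperature,
    System.Timestamp() AS windowEnd
INTO MonitoringDashboardOutput
FROM IoTHubInput TIMESTAMP BY eventTime
GROUP BY deviceId, TumblingWindow(minute, 1)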

