ExamGecko

Microsoft DP-203 Practice Test - Questions Answers, Page 5


Question 41


You have a SQL pool in Azure Synapse.

A user reports that queries against the pool take longer than expected to complete. You determine that the issue relates to queried columnstore segments. You need to add monitoring to the underlying storage to help diagnose the issue. Which two metrics should you monitor? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

A. Snapshot Storage Size
B. Cache used percentage
C. DWU Limit
D. Cache hit percentage
Suggested answer: B, D

Explanation:

D: Cache hit percentage: (cache hits / cache miss) * 100, where cache hits is the sum of all columnstore segment hits in the local SSD cache and cache miss is the sum of all columnstore segment misses in the local SSD cache, summed across all nodes.

B: Cache used percentage: (cache used / cache capacity) * 100, where cache used is the sum of all bytes in the local SSD cache across all nodes and cache capacity is the sum of the storage capacity of the local SSD cache across all nodes.

Incorrect Answers:

C: DWU limit: Service level objective of the data warehouse.

Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-concept-resource-utilization-query-activity
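A minimal Python sketch of how these two ratios can be computed from raw counters (the values are hypothetical; note that the hit ratio is conventionally taken as hits over total lookups, whereas the quoted documentation text divides hits by misses):

```python
# Illustrative sketch, not the Azure Monitor API: deriving the two cache
# metrics from hypothetical raw counters summed across all nodes.

def cache_hit_percentage(cache_hits: int, cache_misses: int) -> float:
    """Share of columnstore segment reads served from the local SSD cache."""
    total = cache_hits + cache_misses
    return 0.0 if total == 0 else cache_hits / total * 100

def cache_used_percentage(cache_used_bytes: int, cache_capacity_bytes: int) -> float:
    """Share of the local SSD cache capacity currently filled."""
    return cache_used_bytes / cache_capacity_bytes * 100

# Hypothetical counters:
print(round(cache_hit_percentage(900, 100), 1))                   # 90.0
print(round(cache_used_percentage(80 * 2**30, 100 * 2**30), 1))   # 80.0
```

A low cache hit percentage alongside a high cache used percentage is the pattern that points at the columnstore-segment issue described in the question.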

asked 02/10/2024
Winston Seedorf

Question 42


You manage an enterprise data warehouse in Azure Synapse Analytics. Users report slow performance when they run commonly used queries. Users do not report performance changes for infrequently used queries. You need to monitor resource utilization to determine the source of the performance issues. Which metric should you monitor?

A. DWU percentage
B. Cache hit percentage
C. DWU limit
D. Data IO percentage
Suggested answer: B

Explanation:

Monitor and troubleshoot slow query performance by determining whether your workload is optimally leveraging the adaptive cache for dedicated SQL pools.

Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-monitor-cache

asked 02/10/2024
David Clark

Question 43


You have an Azure Databricks resource.

You need to log actions that relate to changes in compute for the Databricks resource. Which Databricks services should you log?

A. clusters
B. workspace
C. DBFS
D. SSH
E. jobs
Suggested answer: B

Explanation:

Databricks provides access to audit logs of activities performed by Databricks users, allowing your enterprise to monitor detailed Databricks usage patterns. There are two types of logs:

Workspace-level audit logs with workspace-level events.
Account-level audit logs with account-level events.

Reference: https://docs.databricks.com/administration-guide/account-settings/audit-logs.html

asked 02/10/2024
Justin NJOCK

Question 44


You are designing a highly available Azure Data Lake Storage solution that will include geo-zone-redundant storage (GZRS). You need to monitor for replication delays that can affect the recovery point objective (RPO). What should you include in the monitoring solution?

A. 5xx: Server Error errors
B. Average Success E2E Latency
C. availability
D. Last Sync Time
Suggested answer: D

Explanation:

Because geo-replication is asynchronous, it is possible that data written to the primary region has not yet been written to the secondary region at the time an outage occurs. The Last Sync Time property indicates the last time that data from the primary region was written successfully to the secondary region. All writes made to the primary region before the last sync time are available to be read from the secondary location. Writes made to the primary region after the last sync time property may or may not be available for reads yet.

Reference:

https://docs.microsoft.com/en-us/azure/storage/common/last-sync-time-get
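The RPO exposure described above can be estimated as the gap between the current time and the Last Sync Time. A hedged sketch with hypothetical timestamps (in practice, the value comes from the storage account's geo-replication statistics):

```python
# Illustrative sketch: the RPO window is the span of writes that may not yet
# have replicated to the secondary region. Timestamps are hypothetical.
from datetime import datetime, timedelta, timezone

def rpo_window(last_sync_time: datetime, now: datetime) -> timedelta:
    """Writes made after last_sync_time may not be readable from the secondary."""
    return now - last_sync_time

now = datetime(2024, 10, 2, 12, 0, tzinfo=timezone.utc)
last_sync = datetime(2024, 10, 2, 11, 45, tzinfo=timezone.utc)
print(rpo_window(last_sync, now))  # 0:15:00
```

Alerting when this window exceeds the target RPO is what the monitoring solution in the question is meant to provide.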

asked 02/10/2024
MARCOS ALAMOS

Question 45


You configure monitoring for an Azure Synapse Analytics implementation. The implementation uses PolyBase to load data from comma-separated value (CSV) files stored in Azure Data Lake Storage Gen2 using an external table. Files with an invalid schema cause errors to occur.

You need to monitor for an invalid schema error.

For which error should you monitor?

A. EXTERNAL TABLE access failed due to internal error: 'Java exception raised on call to HdfsBridge_Connect: Error [com.microsoft.polybase.client.KerberosSecureLogin] occurred while accessing external file.'
B. Cannot execute the query "Remote Query" against OLE DB provider "SQLNCLI11" for linked server "(null)". Query aborted-- the maximum reject threshold (0 rows) was reached while reading from an external source: 1 rows rejected out of total 1 rows processed.
C. EXTERNAL TABLE access failed due to internal error: 'Java exception raised on call to HdfsBridge_Connect: Error [Unable to instantiate LoginClass] occurred while accessing external file.'
D. EXTERNAL TABLE access failed due to internal error: 'Java exception raised on call to HdfsBridge_Connect: Error [No FileSystem for scheme: wasbs] occurred while accessing external file.'
Suggested answer: B

Explanation:

Error message: Cannot execute the query "Remote Query"

Possible Reason:

This error occurs because the files have different schemas. When pointed to a directory, the PolyBase external table DDL recursively reads all the files in that directory. When a column or data type mismatch occurs, this error appears in SSMS.

Reference:

https://docs.microsoft.com/en-us/sql/relational-databases/polybase/polybase-errors-and-possible-solutions
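The reject-threshold behavior behind this error can be illustrated with a small sketch (this is a simplified model of the mechanism, not PolyBase itself): with a reject threshold of 0 rows, the first row that fails schema validation aborts the load, which surfaces as the "Remote Query" error above.

```python
# Illustrative sketch of a PolyBase-style reject threshold (hypothetical
# helper, not PolyBase code): rows whose column count does not match the
# external table schema are rejected, and exceeding the threshold aborts.

def load_rows(rows, expected_columns: int, reject_threshold: int = 0):
    rejected = 0
    loaded = []
    for row in rows:
        if len(row) != expected_columns:  # schema (column-count) mismatch
            rejected += 1
            if rejected > reject_threshold:
                raise RuntimeError(
                    f"maximum reject threshold ({reject_threshold} rows) "
                    f"reached: {rejected} rows rejected"
                )
        else:
            loaded.append(row)
    return loaded

print(load_rows([("1", "a"), ("2", "b")], expected_columns=2))  # both rows load
```

A single malformed row, e.g. `load_rows([("1",)], expected_columns=2)`, raises immediately, mirroring "the maximum reject threshold (0 rows) was reached".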

asked 02/10/2024
Innos Phoku

Question 46


You have an Azure Synapse Analytics dedicated SQL pool.

You run DBCC PDW_SHOWSPACEUSED('dbo.FactInternetSales'); and get the results shown in the following table.

[Image: PDW_SHOWSPACEUSED results for dbo.FactInternetSales]

Which statement accurately describes the dbo.FactInternetSales table?

A. All distributions contain data.
B. The table contains less than 10,000 rows.
C. The table uses round-robin distribution.
D. The table is skewed.
Suggested answer: D
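Skew can be quantified from the per-distribution row counts that PDW_SHOWSPACEUSED reports (one row per distribution). A hedged sketch with hypothetical counts:

```python
# Illustrative sketch (hypothetical counts): a table is skewed when row counts
# vary widely across distributions rather than being spread evenly.

def skew_percentage(row_counts: list[int]) -> float:
    """0% when rows are evenly spread; near 100% when one distribution dominates."""
    largest = max(row_counts)
    return (largest - min(row_counts)) / largest * 100

even = [1000] * 60               # balanced across all 60 distributions
skewed = [50000] + [100] * 59    # one distribution holds nearly everything
print(round(skew_percentage(even), 1))    # 0.0
print(round(skew_percentage(skewed), 1))  # 99.8
```

In the question's results table, the wide spread between the largest and smallest distribution is what identifies answer D.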
asked 02/10/2024
Borat Kajratov

Question 47


You have two fact tables named Flight and Weather. Queries targeting the tables will be based on the join between the following columns.

[Image: join columns for the Flight and Weather fact tables]

You need to recommend a solution that maximizes query performance.

What should you include in the recommendation?

A. In the tables use a hash distribution of ArrivalDateTime and ReportDateTime.
B. In the tables use a hash distribution of ArrivalAirportID and AirportID.
C. In each table, create an IDENTITY column.
D. In each table, create a column as a composite of the other two columns in the table.
Suggested answer: B

Explanation:

Hash distribution improves query performance on large fact tables.

Incorrect Answers:

A: Do not use a date column for hash distribution. All data for the same date lands in the same distribution. If several users are all filtering on the same date, then only 1 of the 60 distributions does all the processing work.
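The effect of hashing on a date column versus an ID column can be simulated (the hash here is Python's built-in, standing in for Synapse's internal hash; the 60-distribution count matches a dedicated SQL pool):

```python
# Illustrative sketch: hashing a date sends every row for that date to one
# distribution, while hashing an ID spreads rows across all 60 distributions.
DISTRIBUTIONS = 60

def distribution_for(value) -> int:
    # Stand-in for the distribution hash; not Synapse's actual function.
    return hash(value) % DISTRIBUTIONS

# 10,000 flights arriving on the same date all land in a single distribution:
same_day = {distribution_for("2024-10-02") for _ in range(10_000)}
print(len(same_day))   # 1

# 10,000 distinct airport IDs spread across the distributions:
by_airport = {distribution_for(airport_id) for airport_id in range(10_000)}
print(len(by_airport)) # 60
```

This is why answer B (hashing on ArrivalAirportID and AirportID) maximizes parallelism while a datetime key would not.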

asked 02/10/2024
Péter Szittya

Question 48


You have several Azure Data Factory pipelines that contain a mix of the following types of activities:

Wrangling data flow

Notebook

Copy

Jar

Which two Azure services should you use to debug the activities? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

A. Azure Synapse Analytics
B. Azure HDInsight
C. Azure Machine Learning
D. Azure Data Factory
E. Azure Databricks
Suggested answer: B, D
asked 02/10/2024
DHANANJAY TIWARI

Question 49


You have an Azure Synapse Analytics dedicated SQL pool named Pool1 and a database named DB1. DB1 contains a fact table named Table1. You need to identify the extent of the data skew in Table1.

What should you do in Synapse Studio?

A. Connect to the built-in pool and run sys.dm_pdw_nodes_db_partition_stats.
B. Connect to Pool1 and run DBCC CHECKALLOC.
C. Connect to the built-in pool and run DBCC CHECKALLOC.
D. Connect to Pool1 and query sys.dm_pdw_nodes_db_partition_stats.
Suggested answer: D

Explanation:

Microsoft recommends using sys.dm_pdw_nodes_db_partition_stats to analyze any skew in the data.

Reference:

https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-partition-stats-transact-sql
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/cheat-sheet

asked 02/10/2024
So young Jang

Question 50


A company purchases IoT devices to monitor manufacturing machinery. The company uses an Azure IoT Hub to communicate with the IoT devices. The company must be able to monitor the devices in real-time. You need to design the solution.

What should you recommend?

A. Azure Data Factory instance using Azure Portal
B. Azure Data Factory instance using Azure PowerShell
C. Azure Stream Analytics cloud job using Azure Portal
D. Azure Data Factory instance using Microsoft Visual Studio
Suggested answer: C

Explanation:

Azure Stream Analytics can process streaming telemetry from Azure IoT Hub in real time, and a cloud job can be created and configured directly in the Azure portal. Azure Data Factory is a batch-oriented orchestration service and is not designed for real-time monitoring.
asked 02/10/2024
Karol Ligęza