
Microsoft DP-203 Practice Test - Questions Answers, Page 28

You have the Azure Synapse Analytics pipeline shown in the following exhibit.

You need to add a set variable activity to the pipeline to ensure that after the pipeline’s completion, the status of the pipeline is always successful.

What should you configure for the set variable activity?

A. a success dependency on the Business Activity That Fails activity
B. a failure dependency on the Upon Failure activity
C. a skipped dependency on the Upon Success activity
D. a skipped dependency on the Upon Failure activity
Suggested answer: B

Explanation:

A failure dependency means that an activity runs only if the preceding activity fails. Setting a failure dependency on the Upon Failure activity ensures that the set variable activity runs when the failure path is taken, so the pipeline completes with a status of successful.



You are building a data flow in Azure Data Factory that upserts data into a table in an Azure Synapse Analytics dedicated SQL pool. You need to add a transformation to the data flow. The transformation must specify logic indicating when a row from the input data must be upserted into the sink.

Which type of transformation should you add to the data flow?

A. join
B. select
C. surrogate key
D. alter row
Suggested answer: D

Explanation:

The alter row transformation allows you to specify insert, update, delete, and upsert policies on rows based on expressions. You can use the alter row transformation to perform upserts on a sink table by matching on a key column and setting the appropriate row policy.
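As a rough illustration, the effect of an upsert row policy on the sink can be thought of as a T-SQL MERGE; the table and column names below are hypothetical:

```sql
-- Hypothetical sketch: what an upsert policy effectively does at the sink.
-- dbo.DimCustomer, stg.CustomerUpdates, and the columns are assumed names.
MERGE dbo.DimCustomer AS tgt
USING stg.CustomerUpdates AS src
    ON tgt.CustomerKey = src.CustomerKey
WHEN MATCHED THEN
    UPDATE SET tgt.Name  = src.Name,
               tgt.Email = src.Email
WHEN NOT MATCHED THEN
    INSERT (CustomerKey, Name, Email)
    VALUES (src.CustomerKey, src.Name, src.Email);
```

In the data flow itself, this policy is configured in the alter row transformation's settings rather than written as SQL.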

HOTSPOT

You are incrementally loading data into fact tables in an Azure Synapse Analytics dedicated SQL pool.

Each batch of incoming data is staged before being loaded into the fact tables.

You need to ensure that the incoming data is staged as quickly as possible.

How should you configure the staging tables? To answer, select the appropriate options in the answer area.

Question 273 (hotspot exhibit and correct-answer image not shown)

Explanation:

Round-robin distribution is recommended for staging tables because it distributes data evenly across all the distributions without requiring a hash column. This can improve the speed of data loading and avoid data skew. Heap tables are recommended for staging tables because they do not have any indexes or partitions that can slow down the data loading process. Heap tables are also easier to truncate and reload than clustered index or columnstore index tables.
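A staging table combining both recommendations might be declared as follows (the schema and column names are illustrative):

```sql
-- Illustrative staging table for a dedicated SQL pool:
-- round-robin distribution plus a heap (no index) for the fastest loads.
CREATE TABLE stg.FactSales_Staging
(
    SaleId     INT            NOT NULL,
    CustomerId INT            NOT NULL,
    SaleDate   DATE           NOT NULL,
    Amount     DECIMAL(18, 2) NOT NULL
)
WITH
(
    DISTRIBUTION = ROUND_ROBIN,
    HEAP
);
```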

DRAG DROP

You have an Azure Synapse Analytics serverless SQL pool.

You have an Azure Data Lake Storage account named adls1 that contains a public container named container1. The container1 container contains a folder named folder1.

You need to query the top 100 rows of all the CSV files in folder1.

How should you complete the query? To answer, drag the appropriate values to the correct targets.

Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

Question 274 (exhibit and correct-answer image not shown)
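Although the exhibit is not shown, a serverless SQL pool query of this shape would return the top 100 rows of the CSV files in the folder; the storage endpoint URL is assumed from the account, container, and folder names in the question:

```sql
-- Sketch of a serverless SQL pool query over CSV files in ADLS Gen2.
-- The endpoint URL is an assumption based on the names in the question.
SELECT TOP 100 *
FROM OPENROWSET(
    BULK 'https://adls1.dfs.core.windows.net/container1/folder1/*.csv',
    FORMAT = 'CSV',
    PARSER_VERSION = '2.0',
    HEADER_ROW = TRUE
) AS [result];
```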

HOTSPOT

You have an Azure Synapse Analytics dedicated SQL pool.

You need to monitor the database for long-running queries and identify which queries are waiting on resources.

Which dynamic management view should you use for each requirement? To answer, select the appropriate options in the answer area.

NOTE: Each correct answer is worth one point.

Question 275 (correct-answer image not shown)
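The answer image is not shown, but the two dedicated SQL pool dynamic management views that typically address these requirements can be queried as follows:

```sql
-- Long-running queries: sys.dm_pdw_exec_requests tracks active requests.
SELECT request_id, status, submit_time, total_elapsed_time, command
FROM sys.dm_pdw_exec_requests
WHERE status = 'Running'
ORDER BY total_elapsed_time DESC;

-- Resource waits: sys.dm_pdw_waits shows what each request is waiting on.
SELECT request_id, type, object_name, state
FROM sys.dm_pdw_waits;
```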

HOTSPOT

You have an Azure subscription that contains the resources shown in the following table.

You need to ensure that you can run Spark notebooks in ws1. The solution must ensure that the notebooks can access secrets from kv1 by using UAMI1. What should you do? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Question 276 (exhibit and correct-answer image not shown)

You have an Azure Synapse Analytics dedicated SQL pool.

You need to create a pipeline that will execute a stored procedure in the dedicated SQL pool and use the returned result set as the input for a downstream activity. The solution must minimize development effort.

Which type of activity should you use in the pipeline?

A. Notebook
B. U-SQL
C. Script
D. Stored Procedure
Suggested answer: D

You have an Azure subscription that contains an Azure Synapse Analytics workspace named ws1 and an Azure Cosmos DB database account named Cosmos1. Cosmos1 contains a container named container1, and ws1 contains a serverless SQL pool named serverless1.

You need to ensure that you can query the data in container1 by using the serverless1 SQL pool.

Which three actions should you perform? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. Enable Azure Synapse Link for Cosmos1.
B. Disable the analytical store for container1.
C. In ws1, create a linked service that references Cosmos1.
D. Enable the analytical store for container1.
E. Disable indexing for container1.
Suggested answer: A, C, D
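With Synapse Link and the analytical store enabled, the serverless SQL pool can read container1 with a query of roughly this shape; the database name and account key below are hypothetical placeholders:

```sql
-- Sketch: querying the Cosmos DB analytical store from a serverless SQL pool.
-- 'db1' and the account key are assumed placeholders, not values from the question.
SELECT TOP 100 *
FROM OPENROWSET(
    'CosmosDB',
    'Account=Cosmos1;Database=db1;Key=<account-key>',
    container1
) AS docs;
```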

HOTSPOT

You are developing an Azure Synapse Analytics pipeline that will include a mapping data flow named Dataflow1. Dataflow1 will read customer data from an external source and use a Type 1 slowly changing dimension (SCD) when loading the data into a table named DimCustomer1 in an Azure Synapse Analytics dedicated SQL pool.

You need to ensure that Dataflow1 can perform the following tasks:

* Detect whether the data of a given customer has changed in the DimCustomer table.

* Perform an upsert to the DimCustomer table.

Which type of transformation should you use for each task? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Question 279 (correct-answer image not shown)

You have an Azure data factory named ADM that contains a pipeline named Pipeline1. Pipeline1 must execute every 30 minutes with a 15-minute offset.

You need to create a trigger for Pipeline1. The trigger must meet the following requirements:

* Backfill data from the beginning of the day to the current time.

* If Pipeline1 fails, ensure that the pipeline can re-execute within the same 30-minute period.

* Ensure that only one concurrent pipeline execution can occur.

* Minimize development and configuration effort.

Which type of trigger should you create?

A. schedule
B. event-based
C. manual
D. tumbling window
Suggested answer: D
Total 320 questions