
Microsoft DP-203 Practice Test - Questions Answers, Page 30

Question 291

HOTSPOT

You have an Azure subscription that contains an Azure Cosmos DB analytical store and an Azure Synapse Analytics workspace named WS1. WS1 has a serverless SQL pool named Pool1.

You execute the following query by using Pool1.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.


Correct answer: (answer image not included)
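The query referenced above is not reproduced on this page. For context only, a serverless SQL pool typically reads a Cosmos DB analytical store through OPENROWSET with the CosmosDB provider, as in the sketch below; the account, database, key, and container names are placeholders, not the values from the actual exam item.

-- Minimal sketch: query a Cosmos DB analytical store from a serverless SQL pool.
-- Account, Database, Key, and the container name (Container1) are placeholders.
SELECT TOP 10 *
FROM OPENROWSET(
    'CosmosDB',
    'Account=myCosmosAccount;Database=myDatabase;Key=myAccountKey',
    Container1
) AS rows;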

Question 292

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Data Lake Storage account that contains a staging zone.

You need to design a daily process to ingest incremental data from the staging zone, transform the data by executing an R script, and then insert the transformed data into a data warehouse in Azure Synapse Analytics.

Solution: You use an Azure Data Factory schedule trigger to execute a pipeline that executes a mapping data flow, and then inserts the data into the data warehouse.

Does this meet the goal?

A. Yes

B. No
Suggested answer: B

Explanation:

A mapping data flow cannot execute an R script, so this solution does not meet the goal.

Question 293

You have an Azure Synapse Analytics dedicated SQL pool.

You plan to create a fact table named Table1 that will contain a clustered columnstore index.

You need to optimize data compression and query performance for Table1.

What is the minimum number of rows that Table1 should contain before you create partitions?

A. 100,000

B. 600,000

C. 1 million

D. 60 million
Suggested answer: D

Explanation:

A clustered columnstore index compresses rows in rowgroups of about 1 million rows, and a dedicated SQL pool spreads every table across 60 distributions. Each partition should therefore hold at least 60 million rows (1 million rows x 60 distributions) before you create partitions.
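For reference, a minimal sketch of a partitioned, hash-distributed fact table with a clustered columnstore index; the column names, data types, and partition boundaries are placeholders. With three boundary values (four partitions) and 60 distributions, the table would ideally hold roughly 240 million rows.

-- Hypothetical fact table; names and boundary values are illustrative only.
CREATE TABLE dbo.Table1
(
    SaleKey       bigint         NOT NULL,
    CustomerKey   int            NOT NULL,
    OrderDateKey  int            NOT NULL,
    Amount        decimal(18, 2) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(SaleKey),
    CLUSTERED COLUMNSTORE INDEX,
    PARTITION ( OrderDateKey RANGE RIGHT FOR VALUES (20240101, 20240201, 20240301) )
);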

Question 294

You have an Azure subscription that contains an Azure Data Factory data pipeline named Pipeline1, a Log Analytics workspace named LA1, and a storage account named account1.

You need to retain pipeline-run data for 90 days. The solution must meet the following requirements:

* The pipeline-run data must be removed automatically after 90 days.

* Ongoing costs must be minimized.

Which two actions should you perform? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. Configure Pipeline1 to send logs to LA1.

B. From the Diagnostic settings (classic) of account1, set the retention period to 90 days.

C. Configure Pipeline1 to send logs to account1.

D. From the Data Retention settings of LA1, set the data retention period to 90 days.
Suggested answer: A, B

Question 295

HOTSPOT

In Azure Data Factory, you have a schedule trigger that is scheduled in Pacific Time.

Pacific Time observes daylight saving time.

The trigger has the following JSON file.

Use the drop-down menus to select the answer choice that completes each statement based on the information presented.

NOTE: Each correct selection is worth one point.


Correct answer: (answer image not included)

Question 296

DRAG DROP

You have an Azure Synapse Analytics dedicated SQL pool named SQL1 that contains a hash-distributed fact table named Table1.

You need to recreate Table1 and add a new distribution column. The solution must maximize the availability of data.

Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Correct answer: (answer image not included)

Explanation:

1. Drop the indexes of Table1.
2. Create a new table named Table1v2 by running CTAS.
3. Rename Table1 as Table1_old.
4. Rename Table1v2 as Table1.
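A minimal T-SQL sketch of the CTAS-and-rename portion of this sequence, assuming the new distribution column is a hypothetical column named CustomerKey and that Table1 keeps a clustered columnstore index; the rest of the schema is carried over unchanged by SELECT *.

-- Recreate Table1 with a new hash-distribution column (CustomerKey is a placeholder name).
CREATE TABLE dbo.Table1v2
WITH
(
    DISTRIBUTION = HASH(CustomerKey),
    CLUSTERED COLUMNSTORE INDEX
)
AS
SELECT * FROM dbo.Table1;

-- Swap the tables so that queries keep resolving the name Table1.
RENAME OBJECT dbo.Table1 TO Table1_old;
RENAME OBJECT dbo.Table1v2 TO Table1;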


Question 297

You have an Azure data factory that connects to a Microsoft Purview account. The data factory is registered in Microsoft Purview.

You update a Data Factory pipeline.

You need to ensure that the updated lineage is available in Microsoft Purview.

What should you do first?

A. Disconnect the Microsoft Purview account from the data factory.

B. Locate the related asset in the Microsoft Purview portal.

C. Execute an Azure DevOps build pipeline.

D. Execute the pipeline.
Suggested answer: D

Question 298

You have an Azure Synapse Analytics dedicated SQL pool named Pool1.

Pool1 contains two tables named SalesFact_Staging and SalesFact. Both tables have a matching number of partitions, all of which contain data.

You need to load data from SalesFact_Staging to SalesFact by switching a partition.

What should you specify when running the ALTER TABLE statement?

A. WITH NOCHECK

B. WITH (TRUNCATE_TARGET = ON)

C. WITH (TRACK_COLUMNS_UPDATED = ON)

D. WITH CHECK
Suggested answer: B
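A minimal sketch of the partition switch described by the correct option; the partition number 1 is a placeholder for whichever source and target partitions are being switched.

-- Move a partition of the staging table into the matching partition of SalesFact.
-- TRUNCATE_TARGET = ON lets the switch overwrite a target partition that already contains data.
ALTER TABLE dbo.SalesFact_Staging
SWITCH PARTITION 1 TO dbo.SalesFact PARTITION 1
WITH (TRUNCATE_TARGET = ON);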

Question 299

You have an Azure subscription that contains an Azure Synapse Analytics workspace and a user named User1.

You need to ensure that User1 can review the Azure Synapse Analytics database templates from the gallery. The solution must follow the principle of least privilege.

Which role should you assign to User1?

A. Synapse User

B. Synapse Contributor

C. Storage Blob Data Contributor

D. Synapse Administrator
Suggested answer: A

Question 300

HOTSPOT

You have Azure Data Factory configured with Azure Repos Git integration. The collaboration branch and the publish branch are set to the default values.

You have a pipeline named pipeline1.

You build a new version of pipeline1 in a branch named feature1.

From the Data Factory Studio, you select Publish.

The source code of which branch will be built, and which branch will contain the output of the Azure Resource Manager (ARM) template? To answer, select the appropriate options in the answer area.


Correct answer: (answer image not included)