
Microsoft DP-203 Practice Test - Questions Answers, Page 22

DRAG DROP

You have an Azure subscription that contains an Azure Synapse Analytics workspace named workspace1. Workspace1 connects to an Azure DevOps repository named repo1. Repo1 contains a collaboration branch named main and a development branch named branch1. Branch1 contains an Azure Synapse pipeline named pipeline1. In workspace1, you complete testing of pipeline1.

You need to schedule pipeline1 to run daily at 6 AM.

Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select.

Question 211
Correct answer: [drag-drop answer shown as an image; not included]
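The image-based answer is not reproduced here. For a Git-enabled workspace, a trigger runs the published (live) version of a pipeline, so the sequence generally involves merging branch1 into main, publishing from main, and then creating and publishing a schedule trigger. As a hedged illustration, the trigger definition for a daily 6 AM run would resemble the following Python dict, mirroring the JSON that Synapse Studio generates; the trigger name, start time, and time zone are assumptions.

# Schedule-trigger definition for a daily 6:00 AM run of pipeline1.
# The trigger name, start time, and time zone are illustrative.
trigger1 = {
    "name": "Trigger1",
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Day",                  # run once per day
                "interval": 1,
                "startTime": "2024-01-01T06:00:00Z",
                "timeZone": "UTC",
                "schedule": {"hours": [6], "minutes": [0]},  # 6:00 AM
            }
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "pipeline1",
                    "type": "PipelineReference",
                }
            }
        ],
    },
}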

HOTSPOT

You have an Azure subscription that contains an Azure Synapse Analytics dedicated SQL pool named Pool1 and an Azure Data Lake Storage account named storage1. Storage1 requires secure transfers.

You need to create an external data source in Pool1 that will be used to read .orc files in storage1.

How should you complete the code? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Question 212
Correct answer: [hotspot answer shown as an image; not included]

Explanation:

Reference: https://docs.microsoft.com/en-us/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true&tabs=dedicated
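The hotspot selections are not reproduced here. For reading .orc files from a dedicated SQL pool, the documented pattern is an external data source with an abfss:// location (which uses TLS and therefore satisfies the secure-transfer requirement) and TYPE = HADOOP. The following is a minimal sketch executed from Python with pyodbc; the container name "files", the credential "cred1", and the connection details are illustrative assumptions.

import pyodbc

# T-SQL for the external data source. The abfss:// scheme gives encrypted
# transfer; TYPE = HADOOP enables PolyBase access to ORC files in a
# dedicated SQL pool. "files" and "cred1" are assumed names.
SQL = """
CREATE EXTERNAL DATA SOURCE src1
WITH (
    LOCATION = 'abfss://files@storage1.dfs.core.windows.net',
    TYPE = HADOOP,
    CREDENTIAL = cred1
);
"""

# Placeholder connection details for Pool1.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<workspace>.sql.azuresynapse.net;"
    "DATABASE=Pool1;UID=<user>;PWD=<password>",
    autocommit=True,
)
conn.execute(SQL)
conn.close()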

You have an Azure subscription that contains an Azure Synapse Analytics dedicated SQL pool named SQLPool1. SQLPool1 is currently paused.

You need to restore the current state of SQLPool1 to a new SQL pool. What should you do first?

A. Create a workspace.
B. Create a user-defined restore point.
C. Resume SQLPool1.
D. Create a new SQL pool.
Suggested answer: B

Explanation:

Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-active-paused-dw

You are creating an Azure Data Factory data flow that will ingest data from a CSV file, cast columns to specified data types, and insert the data into a table in an Azure Synapse Analytics dedicated SQL pool. The CSV file contains columns named username, comment, and date.

The data flow already contains the following:

• A source transformation

• A Derived Column transformation to set the appropriate data types

• A sink transformation to land the data in the pool

You need to ensure that the data flow meets the following requirements:

• All valid rows must be written to the destination table.

• Truncation errors in the comment column must be avoided proactively.

• Any rows containing comment values that will cause truncation errors upon insert must be written to a file in blob storage.

Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

A. Add a select transformation that selects only the rows which will cause truncation errors.
B. Add a sink transformation that writes the rows to a file in blob storage.
C. Add a filter transformation that filters out rows which will cause truncation errors.
D. Add a Conditional Split transformation that separates the rows which will cause truncation errors.
Suggested answer: B, D

DRAG DROP

You have an Azure subscription that contains an Azure Databricks workspace. The workspace contains a notebook named Notebook1. In Notebook1, you create an Apache Spark DataFrame named df_sales that contains the following columns:

• Customer

• Salesperson

• Region

• Amount

You need to identify the three top performing salespersons by amount for a region named HQ. How should you complete the query? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

Question 215
Correct answer: [drag-drop answer shown as an image; not included]
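The drag-drop tokens are not reproduced here, but a completed query consistent with the requirements looks like the following PySpark sketch; the sample rows exist only to make the snippet self-contained and runnable.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Stand-in for the df_sales DataFrame described in the question.
df_sales = spark.createDataFrame(
    [("C1", "Alice", "HQ", 100.0),
     ("C2", "Bob", "HQ", 250.0),
     ("C3", "Carol", "West", 900.0),
     ("C4", "Alice", "HQ", 300.0)],
    ["Customer", "Salesperson", "Region", "Amount"],
)

top3 = (
    df_sales
    .where(F.col("Region") == "HQ")             # keep only the HQ region
    .groupBy("Salesperson")                     # aggregate per salesperson
    .agg(F.sum("Amount").alias("TotalAmount"))  # total amount sold
    .orderBy(F.col("TotalAmount").desc())       # highest totals first
    .limit(3)                                   # top three performers
)
top3.show()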

You have an Azure subscription that contains an Azure Data Lake Storage account named myaccount1. The myaccount1 account contains two containers named container1 and container2. The subscription is linked to an Azure Active Directory (Azure AD) tenant that contains a security group named Group1.

You need to grant Group1 read access to container1. The solution must use the principle of least privilege. Which role should you assign to Group1?

A. Storage Blob Data Reader for container1
B. Storage Table Data Reader for container1
C. Storage Blob Data Reader for myaccount1
D. Storage Table Data Reader for myaccount1
Suggested answer: A

You have an Azure Synapse Analytics dedicated SQL pool named pool1. You need to perform a monthly audit of SQL statements that affect sensitive data. The solution must minimize administrative effort. What should you include in the solution?

A. Microsoft Defender for SQL
B. dynamic data masking
C. sensitivity labels
D. workload management
Suggested answer: B

You have an Azure Data Factory pipeline that contains a data flow. The data flow contains the following expression.

[expression and answer choices shown as images; not included]

Suggested answer: A

You have an Azure Databricks workspace and an Azure Data Lake Storage Gen2 account named storage1. New files are uploaded daily to storage1.

You need to recommend a solution that incrementally processes the new files as they are uploaded, using storage1 as a structured streaming source. The solution must meet the following requirements:

• Minimize implementation and maintenance effort.

• Minimize the cost of processing millions of files.

• Support schema inference and schema drift.

What should you include in the recommendation?

A. Auto Loader
B. Apache Spark FileStreamSource
C. COPY INTO
D. Azure Data Factory
Suggested answer: A

Explanation:

Auto Loader incrementally processes new files as they arrive in cloud storage, exposed as a Structured Streaming source (the cloudFiles format). It scales to millions of files at low cost and supports schema inference and schema evolution, making it the lowest-effort option listed.

Reference: https://learn.microsoft.com/en-us/azure/databricks/ingestion/auto-loader/
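A minimal Auto Loader sketch for this scenario, intended to run in a Databricks notebook where spark is predefined; the container, directory, file format, checkpoint paths, and target table name are illustrative assumptions.

# Auto Loader ("cloudFiles") reads storage1 as a Structured Streaming
# source, discovering only files that have not yet been processed.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")                 # assumed file format
    .option("cloudFiles.schemaLocation", "/tmp/schema")  # enables schema inference and evolution
    .load("abfss://data@storage1.dfs.core.windows.net/incoming/")
)

(
    stream.writeStream
    .option("checkpointLocation", "/tmp/checkpoints")    # tracks processed files
    .trigger(availableNow=True)                          # process new files, then stop
    .toTable("bronze_incoming")                          # assumed target table
)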

You have an Azure Synapse Analytics dedicated SQL pool named Pool1. Pool1 contains a table named table1. You load 5 TB of data into table1.

You need to ensure that columnstore compression is maximized for table1. Which statement should you execute?

A. ALTER INDEX ALL on table1 REORGANIZE
B. ALTER INDEX ALL on table1 REBUILD
C. DBCC DBREINDEX (table1)
D. DBCC INDEXDEFRAG (pool1, table1)
Suggested answer: B

Explanation:

ALTER INDEX ALL ON table1 REBUILD recompresses all rows into the columnstore, which maximizes compression quality. REORGANIZE only compresses delta rowgroups into the columnstore, and the DBCC commands are not supported in Azure Synapse Analytics dedicated SQL pools.

Reference: https://learn.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-index
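A minimal sketch of running the statement from Python with pyodbc; the connection details are placeholders.

import pyodbc

# Placeholder connection string for the dedicated SQL pool (Pool1).
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<workspace>.sql.azuresynapse.net;"
    "DATABASE=Pool1;UID=<user>;PWD=<password>",
    autocommit=True,  # run the DDL outside an explicit transaction
)

# Rebuild recompresses every rowgroup, maximizing columnstore compression.
conn.execute("ALTER INDEX ALL ON table1 REBUILD;")
conn.close()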

