Microsoft DP-203 Practice Test - Questions Answers, Page 29

Note: The question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Data Lake Storage account that contains a staging zone.

You need to design a daily process to ingest incremental data from the staging zone, transform the data by executing an R script, and then insert the transformed data into a data warehouse in Azure Synapse Analytics.

Solution: You use an Azure Data Factory schedule trigger to execute a pipeline that executes a mapping data flow, and then inserts the data into the data warehouse.

Does this meet the goal?

A. Yes

B. No

Suggested answer: B

Explanation:

A mapping data flow cannot execute an R script, so this solution does not meet the goal. Running the R transformation would require an activity that supports custom code, such as an Azure Databricks notebook activity.

You have an enterprise data warehouse in Azure Synapse Analytics.

You need to monitor the data warehouse to identify whether you must scale up to a higher service level to accommodate the current workloads.

Which is the best metric to monitor? More than one answer choice may achieve the goal. Select the BEST answer.

A. Data IO percentage

B. CPU percentage

C. DWU used

D. DWU percentage

Suggested answer: C
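
"DWU used" reflects, at a high level, how much of the pool's provisioned compute is being consumed. As a minimal sketch (assuming the dedicated SQL pool is named Pool1 and the queries run against the master database of the logical server; DW400c is an example target level), the current service level can be inspected and raised with T-SQL:

-- Check the current service objective (DWU level) of the pool.
-- Pool1 is an assumed name; run this against the master database.
SELECT db.name AS [Database],
       ds.service_objective AS [Service Objective]
FROM sys.database_service_objectives AS ds
JOIN sys.databases AS db
    ON ds.database_id = db.database_id
WHERE db.name = 'Pool1';

-- If monitoring shows sustained high utilization, scale up to a
-- higher service level (DW400c is an example target).
ALTER DATABASE Pool1 MODIFY (SERVICE_OBJECTIVE = 'DW400c');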

You have two Azure Blob Storage accounts named account1 and account2.

You plan to create an Azure Data Factory pipeline that will use scheduled intervals to replicate newly created or modified blobs from account1 to account2.

You need to recommend a solution to implement the pipeline. The solution must meet the following requirements:

* Ensure that the pipeline only copies blobs that were created or modified since the most recent replication event.

* Minimize the effort to create the pipeline.

What should you recommend?

A. Create a pipeline that contains a flowlet.

B. Create a pipeline that contains a Data Flow activity.

C. Run the Copy Data tool and select Metadata-driven copy task.

D. Run the Copy Data tool and select Built-in copy task.

Suggested answer: D

Explanation:

The built-in copy task in the Copy Data tool supports incremental copy of new or changed files based on LastModifiedDate and generates the pipeline automatically, which minimizes the effort to create it. A flowlet or a Data Flow activity would require building the incremental logic manually.

HOTSPOT

You have an Azure data factory.

You execute a pipeline that contains an activity named Activity1. Activity1 produces the following output.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.


[Question 284: exhibit and answer area omitted.]

HOTSPOT

You have an Azure data factory that has the Git repository settings shown in the following exhibit.

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.

NOTE: Each correct answer is worth one point.


[Question 285: exhibit and answer area omitted.]

HOTSPOT

You have an Azure subscription that contains the resources shown in the following table.

You need to ingest the Parquet files from storage1 to SQL1 by using pipeline1. The solution must meet the following requirements:

* Minimize complexity.

* Ensure that additional columns in the files are processed as strings.

* Ensure that files containing additional columns are processed successfully.

How should you configure pipeline1? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.


[Question 286: exhibit, answer area, and explanation omitted.]


You have an Azure data factory connected to a Git repository that contains the following branches:

* main: Collaboration branch

* abc: Feature branch

* xyz: Feature branch

You save changes to a pipeline in the xyz branch.

You need to publish the changes to the live service.

What should you do first?

A. Push the code to a remote origin.

B. Publish the data factory.

C. Create a pull request to merge the changes into the abc branch.

D. Create a pull request to merge the changes into the main branch.

Suggested answer: D

You have an Azure Synapse Analytics dedicated SQL pool named Pool1. Pool1 contains a table named table1.

You load 5 TB of data into table1.

You need to ensure that column store compression is maximized for table1.

Which statement should you execute?

A. ALTER INDEX ALL ON table1 REBUILD

B. DBCC DBREINDEX (table1)

C. DBCC INDEXDEFRAG (pool1, table1)

D. ALTER INDEX ALL ON table1 REORGANIZE

Suggested answer: A

Explanation:

Rebuilding the index forces all data into compressed columnstore rowgroups and maximizes compression quality. DBCC DBREINDEX and DBCC INDEXDEFRAG are not supported in Azure Synapse Analytics dedicated SQL pools, and REORGANIZE does not recompress rowgroups that are already compressed.
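
A minimal sketch of the rebuild, followed by a check of rowgroup state (assuming the standard dedicated SQL pool DMV for columnstore rowgroup statistics and the dbo schema for table1):

-- Rebuild every index on table1; this recompresses all columnstore
-- rowgroups and typically maximizes compression after a large load.
ALTER INDEX ALL ON dbo.table1 REBUILD;

-- Verify the result: COMPRESSED rowgroups approaching 1,048,576 rows
-- indicate good compression quality.
SELECT state_desc,
       COUNT(*)        AS row_group_count,
       SUM(total_rows) AS total_rows
FROM sys.dm_pdw_nodes_db_column_store_row_group_physical_stats
GROUP BY state_desc;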

DRAG DROP

You have a project in Azure DevOps that contains a repository named Repo1. Repo1 contains a branch named main.

You create a new Azure Synapse workspace named Workspace1.

You need to create data processing pipelines in Workspace1. The solution must meet the following requirements:

* Pipeline artifacts must be stored in Repo1.

* Source control must be provided for pipeline artifacts.

* All development must be performed in a feature branch.

Which four actions should you perform in sequence in Synapse Studio? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

[Question 289: answer area omitted.]

Explanation:

Configure a code repository and select Repo1.

Create a new branch.

Create pipeline artifacts and save them in the new branch.

Create a pull request to merge the contents of the new branch into the main branch.


You have an Azure Data Factory pipeline named pipeline1 that includes a Copy activity named Copy1. Copy1 has the following configurations:

* The source of Copy1 is a table in an on-premises Microsoft SQL Server instance that is accessed by using a linked service connected via a self-hosted integration runtime.

* The sink of Copy1 uses a table in an Azure SQL database that is accessed by using a linked service connected via an Azure integration runtime.

You need to maximize the amount of compute resources available to Copy1. The solution must minimize administrative effort.

What should you do?

A. Scale up the data flow runtime of the Azure integration runtime.

B. Scale up the data flow runtime of the Azure integration runtime and scale out the self-hosted integration runtime.

C. Scale out the self-hosted integration runtime.

Suggested answer: C

Explanation:

Because the source is on-premises, Copy1 runs on the self-hosted integration runtime, so adding nodes to it (scaling out) increases the compute available to the activity. The data flow runtime of an Azure integration runtime applies only to Data Flow activities, not to Copy activities.