Microsoft DP-203 Practice Test - Questions Answers, Page 29
Note: The question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Data Lake Storage account that contains a staging zone.
You need to design a daily process to ingest incremental data from the staging zone, transform the data by executing an R script, and then insert the transformed data into a data warehouse in Azure Synapse Analytics.
Solution: You use an Azure Data Factory schedule trigger to execute a pipeline that executes a mapping data flow, and then inserts the data into the data warehouse.
Does this meet the goal?
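For context, an Azure Data Factory schedule trigger fires a pipeline on a recurring wall-clock cadence, such as once per day. The sketch below, using the azure-mgmt-datafactory Python SDK, shows how such a daily trigger might be wired to a pipeline; the subscription, resource group, factory, and pipeline names are placeholders, not values from the question.

```python
# Sketch: a daily schedule trigger created with the azure-mgmt-datafactory SDK.
# All resource names below are placeholders.
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    PipelineReference,
    ScheduleTrigger,
    ScheduleTriggerRecurrence,
    TriggerPipelineReference,
    TriggerResource,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Fire once per day; the referenced pipeline would run the transformation
# (for example, a mapping data flow) and load the warehouse.
recurrence = ScheduleTriggerRecurrence(
    frequency="Day",
    interval=1,
    start_time=datetime(2024, 1, 1, tzinfo=timezone.utc),
    time_zone="UTC",
)
trigger = ScheduleTrigger(
    recurrence=recurrence,
    pipelines=[
        TriggerPipelineReference(
            pipeline_reference=PipelineReference(reference_name="pipeline1")
        )
    ],
)
client.triggers.create_or_update(
    "rg1", "adf1", "DailyTrigger", TriggerResource(properties=trigger)
)
# A newly created trigger stays stopped until it is started
# (triggers.begin_start in recent SDK versions).
```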
You have an enterprise data warehouse in Azure Synapse Analytics.
You need to monitor the data warehouse to identify whether you must scale up to a higher service level to accommodate the current workloads.
Which is the best metric to monitor? More than one answer choice may achieve the goal. Select the BEST answer.
You have two Azure Blob Storage accounts named account1 and account2.
You plan to create an Azure Data Factory pipeline that will use scheduled intervals to replicate newly created or modified blobs from account1 to account2.
You need to recommend a solution to implement the pipeline. The solution must meet the following requirements:
* Ensure that the pipeline only copies blobs that were created or modified since the most recent replication event.
* Minimize the effort to create the pipeline.
What should you recommend?
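For context, a common low-effort pattern for this scenario is a tumbling window trigger: each run receives a non-overlapping time window, and the pipeline copies only blobs whose last-modified time falls inside that window. A minimal sketch using the azure-mgmt-datafactory Python SDK follows; the pipeline and resource names are placeholders, and the pipeline is assumed to accept windowStart/windowEnd parameters.

```python
# Sketch: an hourly tumbling window trigger that passes each window's
# boundaries to the pipeline so it can filter blobs by last-modified time.
# All names are placeholders.
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    PipelineReference,
    TriggerPipelineReference,
    TriggerResource,
    TumblingWindowTrigger,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

trigger = TumblingWindowTrigger(
    pipeline=TriggerPipelineReference(
        pipeline_reference=PipelineReference(reference_name="CopyPipeline"),
        # Trigger system variables resolve to each window's boundaries at run time.
        parameters={
            "windowStart": "@trigger().outputs.windowStartTime",
            "windowEnd": "@trigger().outputs.windowEndTime",
        },
    ),
    frequency="Hour",
    interval=1,
    start_time=datetime(2024, 1, 1, tzinfo=timezone.utc),
    max_concurrency=1,  # process windows one at a time, oldest first
)
client.triggers.create_or_update(
    "rg1", "adf1", "HourlyWindowTrigger", TriggerResource(properties=trigger)
)
```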
HOTSPOT
You have an Azure data factory.
You execute a pipeline that contains an activity named Activity1. Activity1 produces the following output.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
HOTSPOT
You have an Azure data factory that has the Git repository settings shown in the following exhibit.
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct answer is worth one point.
HOTSPOT
You have an Azure subscription that contains the resources shown in the following table.
You need to ingest the Parquet files from storage1 to SQL1 by using pipeline1. The solution must meet the following requirements:
* Minimize complexity.
* Ensure that additional columns in the files are processed as strings.
* Ensure that files containing additional columns are processed successfully.
How should you configure pipeline1? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
You have an Azure data factory connected to a Git repository that contains the following branches:
* main: Collaboration branch
* abc: Feature branch
* xyz: Feature branch
You save changes to a pipeline in the xyz branch.
You need to publish the changes to the live service.
What should you do first?
You have an Azure Synapse Analytics dedicated SQL pool named Pool1. Pool1 contains a table named table1.
You load 5 TB of data into table1.
You need to ensure that column store compression is maximized for table1.
Which statement should you execute?
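For context, a dedicated SQL pool recomputes columnstore compression when the clustered columnstore index is rebuilt, so rebuilding the index after a large load is the usual way to maximize compression. A minimal sketch that issues the rebuild from Python via pyodbc follows; the connection details are placeholders.

```python
# Sketch: rebuild the clustered columnstore index on table1 so that all
# rowgroups are recompressed after the 5-TB load. Connection details are
# placeholders for a dedicated SQL pool endpoint.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<workspace>.sql.azuresynapse.net;"
    "Database=Pool1;Uid=<user>;Pwd=<password>;Encrypt=yes;",
    autocommit=True,  # run the DDL outside an explicit transaction
)
conn.cursor().execute("ALTER INDEX ALL ON table1 REBUILD;")
conn.close()
```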
DRAG DROP
You have a project in Azure DevOps that contains a repository named Repo1. Repo1 contains a branch named main.
You create a new Azure Synapse workspace named Workspace1.
You need to create data processing pipelines in Workspace1. The solution must meet the following requirements:
* Pipeline artifacts must be stored in Repo1.
* Source control must be provided for pipeline artifacts.
* All development must be performed in a feature branch.
Which four actions should you perform in sequence in Synapse Studio? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
You have an Azure Data Factory pipeline named pipeline1 that includes a Copy activity named Copy1. Copy1 has the following configurations:
* The source of Copy1 is a table in an on-premises Microsoft SQL Server instance that is accessed by using a linked service connected via a self-hosted integration runtime.
* The sink of Copy1 uses a table in an Azure SQL database that is accessed by using a linked service connected via an Azure integration runtime.
You need to maximize the amount of compute resources available to Copy1. The solution must minimize administrative effort.
What should you do?
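For context, which compute you can tune depends on where the copy executes: the dataIntegrationUnits setting applies only when a Copy activity runs on an Azure integration runtime, while a copy whose source uses a self-hosted integration runtime runs on that runtime's node(s), so its throughput depends on the node hardware and scale-out. The sketch below shows the per-activity performance settings as azure-mgmt-datafactory model objects; the dataset names are placeholders.

```python
# Sketch: per-activity performance settings on a Copy activity, expressed
# with azure-mgmt-datafactory models. Dataset names are placeholders.
from azure.mgmt.datafactory.models import (
    AzureSqlSink,
    CopyActivity,
    DatasetReference,
    SqlServerSource,
)

copy1 = CopyActivity(
    name="Copy1",
    inputs=[DatasetReference(reference_name="OnPremSqlTable")],
    outputs=[DatasetReference(reference_name="AzureSqlTable")],
    source=SqlServerSource(),
    sink=AzureSqlSink(),
    parallel_copies=8,          # concurrent reader/writer threads
    data_integration_units=32,  # honored only when run on an Azure IR
)
```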