
Microsoft DP-203 Practice Test - Questions Answers, Page 7


Question 61


You have an enterprise data warehouse in Azure Synapse Analytics. Using PolyBase, you create an external table named [Ext].[Items] to query Parquet files stored in Azure Data Lake Storage Gen2 without importing the data to the data warehouse. The external table has three columns.

You discover that the Parquet files have a fourth column named ItemID. Which command should you run to add the ItemID column to the external table?

[Image: the four candidate commands, options A through D]

Option A
Option B
Option C
Option D
Suggested answer: C
Explanation:

Because ALTER TABLE is not supported on external tables, adding the ItemID column requires dropping the external table and re-creating it with the new column.

Incorrect Answers:

A, D: Only these Data Definition Language (DDL) statements are allowed on external tables:

CREATE TABLE and DROP TABLE

CREATE STATISTICS and DROP STATISTICS

CREATE VIEW and DROP VIEW

Reference: https://docs.microsoft.com/en-us/sql/t-sql/statements/create-external-table-transact-sql
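Since only CREATE and DROP are supported on external tables, a minimal T-SQL sketch of the drop-and-recreate approach follows. The original column names, data types, external data source, and file format names are not given in the question and are assumptions for illustration:

-- External tables cannot be altered; drop and re-create the definition.
DROP EXTERNAL TABLE [Ext].[Items];

CREATE EXTERNAL TABLE [Ext].[Items]
(
    ItemName  NVARCHAR(100),    -- assumed original column
    ItemPrice DECIMAL(10, 2),   -- assumed original column
    ItemDate  DATE,             -- assumed original column
    ItemID    INT               -- the newly discovered fourth column
)
WITH
(
    LOCATION = '/items/',               -- assumed folder path
    DATA_SOURCE = AzureDataLakeStore,   -- assumed external data source name
    FILE_FORMAT = ParquetFileFormat     -- assumed Parquet file format name
);

No data is moved by these statements; the external table is only a metadata definition over the Parquet files.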


Question 62


You have an Azure Data Lake Storage Gen2 container that contains 100 TB of data.

You need to ensure that the data in the container is available for read workloads in a secondary region if an outage occurs in the primary region. The solution must minimize costs.

Which type of data redundancy should you use?

A. geo-redundant storage (GRS)
B. read-access geo-redundant storage (RA-GRS)
C. zone-redundant storage (ZRS)
D. locally-redundant storage (LRS)
Suggested answer: B
Explanation:

Geo-redundant storage (with GRS or GZRS) replicates your data to another physical location in the secondary region to protect against regional outages. However, that data is available to be read only if the customer or Microsoft initiates a failover from the primary to the secondary region. When you enable read access to the secondary region, your data is available to be read at all times, including when the primary region becomes unavailable.

Incorrect Answers:

A: Although geo-redundant storage (GRS) is cheaper than read-access geo-redundant storage (RA-GRS), the data that GRS replicates to the secondary region is not readable unless a failover is initiated.

C, D: Locally redundant storage (LRS) and zone-redundant storage (ZRS) provide redundancy only within a single region.

Reference: https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy


Question 63


You plan to implement an Azure Data Lake Storage Gen2 storage account.

You need to ensure that the data lake will remain available if a data center fails in the primary Azure region. The solution must minimize costs.

Which type of replication should you use for the storage account?

A. geo-redundant storage (GRS)
B. geo-zone-redundant storage (GZRS)
C. locally-redundant storage (LRS)
D. zone-redundant storage (ZRS)
Suggested answer: D
Explanation:

Zone-redundant storage (ZRS) copies your data synchronously across three Azure availability zones in the primary region, so the data remains available if a single data center fails. ZRS costs less than the geo-redundant options (GRS and GZRS), which also replicate data to a secondary region.
Reference: https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy


Question 64


You are designing a fact table named FactPurchase in an Azure Synapse Analytics dedicated SQL pool. The table contains purchases from suppliers for a retail store. FactPurchase will contain the following columns.

[Image: FactPurchase column definitions]

FactPurchase will have 1 million rows of data added daily and will contain three years of data.

Transact-SQL queries similar to the following query will be executed daily.

SELECT SupplierKey, StockItemKey, IsOrderFinalized, COUNT(*)
FROM FactPurchase
WHERE DateKey >= 20210101
  AND DateKey <= 20210131
GROUP BY SupplierKey, StockItemKey, IsOrderFinalized

Which table distribution will minimize query times?

A. replicated
B. hash-distributed on PurchaseKey
C. round-robin
D. hash-distributed on IsOrderFinalized
Suggested answer: B
Explanation:

Hash-distributed tables improve query performance on large fact tables. To balance the parallel processing, select a distribution column that:

Has many unique values. The column can have duplicate values; all rows with the same value are assigned to the same distribution. Since there are 60 distributions, some distributions can hold more than one unique value while others may end up with zero values.

Does not have NULLs, or has only a few NULLs.

Is not a date column.

Incorrect Answers:

C: Round-robin tables are useful for improving loading speed.

Reference:

https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute
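To illustrate the suggested answer, here is a minimal sketch of FactPurchase hash-distributed on PurchaseKey; the data types are assumptions, since the full column list appears only in the image:

CREATE TABLE dbo.FactPurchase
(
    PurchaseKey      BIGINT NOT NULL,   -- many unique values, no NULLs, not a date
    DateKey          INT    NOT NULL,
    SupplierKey      INT    NOT NULL,
    StockItemKey     INT    NOT NULL,
    IsOrderFinalized BIT    NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(PurchaseKey),
    CLUSTERED COLUMNSTORE INDEX
);

By contrast, hash-distributing on IsOrderFinalized (option D) would place every row into at most two of the 60 distributions, creating severe skew.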


Question 65


Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Storage account that contains 100 GB of files. The files contain rows of text and numerical values. 75% of the rows contain description data that has an average length of 1.1 MB.

You plan to copy the data from the storage account to an enterprise data warehouse in Azure Synapse Analytics.

You need to prepare the files to ensure that the data copies quickly.

Solution: You convert the files to compressed delimited text files.

Does this meet the goal?

A. Yes
B. No
Suggested answer: A
Explanation:

All file formats have different performance characteristics. For the fastest load, use compressed delimited text files.

Reference:

https://docs.microsoft.com/en-us/azure/sql-data-warehouse/guidance-for-loading-data
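As an illustration, a minimal sketch of loading gzip-compressed delimited text into the dedicated SQL pool with the COPY statement; the table name, storage account, container, and file layout are assumptions:

-- Load compressed delimited text; the COPY statement decompresses Gzip on the fly.
COPY INTO dbo.StagingDescriptions
FROM 'https://myaccount.blob.core.windows.net/staging/data/*.csv.gz'
WITH
(
    FILE_TYPE = 'CSV',
    COMPRESSION = 'Gzip',
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '0x0A'
);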


Question 66


Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Storage account that contains 100 GB of files. The files contain rows of text and numerical values. 75% of the rows contain description data that has an average length of 1.1 MB.

You plan to copy the data from the storage account to an enterprise data warehouse in Azure Synapse Analytics.

You need to prepare the files to ensure that the data copies quickly.

Solution: You copy the files to a table that has a columnstore index.

Does this meet the goal?

A. Yes
B. No
Suggested answer: B
Explanation:

No. Because 75% of the rows contain description data that averages 1.1 MB, the rows are wider than 1 MB, and such rows are better loaded to a heap or clustered index than to a clustered columnstore index. Instead, convert the files to compressed delimited text files.

Reference:

https://docs.microsoft.com/en-us/azure/sql-data-warehouse/guidance-for-loading-data


Question 67


Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Storage account that contains 100 GB of files. The files contain rows of text and numerical values. 75% of the rows contain description data that has an average length of 1.1 MB.

You plan to copy the data from the storage account to an enterprise data warehouse in Azure Synapse Analytics.

You need to prepare the files to ensure that the data copies quickly.

Solution: You modify the files to ensure that each row is more than 1 MB.

Does this meet the goal?

A. Yes
B. No
Suggested answer: B
Explanation:

No. PolyBase cannot load rows that have more than 1,000,000 bytes of data, so ensuring that each row is more than 1 MB would prevent the data from loading quickly. Instead, convert the files to compressed delimited text files.

Question 68


You build a data warehouse in an Azure Synapse Analytics dedicated SQL pool. Analysts write a complex SELECT query that contains multiple JOIN and CASE statements to transform data for use in inventory reports. The inventory reports will use the data and additional WHERE parameters depending on the report. The reports will be produced once daily.

You need to implement a solution to make the dataset available for the reports. The solution must minimize query times. What should you implement?


Question 69


You have an Azure Synapse Analytics workspace named WS1 that contains an Apache Spark pool named Pool1. You plan to create a database named DB1 in Pool1.

You need to ensure that when tables are created in DB1, the tables are available automatically as external tables to the built-in serverless SQL pool. Which format should you use for the tables in DB1?


Question 70


You are planning a solution to aggregate streaming data that originates in Apache Kafka and is output to Azure Data Lake Storage Gen2. The developers who will implement the stream processing solution use Java. Which service should you recommend using to process the streaming data?
