Question 78 - DP-203 discussion


You are implementing a batch dataset in the Parquet format. Data files will be produced by Azure Data Factory and stored in Azure Data Lake Storage Gen2. The files will be consumed by an Azure Synapse Analytics serverless SQL pool. You need to minimize storage costs for the solution.

What should you do?

A. Use Snappy compression for files.
B. Use OPENROWSET to query the Parquet files.
C. Create an external table that contains a subset of columns from the Parquet files.
D. Store all data as string in the Parquet files.
Suggested answer: C

Explanation:

An external table points to data located in Hadoop, Azure Storage blob, or Azure Data Lake Storage. External tables are used to read data from files or write data to files in Azure Storage. With Synapse SQL, you can use external tables to read external data using a dedicated SQL pool or a serverless SQL pool. Because an external table only references the files in place rather than copying them into the pool, defining it over a subset of columns exposes just the columns consumers need without duplicating the dataset in storage.
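As an illustrative sketch only (the data source, file format, table, column names, and paths below are assumptions, not taken from the question), a serverless SQL pool external table over a subset of Parquet columns might look like:

```sql
-- All object names and paths here are hypothetical; adjust to your workspace.

-- External data source pointing at the Data Lake Storage Gen2 container.
CREATE EXTERNAL DATA SOURCE MyDataSource
WITH (LOCATION = 'https://mystorageaccount.dfs.core.windows.net/mycontainer');

-- File format declaration for the Parquet files.
CREATE EXTERNAL FILE FORMAT ParquetFormat
WITH (FORMAT_TYPE = PARQUET);

-- External table exposing only the columns consumers need;
-- the data stays in the lake and nothing is copied into the pool.
CREATE EXTERNAL TABLE dbo.SalesSubset
(
    SaleId   BIGINT,
    SaleDate DATE,
    Amount   DECIMAL(18, 2)
)
WITH (
    LOCATION = '/sales/*.parquet',
    DATA_SOURCE = MyDataSource,
    FILE_FORMAT = ParquetFormat
);
```

Queries against `dbo.SalesSubset` then read only the declared columns from the Parquet files, which is efficient because Parquet's columnar layout allows the engine to skip columns that are not selected.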

Reference:

https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/develop-tables-external-tables

asked 02/10/2024
Johannes Bickel