Question 78 - DP-203 discussion
You are implementing a batch dataset in the Parquet format. Data files will be produced by using Azure Data Factory and stored in Azure Data Lake Storage Gen2. The files will be consumed by an Azure Synapse Analytics serverless SQL pool. You need to minimize storage costs for the solution.
What should you do?
A. Use Snappy compression for files.
B. Use OPENROWSET to query the Parquet files.
C. Create an external table that contains a subset of columns from the Parquet files.
D. Store all data as string in the Parquet files.
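For reference, a serverless SQL pool reads Parquet files in Data Lake Storage Gen2 through OPENROWSET (as mentioned in option B) along the following lines. This is only a minimal sketch; the storage account, container, and folder names are placeholders, not values from the question.

    -- Sketch: query Parquet files in ADLS Gen2 from a Synapse serverless SQL pool.
    -- The URL below uses a hypothetical storage account and path.
    SELECT TOP 10 *
    FROM OPENROWSET(
        BULK 'https://examplestorage.dfs.core.windows.net/batchdata/parquet/*.parquet',
        FORMAT = 'PARQUET'
    ) AS [result];

Note that how the files are queried affects compute, not how much space they occupy; only the way the files are written (for example, their compression codec and data types) changes storage size.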