Question 53 - DEA-C01 discussion
A data engineer must ingest a source of structured data that is in .csv format into an Amazon S3 data lake. The .csv files contain 15 columns. Data analysts need to run Amazon Athena queries on one or two columns of the dataset. The data analysts rarely query the entire file.
Which solution will meet these requirements MOST cost-effectively?
A. Use an AWS Glue PySpark job to ingest the source data into the data lake in .csv format.
B. Create an AWS Glue extract, transform, and load (ETL) job to read from the .csv structured data source. Configure the job to ingest the data into the data lake in JSON format.
C. Use an AWS Glue PySpark job to ingest the source data into the data lake in Apache Avro format.
D. Create an AWS Glue extract, transform, and load (ETL) job to read from the .csv structured data source. Configure the job to write the data into the data lake in Apache Parquet format.
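Discussion note: Apache Parquet is a columnar format, so Athena reads (and bills for) only the columns a query references rather than the whole row, which is why the Parquet option (D) is generally the most cost-effective choice when analysts query one or two of the 15 columns. A minimal sketch of such a Glue PySpark ETL job is shown below; the bucket paths are hypothetical placeholders, not values from the question.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job bootstrap.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the structured .csv source (hypothetical input path).
source = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-raw-bucket/csv-input/"]},
    format="csv",
    format_options={"withHeader": True},
)

# Write to the S3 data lake in Parquet (hypothetical output path).
# Athena queries against this data scan only the requested columns.
glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://example-data-lake/parquet/"},
    format="parquet",
)

job.commit()
```

By contrast, .csv, JSON, and Avro are row-oriented on read for Athena's purposes, so a query touching two columns still scans every byte of every record, driving up per-query cost.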