Question 85 - DEA-C01 discussion
A data engineer needs to build an extract, transform, and load (ETL) job. The ETL job will process daily incoming .csv files that users upload to an Amazon S3 bucket. The size of each S3 object is less than 100 MB.
Which solution will meet these requirements MOST cost-effectively?
A. Write a custom Python application. Host the application on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
B. Write a PySpark ETL script. Host the script on an Amazon EMR cluster.
C. Write an AWS Glue PySpark job. Use Apache Spark to transform the data.
D. Write an AWS Glue Python shell job. Use pandas to transform the data.
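For context on what option D entails: because each object is under 100 MB, a whole file fits comfortably in memory, so a lightweight Glue Python shell job can process it with plain pandas instead of spinning up a Spark cluster. Below is a minimal sketch; the bucket, keys, and transform are hypothetical placeholders, and a real job would typically pull them from Glue job parameters.

```python
import boto3
import pandas as pd

# Hypothetical names for illustration; a real job would read these from
# Glue job arguments (e.g. via awsglue.utils.getResolvedOptions).
SOURCE_BUCKET = "example-incoming-bucket"
SOURCE_KEY = "uploads/daily.csv"
DEST_KEY = "processed/daily.csv"

s3 = boto3.client("s3")

# Each object is under 100 MB, so it fits in memory as a single DataFrame.
response = s3.get_object(Bucket=SOURCE_BUCKET, Key=SOURCE_KEY)
df = pd.read_csv(response["Body"])

# Placeholder transform: drop incomplete rows. The real logic depends
# on the dataset and is not specified in the question.
df = df.dropna()

# Write the transformed file back to S3.
s3.put_object(
    Bucket=SOURCE_BUCKET,
    Key=DEST_KEY,
    Body=df.to_csv(index=False).encode("utf-8"),
)
```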