Question 163 - Professional Data Engineer discussion
You've migrated a Hadoop job from an on-prem cluster to Dataproc and GCS. Your Spark job is a complicated analytical workload that consists of many shuffling operations, and the input data are Parquet files (200-400 MB each on average). You observe some performance degradation after the migration to Dataproc, and you'd like to optimize it. Keep in mind that your organization is very cost-sensitive, so you'd like to continue running Dataproc on preemptible workers (with only 2 non-preemptible workers) for this workload.
What should you do?
A.
Increase the size of your Parquet files to ensure they are at least 1 GB each.
B.
Switch to the TFRecord format (approx. 200 MB per file) instead of Parquet files.
C.
Switch from HDDs to SSDs, copy the initial data from GCS to HDFS, run the Spark job, and copy the results back to GCS.
D.
Switch from HDDs to SSDs, and override the preemptible VMs' configuration to increase the boot disk size.
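
For context, below is a minimal PySpark sketch of the kind of shuffle-heavy workload the question describes, laid out along the lines of option C: the Parquet inputs are staged from GCS onto the cluster's HDFS (e.g. with hadoop distcp) before the job runs, and the results are written back to HDFS to be copied out afterwards. The paths, column names, shuffle-partition count, and the aggregation itself are hypothetical placeholders, not part of the question.

```python
# Hypothetical shuffle-heavy analytical job on Dataproc.
# Assumes the Parquet inputs were already copied from GCS to HDFS
# (e.g. via `hadoop distcp`) so that scan and shuffle I/O stay local.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("shuffle-heavy-analytics")
    # Assumption: tune shuffle partitions roughly to the input size.
    .config("spark.sql.shuffle.partitions", "400")
    .getOrCreate()
)

# Hypothetical HDFS staging path for the 200-400 MB Parquet files.
events = spark.read.parquet("hdfs:///staging/events/")

# Representative shuffle-heavy steps: a wide aggregation and a join.
daily = (
    events
    .groupBy("user_id", F.to_date("event_ts").alias("day"))  # hypothetical columns
    .agg(F.count("*").alias("events"), F.sum("amount").alias("total"))
)

segments = events.select("user_id", "segment").distinct()  # hypothetical column
result = daily.join(segments, "user_id")

# Write results to HDFS; they would be copied back to GCS after the job.
result.write.mode("overwrite").parquet("hdfs:///staging/output/")
```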