Question 38 - DEA-C01 discussion
A company plans to use a provisioned Amazon EMR cluster that runs Apache Spark jobs to perform big data analysis. The company requires high reliability. A big data team must follow best practices for running cost-optimized, long-running workloads on Amazon EMR. The team must find a solution that maintains the company's current level of performance.
Which combination of resources will meet these requirements MOST cost-effectively? (Choose two.)
A. Use Hadoop Distributed File System (HDFS) as a persistent data store.
B. Use Amazon S3 as a persistent data store.
C. Use x86-based instances for core nodes and task nodes.
D. Use Graviton instances for core nodes and task nodes.
E. Use Spot Instances for all primary nodes.
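For context: AWS guidance for long-running, cost-optimized EMR clusters generally pairs Amazon S3 (via EMRFS) as the persistent data store with Graviton instance types for better price-performance, and avoids Spot Instances on primary and core nodes, where an interruption would take down the cluster or HDFS. A minimal boto3 sketch along those lines follows; the bucket name, subnet ID, instance counts, and script paths are hypothetical placeholders, not values from the question.

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Hypothetical bucket. With S3 as the persistent store (EMRFS),
# the cluster itself becomes replaceable and data survives termination.
DATA_BUCKET = "s3://example-analytics-bucket"

response = emr.run_job_flow(
    Name="spark-analytics",
    ReleaseLabel="emr-6.15.0",  # Graviton instance types require EMR 6.1 or later
    Applications=[{"Name": "Spark"}],
    LogUri=f"{DATA_BUCKET}/emr-logs/",
    ServiceRole="EMR_DefaultRole",
    JobFlowRole="EMR_EC2_DefaultRole",
    Instances={
        "Ec2SubnetId": "subnet-0123456789abcdef0",  # placeholder
        "KeepJobFlowAliveWhenNoSteps": True,        # long-running cluster
        "InstanceGroups": [
            # Primary and core nodes stay On-Demand for reliability;
            # a Spot interruption here would destabilize the cluster.
            {"InstanceRole": "MASTER", "InstanceType": "m6g.xlarge",
             "InstanceCount": 1, "Market": "ON_DEMAND"},
            {"InstanceRole": "CORE", "InstanceType": "m6g.xlarge",
             "InstanceCount": 2, "Market": "ON_DEMAND"},
            # Task nodes hold no HDFS data, so Spot is the usual
            # cost-saving lever for extra Spark executors.
            {"InstanceRole": "TASK", "InstanceType": "m6g.xlarge",
             "InstanceCount": 4, "Market": "SPOT"},
        ],
    },
    Steps=[{
        "Name": "spark-job",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit",
                     f"{DATA_BUCKET}/jobs/analysis.py",
                     "--output", f"{DATA_BUCKET}/results/"],
        },
    }],
)
print(response["JobFlowId"])
```

Note the division of labor in the sketch: S3 provides durability independent of cluster lifetime, Graviton (m6g) instances address price-performance, and Spot capacity is confined to task nodes rather than primary nodes.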