ExamGecko
Question 63 - DAS-C01 discussion


A company hosts an on-premises PostgreSQL database that contains historical data. An internal legacy application uses the database for read-only activities. The company’s business team wants to move the data to a data lake in Amazon S3 as soon as possible and enrich the data for analytics.

The company has set up an AWS Direct Connect connection between its VPC and its on-premises network. A data analytics specialist must design a solution that achieves the business team’s goals with the least operational overhead. Which solution meets these requirements?

A.
Upload the data from the on-premises PostgreSQL database to Amazon S3 by using a customized batch upload process. Use an AWS Glue crawler to catalog the data in Amazon S3. Use an AWS Glue job to enrich the data and store the result in a separate S3 bucket in Apache Parquet format. Use Amazon Athena to query the data.
B.
Create an Amazon RDS for PostgreSQL database and use AWS Database Migration Service (AWS DMS) to migrate the data into Amazon RDS. Use AWS Data Pipeline to copy and enrich the data from the Amazon RDS for PostgreSQL table and move the data to Amazon S3. Use Amazon Athena to query the data.
C.
Configure an AWS Glue crawler to use a JDBC connection to catalog the data in the on-premises database. Use an AWS Glue job to enrich the data and save the result to Amazon S3 in Apache Parquet format. Create an Amazon Redshift cluster and use Amazon Redshift Spectrum to query the data.
D.
Configure an AWS Glue crawler to use a JDBC connection to catalog the data in the on-premises database. Use an AWS Glue job to enrich the data and save the result to Amazon S3 in Apache Parquet format. Use Amazon Athena to query the data.
Suggested answer: B
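For readers weighing the options, the JDBC-crawler pattern described in options C and D can be sketched with boto3. The sketch below only builds the request payloads for `glue.create_connection` and `glue.create_crawler` rather than calling AWS, so it runs without credentials; every name, host, role ARN, and path in it is a hypothetical placeholder, not a value from the question.

```python
# Sketch of the AWS Glue setup behind options C/D: a JDBC connection to the
# on-premises PostgreSQL database (reachable from the VPC over Direct Connect)
# and a crawler that catalogs its tables into the Glue Data Catalog.
# All names (connection, crawler, role, host, path) are hypothetical.

def jdbc_connection_input(name, host, db, user, password, subnet_id, sg_id):
    """Build the ConnectionInput payload for glue.create_connection()."""
    return {
        "Name": name,
        "ConnectionType": "JDBC",
        "ConnectionProperties": {
            "JDBC_CONNECTION_URL": f"jdbc:postgresql://{host}:5432/{db}",
            "USERNAME": user,
            "PASSWORD": password,
        },
        # The crawler runs from a subnet that can route to the on-premises
        # network over the Direct Connect link.
        "PhysicalConnectionRequirements": {
            "SubnetId": subnet_id,
            "SecurityGroupIdList": [sg_id],
        },
    }

def crawler_input(name, role_arn, database, connection_name, path):
    """Build the kwargs for glue.create_crawler() with a JDBC target."""
    return {
        "Name": name,
        "Role": role_arn,
        "DatabaseName": database,
        "Targets": {
            "JdbcTargets": [
                {"ConnectionName": connection_name, "Path": path}
            ]
        },
    }

conn = jdbc_connection_input(
    "onprem-postgres", "10.0.0.15", "history", "readonly_user",
    "example-password", "subnet-0abc", "sg-0abc",
)
crawler = crawler_input(
    "onprem-postgres-crawler",
    "arn:aws:iam::123456789012:role/GlueCrawlerRole",
    "historical_db", conn["Name"], "history/public/%",
)
# With boto3 these payloads would be passed as:
#   boto3.client("glue").create_connection(ConnectionInput=conn)
#   boto3.client("glue").create_crawler(**crawler)
print(crawler["Targets"]["JdbcTargets"][0]["ConnectionName"])
```

Once the crawler has cataloged the source tables, a Glue job can read them through the same connection, enrich the data, and write Parquet to S3, where Athena (option D) queries it with no cluster to manage.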
asked 16/09/2024
Sam Poon