Question 178 - DAS-C01 discussion

An ecommerce company uses Amazon Aurora PostgreSQL to process and store live transactional data and uses Amazon Redshift for its data warehouse solution. A nightly ETL job updates the Redshift cluster with new data from the PostgreSQL database. The business has grown rapidly, and so have the size and cost of the Redshift cluster. To reduce costs, the company's data analytics team needs a solution that archives historical data and keeps only the most recent 12 months of data in Amazon Redshift. Data analysts must also be able to run analytics queries that effectively combine live transactional data in PostgreSQL, current data in Redshift, and archived historical data.

Which combination of tasks will meet these requirements? (Select THREE.)

A.
Configure the Amazon Redshift Federated Query feature to query live transactional data in the PostgreSQL database.
B.
Configure Amazon Redshift Spectrum to query live transactional data in the PostgreSQL database.
C.
Schedule a monthly job to copy data older than 12 months to Amazon S3 by using the UNLOAD command, and then delete that data from the Redshift cluster. Configure Amazon Redshift Spectrum to access historical data in Amazon S3.
D.
Schedule a monthly job to copy data older than 12 months to Amazon S3 Glacier Flexible Retrieval by using the UNLOAD command, and then delete that data from the Redshift cluster. Configure Redshift Spectrum to access historical data with S3 Glacier Flexible Retrieval.
E.
Create a late-binding view in Amazon Redshift that combines live, current, and historical data from different sources.
F.
Create a materialized view in Amazon Redshift that combines live, current, and historical data from different sources.
Suggested answer: A, C, E
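The three suggested tasks can be sketched in Redshift SQL. This is a minimal sketch, not a definitive implementation: every schema, table, endpoint, IAM role ARN, secret ARN, and bucket name below is a hypothetical placeholder, and the real statements depend on your cluster and AWS account setup.

```sql
-- A: Federated Query -- create an external schema over the live Aurora
-- PostgreSQL database (endpoint, role, and secret ARN are placeholders).
CREATE EXTERNAL SCHEMA postgres_live
FROM POSTGRES
DATABASE 'transactions' SCHEMA 'public'
URI 'example-aurora.cluster-abc123.us-east-1.rds.amazonaws.com' PORT 5432
IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftFederatedRole'
SECRET_ARN 'arn:aws:secretsmanager:us-east-1:123456789012:secret:example-pg-creds';

-- C: archive rows older than 12 months to S3 as Parquet, then delete them
-- from the cluster (run monthly, e.g. from a scheduled job).
UNLOAD ('SELECT * FROM public.sales
         WHERE sale_date < DATEADD(month, -12, CURRENT_DATE)')
TO 's3://example-archive-bucket/sales/'
IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftSpectrumRole'
FORMAT AS PARQUET;

DELETE FROM public.sales
WHERE sale_date < DATEADD(month, -12, CURRENT_DATE);

-- C (continued): Spectrum external schema over the archived data in S3,
-- backed by a Glue Data Catalog database.
CREATE EXTERNAL SCHEMA spectrum_archive
FROM DATA CATALOG
DATABASE 'example_archive_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftSpectrumRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS;

-- E: late-binding view combining live, current, and archived data.
-- WITH NO SCHEMA BINDING is required for a view that references
-- external (federated or Spectrum) tables.
CREATE VIEW sales_all AS
SELECT * FROM postgres_live.sales
UNION ALL
SELECT * FROM public.sales
UNION ALL
SELECT * FROM spectrum_archive.sales
WITH NO SCHEMA BINDING;
```

This also shows why the other options fall short: Redshift Spectrum queries data in Amazon S3 through a data catalog, not a relational database (B), S3 objects must be in a directly readable storage class rather than S3 Glacier Flexible Retrieval (D), and Redshift materialized views do not support references to external tables, which rules out F in favor of the late-binding view in E.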
asked 16/09/2024
Genivaldo Costa