Question 156 - SAA-C03 discussion

A company produces batch data that comes from different databases. The company also produces live stream data from network sensors and application APIs. The company needs to consolidate all the data into one place for business analytics. The company needs to process the incoming data and then stage the data in different Amazon S3 buckets. Teams will later run one-time queries and import the data into a business intelligence tool to show key performance indicators (KPIs).

Which combination of steps will meet these requirements with the LEAST operational overhead? (Choose two.)

A.
Use Amazon Athena for one-time queries. Use Amazon QuickSight to create dashboards for KPIs.
B.
Use Amazon Kinesis Data Analytics for one-time queries. Use Amazon QuickSight to create dashboards for KPIs.
C.
Create custom AWS Lambda functions to move the individual records from the databases to an Amazon Redshift cluster.
D.
Use an AWS Glue extract, transform, and load (ETL) job to convert the data into JSON format. Load the data into multiple Amazon OpenSearch Service (Amazon Elasticsearch Service) clusters.
E.
Use blueprints in AWS Lake Formation to identify the data that can be ingested into a data lake. Use AWS Glue to crawl the source, extract the data, and load the data into Amazon S3 in Apache Parquet format.
Suggested answer: A, E

Explanation:

Options A and E involve only serverless or fully managed services, which is why they carry the least operational overhead. Blueprints in AWS Lake Formation automate the ingestion workflows that move data from the source databases into the data lake, and AWS Glue can crawl the sources, extract the data, and stage it in Amazon S3 in Apache Parquet format, a columnar format that is efficient to query (a Glue crawler sketch follows below). Amazon Athena then runs one-time SQL queries directly against the staged data in S3 with no infrastructure to manage (see the second sketch), and Amazon QuickSight connects to Athena to build the KPI dashboards. Option B is wrong because Amazon Kinesis Data Analytics is built for continuous analysis of streaming data, not one-time queries. Options C and D add overhead: custom Lambda functions and an Amazon Redshift cluster must be written and maintained, and converting the data to JSON for multiple Amazon OpenSearch Service clusters is both costly and poorly suited to staging data for BI queries.

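As an illustration of the ingestion half of option E, here is a minimal boto3 sketch that creates and runs a Glue crawler over the staged Parquet prefix so the tables land in the Glue Data Catalog, where Athena can see them. The crawler, role, database, bucket, and region names are hypothetical placeholders, not values from the question:

import boto3

# All names below are hypothetical placeholders -- substitute your own.
glue = boto3.client("glue", region_name="us-east-1")

# Create a crawler pointed at the Parquet data staged in S3.
glue.create_crawler(
    Name="staged-parquet-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
    DatabaseName="analytics_lake",
    Targets={"S3Targets": [{"Path": "s3://example-staging-bucket/parquet/"}]},
)

# Run the crawler once; it infers the Parquet schema and registers
# tables in the Glue Data Catalog so Athena can query them.
glue.start_crawler(Name="staged-parquet-crawler")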

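For the query half of option A, a minimal boto3 sketch of a one-time Athena query over the cataloged data; again, the database, table, columns, and results bucket are hypothetical:

import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Submit an ad hoc SQL query against the cataloged Parquet table.
response = athena.start_query_execution(
    QueryString="SELECT region, SUM(revenue) AS total FROM sales GROUP BY region",
    QueryExecutionContext={"Database": "analytics_lake"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = response["QueryExecutionId"]

# Poll until the query finishes; Athena is serverless, so there is no
# cluster to provision -- you pay per query over the data in S3.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])

Because Athena reads the Parquet files in place, there is nothing to load or provision before teams run their one-time queries, and QuickSight can point at the same Athena table as the data source for its KPI dashboards.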