Question 51 - BDS-C00 discussion

A medical record filing system for a government medical fund uses an Amazon S3 bucket to archive documents related to patients. Every patient visit to a physician creates a new file, which can add up to millions of files each month.

Collection of these files from each physician is handled by a batch process that runs every night using AWS Data Pipeline. This is sensitive data, so the data and any associated metadata must be encrypted at rest.

Auditors review some files on a quarterly basis to see whether the records are maintained according to regulations. Auditors must be able to locate any physical file in the S3 bucket for a given date, patient, or physician. Auditors spend a significant amount of time locating such files. What is the most cost- and time-efficient collection methodology in this situation?

A.
Use Amazon Kinesis to get the data feeds directly from physicians, batch them using a Spark application on Amazon Elastic MapReduce (EMR), and then store them in Amazon S3 with folders separated per physician.
B.
Use Amazon API Gateway to get the data feeds directly from physicians, batch them using a Spark application on Amazon Elastic MapReduce (EMR), and then store them in Amazon S3 with folders separated per physician.
C.
Use Amazon S3 event notification to populate an Amazon DynamoDB table with metadata about every file loaded to Amazon S3, and partition them based on the month and year of the file.
D.
Use Amazon S3 event notification to populate an Amazon Redshift table with metadata about every file loaded to Amazon S3, and partition them based on the month and year of the file.
Suggested answer: A
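For context on the mechanism options C and D describe, here is a minimal sketch of how an S3 event notification could populate a metadata table partitioned by month and year. The table name, key names, and the folder layout in the object key are illustrative assumptions, not part of the question; in a real deployment the handler would write each item to DynamoDB with boto3, which is left as a comment so the sketch stays self-contained.

```python
from datetime import datetime

def build_metadata_item(record):
    """Turn one S3 event record into a metadata item keyed by month-year.

    The attribute names (month_year, s3_key, bucket, ingested_at) are
    illustrative assumptions for a hypothetical 'file-metadata' table.
    """
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    event_time = record["eventTime"]  # ISO 8601, e.g. '2024-09-16T02:10:00.000Z'
    dt = datetime.fromisoformat(event_time.replace("Z", "+00:00"))
    return {
        "month_year": dt.strftime("%Y-%m"),  # partition attribute: month and year
        "s3_key": key,                       # lets auditors jump straight to the file
        "bucket": bucket,
        "ingested_at": event_time,
    }

def handler(event, context=None):
    # In a real Lambda you would persist each item, e.g.:
    #   boto3.resource("dynamodb").Table("file-metadata").put_item(Item=item)
    return [build_metadata_item(r) for r in event["Records"]]

# Example S3 event payload (shape matches the S3 notification format;
# the bucket and key values are made up for illustration).
example_event = {
    "Records": [{
        "eventTime": "2024-09-16T02:10:00.000Z",
        "s3": {"bucket": {"name": "medical-archive"},
               "object": {"key": "physician-42/patient-7/2024-09-15.pdf"}},
    }]
}

items = handler(example_event)
print(items[0]["month_year"])  # → 2024-09
```

Because the table holds only metadata, auditors can query it by month, physician, or patient (given suitable keys or indexes) and then fetch the exact object from S3 by its stored key, instead of browsing the bucket directly.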
asked 16/09/2024
Zeshan Tariq