Amazon BDS-C00 Practice Test - Questions Answers, Page 6


A medical record filing system for a government medical fund uses an Amazon S3 bucket to archive documents related to patients. Every patient visit to a physician creates a new file, which can add up to millions of files each month.

Collection of these files from each physician is handled via a batch process that runs every night using AWS Data Pipeline. This is sensitive data, so the data and any associated metadata must be encrypted at rest.

Auditors review some files on a quarterly basis to see whether the records are maintained according to regulations. Auditors must be able to locate any physical file in the S3 bucket for a given date, patient, or physician. Auditors spend a significant amount of time locating such files. What is the most cost- and time-efficient collection methodology in this situation?

A.
Use Amazon Kinesis to get the data feeds directly from physicians, batch them using a Spark application on Amazon Elastic MapReduce (EMR), and then store them in Amazon S3 with folders separated per physician.
B.
Use Amazon API Gateway to get the data feeds directly from physicians, batch them using a Spark application on Amazon Elastic MapReduce (EMR), and then store them in Amazon S3 with folders separated per physician.
C.
Use Amazon S3 event notification to populate an Amazon DynamoDB table with metadata about every file loaded to Amazon S3, and partition them based on the month and year of the file.
D.
Use Amazon S3 event notification to populate an Amazon Redshift table with metadata about every file loaded to Amazon S3, and partition them based on the month and year of the file.
Suggested answer: A
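
The suggested answer collects the feeds through Amazon Kinesis, then batches them with Spark on EMR into per-physician S3 folders. As a minimal sketch of the producer side, assuming a hypothetical stream name and record shape (none of these identifiers come from the question):

```python
import json
import boto3

kinesis = boto3.client("kinesis")

def send_visit_record(physician_id, patient_id, visit_date, document):
    """Send one patient-visit document into the stream. Keying by physician
    groups each physician's records together for the downstream Spark batch,
    which writes them under s3://<bucket>/<physician_id>/ prefixes."""
    kinesis.put_record(
        StreamName="patient-visits",   # hypothetical stream name
        PartitionKey=physician_id,
        Data=json.dumps({
            "physician_id": physician_id,
            "patient_id": patient_id,
            "visit_date": visit_date,
            "document": document,
        }).encode("utf-8"),
    )
```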

A clinical trial will rely on medical sensors to remotely assess patient health. Each physician who participates in the trial requires visual reports each morning. The reports are built from aggregations of all the sensor data taken each minute. What is the most cost-effective solution for creating this visualization each day?

A.
Use the Kinesis Aggregators Library to generate reports for reviewing the patient sensor data and generate a QuickSight visualization on the new data each morning for the physician to review.
B.
Use a transient EMR cluster that shuts down after use to aggregate the sensor data each night and generate a QuickSight visualization on the new data each morning for the physician to review.
C.
Use Spark Streaming on EMR to aggregate the patient sensor data every 15 minutes and generate a QuickSight visualization on the new data each morning for the physician to review.
D.
Use an EMR cluster to aggregate the patient sensor data each night and provide Zeppelin notebooks that look at the new data residing on the cluster each morning for the physician to review.
Suggested answer: D
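
The suggested answer runs a nightly aggregation on the EMR cluster and leaves the results in place for Zeppelin notebooks the next morning. A rough PySpark sketch of the per-minute rollup, assuming hypothetical S3 paths and field names (patient_id, reading_time, value):

```python
# Nightly EMR job: roll raw sensor readings up to the per-minute
# aggregates that the morning Zeppelin notebooks visualize.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("nightly-sensor-rollup").getOrCreate()

readings = (
    spark.read.json("s3://trial-data/raw/")           # hypothetical input path
    .withColumn("reading_time", F.to_timestamp("reading_time"))
)

per_minute = (
    readings
    .withColumn("minute", F.date_trunc("minute", F.col("reading_time")))
    .groupBy("patient_id", "minute")
    .agg(
        F.avg("value").alias("avg_value"),
        F.min("value").alias("min_value"),
        F.max("value").alias("max_value"),
    )
)

per_minute.write.mode("overwrite").parquet("s3://trial-data/per-minute/")
```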

A company uses Amazon Redshift for its enterprise data warehouse. A new on-premises PostgreSQL OLTP DB must be integrated into the data warehouse. Each table in the PostgreSQL DB has an indexed last_modified timestamp column. The data warehouse has a staging layer to load source data into the data warehouse environment for further processing.

The data lag between the source PostgreSQL DB and the Amazon Redshift staging layer should NOT exceed four hours. What is the most efficient technique to meet these requirements?

A.
Create a DBLINK on the source DB to connect to Amazon Redshift. Use a PostgreSQL trigger on the source table to capture the new insert/update/delete event and execute the event on the Amazon Redshift staging table.
B.
Use a PostgreSQL trigger on the source table to capture the new insert/update/delete event and write it to Amazon Kinesis Streams. Use a KCL application to execute the event on the Amazon Redshift staging table.
C.
Extract the incremental changes periodically using a SQL query. Upload the changes to multiple Amazon Simple Storage Service (S3) objects, and run the COPY command to load to the Amazon Redshift staging layer.
D.
Extract the incremental changes periodically using a SQL query. Upload the changes to a single Amazon Simple Storage Service (S3) object, and run the COPY command to load to the Amazon Redshift staging layer.
Suggested answer: C
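
The suggested answer leans on the indexed last_modified column: pull only rows changed since the previous run, stage them as several S3 objects so the Redshift COPY can load slices in parallel, and repeat well inside the four-hour window. A hedged sketch, assuming a hypothetical orders source table and staging bucket:

```python
import csv
import io

import boto3
import psycopg2  # assumed driver for the on-premises PostgreSQL source

s3 = boto3.client("s3")

def extract_increment(conn, last_run, chunk_size=100_000):
    """Write rows changed since last_run to multiple S3 objects; several
    objects under one prefix let the Redshift COPY load in parallel."""
    with conn.cursor(name="increment") as cur:        # server-side cursor
        cur.execute(
            "SELECT * FROM orders WHERE last_modified > %s",  # hypothetical table
            (last_run,),
        )
        part = 0
        while True:
            rows = cur.fetchmany(chunk_size)
            if not rows:
                break
            buf = io.StringIO()
            csv.writer(buf).writerows(rows)
            s3.put_object(
                Bucket="dw-staging",
                Key=f"orders/batch_{part:04d}.csv",
                Body=buf.getvalue(),
            )
            part += 1

# Then, against Redshift, one COPY picks up every staged object:
#   COPY staging.orders FROM 's3://dw-staging/orders/batch_'
#   IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' FORMAT AS CSV;
```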

An administrator is deploying Spark on Amazon EMR for two distinct use cases: machine learning algorithms and ad-hoc querying. All data will be stored in Amazon S3, and a separate cluster will be deployed for each use case. The data volumes on Amazon S3 are less than 10 GB. How should the administrator align instance types with each cluster's purpose?

A.
Machine Learning on C instance types and ad-hoc queries on R instance types
B.
Machine Learning on R instance types and ad-hoc queries on G2 instance types
C.
Machine Learning on T instance types and ad-hoc queries on M instance types
D.
Machine Learning on D instance types and ad-hoc queries on I instance types
Suggested answer: A
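
The reasoning behind the suggested answer: machine learning is typically CPU-bound, so compute-optimized (C family) instances fit, while ad-hoc Spark queries benefit from the extra RAM of memory-optimized (R family) instances for caching and shuffles. A hedged boto3 illustration; instance types, counts, and names are examples only:

```python
import boto3

emr = boto3.client("emr")

def launch_spark_cluster(name, instance_type, count):
    """Launch a small Spark cluster; the instance family matches the workload."""
    return emr.run_job_flow(
        Name=name,
        ReleaseLabel="emr-5.36.0",             # any Spark-capable release
        Applications=[{"Name": "Spark"}],
        Instances={
            "MasterInstanceType": instance_type,
            "SlaveInstanceType": instance_type,
            "InstanceCount": count,
            "KeepJobFlowAliveWhenNoSteps": True,
        },
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )

launch_spark_cluster("ml-training", "c5.xlarge", 3)    # CPU-bound ML
launch_spark_cluster("adhoc-queries", "r5.xlarge", 3)  # memory-hungry SQL
```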

An organization is designing an application architecture. The application will have over 100 TB of data and will support transactions that arrive at rates from hundreds per second to tens of thousands per second, depending on the day of the week and time of day. All transaction data must be durably and reliably stored. Certain read operations must be performed with strong consistency. Which solution meets these requirements?

A.
Use Amazon DynamoDB as the data store and use strongly consistent reads when necessary.
B.
Use an Amazon Relational Database Service (RDS) instance sized to meet the maximum anticipated transaction rate and with the High Availability option enabled.
C.
Deploy a NoSQL data store on top of an Amazon Elastic MapReduce (EMR) cluster, and select the HDFS High Durability option.
D.
Use Amazon Redshift with synchronous replication to Amazon Simple Storage Service (S3) and row-level locking for strong consistency.
Suggested answer: A
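
DynamoDB reads are eventually consistent unless you ask otherwise, which is why the suggested answer works: request strong consistency only on the reads that need it. A minimal boto3 sketch with a hypothetical table and key schema:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("transactions")        # hypothetical table name

# Reads are eventually consistent by default; ConsistentRead=True asks
# DynamoDB for a strongly consistent read on this single item.
response = table.get_item(
    Key={"transaction_id": "txn-0001"},       # hypothetical key schema
    ConsistentRead=True,
)
item = response.get("Item")
```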

A company generates a large number of files each month and needs to use AWS Import/Export to move these files into Amazon S3 storage. To satisfy the auditors, the company needs to keep a record of which files were imported into Amazon S3. What is a low-cost way to create a unique log for each import job?

A.
Use the same log file prefix in the import/export manifest files to create a versioned log file in Amazon S3 for all imports.
B.
Use the log file prefix in the import/export manifest files to create a unique log file in Amazon S3 for each import.
C.
Use the log file checksum in the import/export manifest files to create a unique log file in Amazon S3 for each import.
D.
Use a script to iterate over files in Amazon S3 to generate a log after each import/export job.
Suggested answer: B
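
The suggested answer turns on giving each job's manifest its own log file prefix, so every import writes a distinct log object to S3. A small sketch of generating such a manifest per job; treat the exact manifest keys (logBucket, logPrefix) as assumptions to verify against the AWS Import/Export documentation:

```python
import datetime

def build_manifest(job_name, bucket, device_id):
    """Compose an import manifest whose log prefix is unique per job, so
    each import produces its own log object in S3."""
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")
    return "\n".join([
        "manifestVersion: 2.0",
        f"bucket: {bucket}",
        f"deviceId: {device_id}",
        f"logBucket: {bucket}",
        f"logPrefix: import-logs/{job_name}-{stamp}/",  # unique per job
        "eraseDevice: no",
    ])

print(build_manifest("monthly-2024-01", "records-archive", "ABCDE"))
```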

A company needs a churn prevention model to predict which customers will NOT renew their yearly subscription to the company's service. The company plans to provide these customers with a promotional offer. A binary classification model that uses Amazon Machine Learning is required. On which basis should this binary classification model be built?

A.
User profiles (age, gender, income, occupation)
B.
Last user session
C.
Each user's time-series events from the past 3 months
D.
Quarterly results
Suggested answer: C
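
The suggested basis is each user's recent event history, which must be flattened into one labeled row per user before a binary classifier such as Amazon Machine Learning can train on it. An illustrative sketch with made-up event types and a renewed target label:

```python
from collections import defaultdict

def build_training_rows(events, renewals):
    """events: iterable of (user_id, event_type) from the past 3 months;
    renewals: {user_id: bool} observed outcomes used as the target label."""
    counts = defaultdict(lambda: defaultdict(int))
    for user_id, event_type in events:
        counts[user_id][event_type] += 1
    for user_id, per_type in counts.items():
        yield {
            "user_id": user_id,
            "logins": per_type.get("login", 0),            # made-up features
            "support_tickets": per_type.get("ticket", 0),
            "renewed": int(renewals.get(user_id, False)),  # binary target
        }
```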

A company with a support organization needs support engineers to be able to search historic cases to provide fast responses on new issues raised. The company has forwarded all support messages into an Amazon Kinesis Stream. This meets a company objective of using only managed services to reduce operational overhead.

The company needs an appropriate architecture that allows support engineers to search on historic cases and find similar issues and their associated responses. Which AWS Lambda action is most appropriate?

A.
Ingest and index the content into an Amazon Elasticsearch domain.
B.
Stem and tokenize the input and store the results into Amazon ElastiCache.
C.
Write data as JSON into Amazon DynamoDB with primary and secondary indexes.
D.
Aggregate feedback in Amazon S3 using a columnar format with partitioning.
Suggested answer: A
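
The suggested Lambda action consumes the Kinesis stream and indexes each message into an Amazon Elasticsearch domain, giving engineers full-text search over historic cases. A stripped-down handler sketch; the domain endpoint and index name are hypothetical, and SigV4 request signing is omitted for brevity:

```python
import base64
import json
import urllib.request

# Hypothetical Amazon Elasticsearch domain endpoint and index.
ES_ENDPOINT = "https://search-support-xxxx.us-east-1.es.amazonaws.com"

def handler(event, context):
    """Triggered by the Kinesis stream: decode each support message and
    index it so engineers can search historic cases."""
    for record in event["Records"]:
        message = json.loads(
            base64.b64decode(record["kinesis"]["data"]).decode("utf-8")
        )
        req = urllib.request.Request(
            url=f"{ES_ENDPOINT}/cases/_doc",
            data=json.dumps(message).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        urllib.request.urlopen(req)   # SigV4 signing omitted in this sketch
```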

A solutions architect works for a company that has a data lake based on a central Amazon S3 bucket. The data contains sensitive information. The architect must be able to specify exactly which files each user can access. Users access the platform through a SAML-federated Single Sign-On platform. The architect needs to build a solution that allows fine-grained access control, traceability of access to the objects, and use of standard tools (AWS Console, AWS CLI) to access the data.

Which solution should the architect build?

A.
Use Amazon S3 Server-Side Encryption with AWS KMS-Managed Keys for storing data. Use AWS KMS Grants to allow access to specific elements of the platform. Use AWS CloudTrail for auditing.
B.
Use Amazon S3 Server-Side Encryption with Amazon S3-Managed Keys. Set Amazon S3 ACLs to allow access to specific elements of the platform. Use Amazon S3 access logs for auditing.
C.
Use Amazon S3 Client-Side Encryption with a Client-Side Master Key. Set Amazon S3 ACLs to allow access to specific elements of the platform. Use Amazon S3 access logs for auditing.
D.
Use Amazon S3 Client-Side Encryption with AWS KMS-Managed Keys for storing data. Use AWS KMS Grants to allow access to specific elements of the platform. Use AWS CloudTrail for auditing.
Suggested answer: D
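
The suggested answer pairs client-side encryption under KMS-managed keys with KMS grants for per-user access and CloudTrail for traceability: a user can only decrypt an object if a grant on the object's key allows it, and every key use is logged. A minimal boto3 sketch of issuing a decrypt-only grant; all identifiers are examples:

```python
import boto3

kms = boto3.client("kms")

# Allow one principal to decrypt objects encrypted under this key; KMS
# records every use of the grant in AWS CloudTrail. ARNs are examples.
grant = kms.create_grant(
    KeyId="arn:aws:kms:us-east-1:123456789012:key/example-key-id",
    GranteePrincipal="arn:aws:iam::123456789012:role/analyst-role",
    Operations=["Decrypt"],
)
```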

A company that provides economics data dashboards needs to be able to develop software to display rich, interactive, data-driven graphics that run in web browsers and leverage the full stack of web standards (HTML, SVG, and CSS). Which technology provides the most appropriate support for these requirements?

A.
D3.js
B.
IPython/Jupyter
C.
R Studio
D.
Hue
Suggested answer: A

Explanation:

Reference: https://sa.udacity.com/course/data-visualization-and-d3js--ud507
