Amazon BDS-C00 Practice Test - Questions Answers, Page 4

An organization needs a data store to handle the following data types and access patterns:

- Faceting
- Search
- Flexible schema (JSON) and fixed schema
- Noise word elimination

Which data store should the organization choose?

A. Amazon Relational Database Service (RDS)
B. Amazon Redshift
C. Amazon DynamoDB
D. Amazon Elasticsearch Service
Suggested answer: D
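
For context, a brief sketch of why Amazon Elasticsearch Service fits: "noise word" elimination maps to a stop-word token filter, faceting maps to terms aggregations, and mappings can be dynamic (flexible JSON) or strict (fixed schema). The domain endpoint and index name below are hypothetical, and request signing is omitted for brevity.

import json
import requests  # in practice, requests to Amazon ES are usually SigV4-signed

ES_ENDPOINT = "https://search-example-domain.us-east-1.es.amazonaws.com"  # hypothetical

# A stop-word filter handles noise word elimination; a keyword field
# provides the exact values needed for faceting.
index_body = {
    "settings": {
        "analysis": {
            "analyzer": {
                "no_noise": {
                    "type": "custom",
                    "tokenizer": "standard",
                    "filter": ["lowercase", "stop"],  # "stop" drops English noise words
                }
            }
        }
    },
    "mappings": {
        "doc": {
            "properties": {
                "title": {"type": "text", "analyzer": "no_noise"},
                "category": {"type": "keyword"},  # facet field
            }
        }
    },
}
requests.put(ES_ENDPOINT + "/catalog", json=index_body)

# Faceting: count documents per category with a terms aggregation.
facet_query = {"size": 0, "aggs": {"by_category": {"terms": {"field": "category"}}}}
print(json.dumps(requests.get(ES_ENDPOINT + "/catalog/_search", json=facet_query).json(), indent=2))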

A travel website needs to present a graphical, quantitative summary of its daily bookings to website visitors for marketing purposes. The website has millions of visitors per day, but the company wants to control costs by implementing the least expensive solution for this visualization. What is the most cost-effective solution?

A. Generate a static graph with a transient EMR cluster daily, and store it in Amazon S3.
B. Generate a graph using MicroStrategy backed by a transient EMR cluster.
C. Implement a Jupyter front-end provided by a continuously running EMR cluster leveraging spot instances for task nodes.
D. Implement a Zeppelin application that runs on a long-running EMR cluster.
Suggested answer: A
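
To illustrate option A, a minimal sketch of the final step: the transient EMR job produces the daily aggregates, and a small script renders one static chart and publishes it to S3, where the website serves it as an ordinary object. The bucket, key, and sample data are hypothetical.

import boto3
import matplotlib
matplotlib.use("Agg")  # headless rendering, no display required
import matplotlib.pyplot as plt

# Placeholder aggregates standing in for the EMR job's daily output.
hours = list(range(24))
bookings = [100 + 5 * h for h in hours]

plt.figure(figsize=(8, 4))
plt.bar(hours, bookings)
plt.title("Daily bookings by hour")
plt.xlabel("Hour (UTC)")
plt.ylabel("Bookings")
plt.savefig("/tmp/daily_bookings.png")

# Publish once per day; millions of visitors then read a cheap static object.
boto3.client("s3").upload_file("/tmp/daily_bookings.png",
                               "example-marketing-assets",
                               "charts/daily_bookings.png")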

A systems engineer for a company proposes digitization and backup of large archives for customers. The systems engineer needs to provide users with secure storage that guarantees data can never be tampered with once it has been uploaded. How should this be accomplished?

A. Create an Amazon Glacier Vault. Specify a "Deny" Vault Lock policy on this Vault to block "glacier:DeleteArchive".
B. Create an Amazon S3 bucket. Specify a "Deny" bucket policy on this bucket to block "s3:DeleteObject".
C. Create an Amazon Glacier Vault. Specify a "Deny" vault access policy on this Vault to block "glacier:DeleteArchive".
D. Create a secondary AWS account containing an Amazon S3 bucket. Grant "s3:PutObject" to the primary account.
Suggested answer: A

Explanation:

Reference: https://docs.aws.amazon.com/amazonglacier/latest/dev/vault-lock-policy.html
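
Once a Vault Lock policy is locked it can no longer be altered or removed, whereas a vault access policy can be modified at any time, which is why the Vault Lock variant is the tamper-proof choice. Below is a minimal boto3 sketch of that flow; the vault name, region, and account ID are hypothetical.

import json
import boto3

glacier = boto3.client("glacier")
VAULT = "customer-archives"  # hypothetical vault

lock_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "deny-archive-deletion",
        "Principal": "*",
        "Effect": "Deny",
        "Action": "glacier:DeleteArchive",
        "Resource": "arn:aws:glacier:us-east-1:123456789012:vaults/customer-archives",
    }],
}

# Initiating the lock attaches the policy in an in-progress state (24-hour window).
resp = glacier.initiate_vault_lock(
    accountId="-",  # "-" targets the account of the caller's credentials
    vaultName=VAULT,
    policy={"Policy": json.dumps(lock_policy)},
)

# Completing the lock makes the policy permanent and immutable.
glacier.complete_vault_lock(accountId="-", vaultName=VAULT, lockId=resp["lockId"])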

An organization needs to design and deploy a large-scale data storage solution that will be highly durable and highly flexible with respect to the type and structure of data being stored. The data to be stored will be sent or generated from a variety of sources and must be persistently available for access and processing by multiple applications.

What is the most cost-effective technique to meet these requirements?

A. Use Amazon Simple Storage Service (S3) as the actual data storage system, coupled with appropriate tools for ingestion/acquisition of data and for subsequent processing and querying.
B. Deploy a long-running Amazon Elastic MapReduce (EMR) cluster with Amazon Elastic Block Store (EBS) volumes for persistent HDFS storage and appropriate Hadoop ecosystem tools for processing and querying.
C. Use Amazon Redshift with data replication to Amazon Simple Storage Service (S3) for comprehensive durable data storage, processing, and querying.
D. Launch an Amazon Relational Database Service (RDS) instance, and use the enterprise grade and capacity of the Amazon Aurora engine for storage, processing, and querying.
Suggested answer: A
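
S3 is the natural fit here because it is durable, schema-agnostic, and readable by many applications at once; structured and unstructured objects can sit side by side. A tiny illustrative sketch (bucket and keys hypothetical):

import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-data-lake"  # hypothetical bucket

# JSON events and fixed-layout CSV coexist in one durable store, and any
# downstream tool (EMR, Redshift COPY, ad hoc scripts) can read them.
s3.put_object(Bucket=BUCKET,
              Key="raw/events/2017-01-01/event-0001.json",
              Body=json.dumps({"user": "u1", "action": "click"}))
s3.put_object(Bucket=BUCKET,
              Key="raw/bookings/2017-01-01.csv",
              Body="booking_id,amount\n1001,250\n")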

A customer has a machine learning workflow that consists of multiple quick cycles of reads-writes-reads on Amazon S3. The customer needs to run the workflow on EMR but is concerned that reads in subsequent cycles will miss new data, critical to the machine learning, written in prior cycles. How should the customer address this?

A. Turn on EMRFS consistent view when configuring the EMR cluster.
B. Use AWS Data Pipeline to orchestrate the data processing cycles.
C. Set hadoop.data.consistency = true in the core-site.xml file.
D. Set hadoop.s3.consistency = true in the core-site.xml file.
Suggested answer: A
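
EMRFS consistent view tracks S3 object metadata in a DynamoDB table so list-after-write and read-after-overwrite operations see the newest data. A hedged boto3 sketch of turning it on at cluster creation (name, sizes, and release label are hypothetical); the emrfs-site classification is the relevant part:

import boto3

emr = boto3.client("emr")

emr.run_job_flow(
    Name="ml-read-write-cycles",
    ReleaseLabel="emr-5.10.0",
    Instances={
        "MasterInstanceType": "m4.large",
        "SlaveInstanceType": "m4.large",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    Configurations=[{
        "Classification": "emrfs-site",
        "Properties": {"fs.s3.consistent": "true"},  # enables EMRFS consistent view
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)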

An Amazon Redshift database is encrypted using KMS. A data engineer needs to use the AWS CLI to create a KMS-encrypted snapshot of the database in another AWS region. Which three steps should the data engineer take to accomplish this task? (Choose three.)

A. Create a new KMS key in the destination region.
B. Copy the existing KMS key to the destination region.
C. Use CreateSnapshotCopyGrant to allow Amazon Redshift to use the KMS key from the source region.
D. In the source region, enable cross-region replication and specify the name of the copy grant created.
E. In the destination region, enable cross-region replication and specify the name of the copy grant created.
F. Use CreateSnapshotCopyGrant to allow Amazon Redshift to use the KMS key created in the destination region.
Suggested answer: A, D, F
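
The three steps chain together: a new key must exist in the destination region (KMS keys are regional and cannot be copied), Redshift needs a snapshot copy grant for that key, and cross-region snapshot copy is then enabled from the source side. The question asks for the CLI; the boto3 sketch below maps one-to-one onto aws kms create-key, aws redshift create-snapshot-copy-grant, and aws redshift enable-snapshot-copy. Regions, names, and the retention period are hypothetical.

import boto3

# A: create a KMS key in the destination region.
kms_dest = boto3.client("kms", region_name="us-west-2")
key_arn = kms_dest.create_key(Description="Redshift snapshot copies")["KeyMetadata"]["Arn"]

# F: create a snapshot copy grant in the destination region for that key.
redshift_dest = boto3.client("redshift", region_name="us-west-2")
redshift_dest.create_snapshot_copy_grant(
    SnapshotCopyGrantName="example-copy-grant",
    KmsKeyId=key_arn,
)

# D: in the source region, enable cross-region snapshot copy and name the grant,
# so copied snapshots are re-encrypted with the destination-region key.
redshift_src = boto3.client("redshift", region_name="us-east-1")
redshift_src.enable_snapshot_copy(
    ClusterIdentifier="example-cluster",
    DestinationRegion="us-west-2",
    RetentionPeriod=7,
    SnapshotCopyGrantName="example-copy-grant",
)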

Managers in a company need access to the human resources database, which runs on Amazon Redshift, to run reports about their employees. Managers must see only information about their direct reports.

Which technique should be used to address this requirement with Amazon Redshift?

A. Define an IAM group for each manager with each employee as an IAM user in that group, and use that to limit access.
B. Use an Amazon Redshift snapshot to create one cluster per manager. Allow the managers to access only their designated clusters.
C. Define a key for each manager in AWS KMS and encrypt the data for their employees with their private keys.
D. Define a view that uses the employee's manager name to filter the records based on current user names.
Suggested answer: D
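
IAM groups (option A) cannot filter rows inside Redshift; the standard row-level technique is a view that compares a manager column against the session's database user. A sketch with hypothetical table and column names, submitted here through the Redshift Data API (any SQL client would do):

import boto3

CREATE_VIEW_SQL = """
CREATE VIEW direct_reports AS
SELECT employee_id, employee_name, salary, manager_username
FROM hr.employees
WHERE manager_username = CURRENT_USER;
"""

# Each manager logs in as their own database user; CURRENT_USER then limits
# the view to that manager's direct reports. Grant SELECT on the view to the
# managers group and revoke their direct access to hr.employees.
boto3.client("redshift-data").execute_statement(
    ClusterIdentifier="hr-cluster",  # hypothetical cluster
    Database="hr",
    DbUser="admin",
    Sql=CREATE_VIEW_SQL,
)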

A company is building a new application in AWS. The architect needs to design a system to collect application log events. The design should be a repeatable pattern that minimizes data loss if an application instance fails, and keeps a durable copy of the log data for at least 30 days. What is the simplest architecture that will allow the architect to analyze the logs?

A. Write them directly to a Kinesis Firehose. Configure Kinesis Firehose to load the events into an Amazon Redshift cluster for analysis.
B. Write them to a file on Amazon Simple Storage Service (S3). Write an AWS Lambda function that runs in response to the S3 event to load the events into Amazon Elasticsearch Service for analysis.
C. Write them to the local disk and configure the Amazon CloudWatch Logs agent to load the data into CloudWatch Logs and subsequently into Amazon Elasticsearch Service.
D. Write them to CloudWatch Logs and use an AWS Lambda function to load them into HDFS on an Amazon Elastic MapReduce (EMR) cluster for analysis.
Suggested answer: B
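
A minimal sketch of the Lambda half of option B: the function fires on each S3 object-created event, reads the log file (the durable 30-day copy stays in S3, e.g. via a lifecycle rule), and bulk-indexes the lines into Amazon Elasticsearch Service. The domain endpoint and index are hypothetical, and request signing is omitted.

import json
import urllib.parse
import boto3
import requests  # assumed to be packaged with the function

ES_BULK_URL = "https://search-example-logs.us-east-1.es.amazonaws.com/_bulk"  # hypothetical
s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:  # one record per uploaded log file
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

        # Bulk format: an action line followed by a document line per log event.
        lines = []
        for log_line in body.splitlines():
            lines.append(json.dumps({"index": {"_index": "app-logs", "_type": "event"}}))
            lines.append(json.dumps({"message": log_line}))
        requests.post(ES_BULK_URL,
                      data="\n".join(lines) + "\n",
                      headers={"Content-Type": "application/x-ndjson"})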

An organization uses a custom MapReduce application to build monthly reports based on many small data files in an Amazon S3 bucket. The data is submitted from various business units on a frequent but unpredictable schedule. As the dataset continues to grow, it becomes increasingly difficult to process all of the data in one day. The organization has scaled up its Amazon EMR cluster, but other optimizations could improve performance.

The organization needs to improve performance with minimal changes to existing processes and applications. What action should the organization take?

A. Use Amazon S3 Event Notifications and AWS Lambda to create a quick search file index in DynamoDB.
B. Add Spark to the Amazon EMR cluster and utilize Resilient Distributed Datasets in-memory.
C. Use Amazon S3 Event Notifications and AWS Lambda to index each file into an Amazon Elasticsearch Service cluster.
D. Schedule a daily AWS Data Pipeline process that aggregates content into larger files using S3DistCp.
E. Have business units submit data via Amazon Kinesis Firehose to aggregate data hourly into Amazon S3.
Suggested answer: D
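
This is the classic small-files problem: each small S3 object becomes its own map task, so compacting inputs with S3DistCp drastically cuts task overhead without touching the existing MapReduce code, unlike a Spark rewrite. A hedged sketch of the daily compaction step (Data Pipeline would schedule something equivalent; the cluster ID, paths, and sizes are hypothetical):

import boto3

emr = boto3.client("emr")

emr.add_job_flow_steps(
    JobFlowId="j-EXAMPLE12345",  # hypothetical cluster ID
    Steps=[{
        "Name": "daily-small-file-compaction",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": [
                "s3-dist-cp",
                "--src", "s3://example-bucket/raw/",
                "--dest", "s3://example-bucket/aggregated/",
                "--groupBy", r".*/(\d{4}-\d{2})-\d{2}/.*",  # merge a month's files together
                "--targetSize", "1024",  # aim for ~1 GiB output files (value in MiB)
            ],
        },
    }],
)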

An administrator is processing events in near real-time using Amazon Kinesis Streams and AWS Lambda. Lambda intermittently fails to process batches from one of the shards due to the 5-minute function time limit.

What is a possible solution for this problem?

A. Add more Lambda functions to improve concurrent batch processing.
B. Reduce the batch size that Lambda is reading from the stream.
C. Ignore and skip events that are older than 5 minutes and send them to a Dead Letter Queue (DLQ).
D. Configure Lambda to read from fewer shards in parallel.
Suggested answer: B
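
With Kinesis, Lambda processes one batch per shard at a time, so the lever for a timeout is how much each invocation has to do; shrinking the batch size lets each run finish inside the limit. A hedged sketch (the mapping UUID and new size are hypothetical):

import boto3

lambda_client = boto3.client("lambda")

# Smaller batches mean less work per invocation, so each run can finish
# within the function timeout. 100 is the default batch size for Kinesis.
lambda_client.update_event_source_mapping(
    UUID="12345678-90ab-cdef-1234-567890abcdef",  # hypothetical mapping ID
    BatchSize=25,
)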