
Amazon BDS-C00 Practice Test - Questions Answers, Page 2

Question 11

An administrator needs to manage a large catalog of items from various external sellers. The administrator needs to determine if the items should be identified as minimally dangerous, dangerous, or highly dangerous based on their textual descriptions. The administrator already has some items with the danger attribute, but receives hundreds of new item descriptions every day without such classification.

The administrator has a system that captures dangerous goods reports from the customer support team or from user feedback. What is a cost-effective architecture to solve this issue?

A. Build a set of regular expression rules that are based on the existing examples, and run them on the DynamoDB Streams as every new item description is added to the system.
B. Build a Kinesis Streams process that captures and marks the relevant items in the dangerous goods reports using a Lambda function once more than two reports have been filed.
C. Build a machine learning model to properly classify dangerous goods and run it on the DynamoDB Streams as every new item description is added to the system.
D. Build a machine learning model with binary classification for dangerous goods and run it on the DynamoDB Streams as every new item description is added to the system.
Suggested answer: C
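
Answer C amounts to a three-class text classifier invoked as new items flow through DynamoDB Streams. A minimal sketch, assuming a stream-triggered Lambda function and the legacy Amazon Machine Learning real-time predict API; the table name, attribute keys, model ID, and endpoint are illustrative placeholders, not details from the question:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
items_table = dynamodb.Table("catalog-items")  # placeholder table name
ml = boto3.client("machinelearning")           # legacy Amazon ML service

def classify_danger(description):
    # Real-time prediction against a trained multiclass model;
    # the model ID and endpoint URL are placeholders.
    resp = ml.predict(
        MLModelId="ml-XXXXXXXXXXXX",
        Record={"description": description},
        PredictEndpoint="https://realtime.machinelearning.us-east-1.amazonaws.com",
    )
    return resp["Prediction"]["predictedLabel"]

def handler(event, context):
    # Invoked by the DynamoDB stream on the catalog table.
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue
        image = record["dynamodb"]["NewImage"]
        # Classify the description and write the danger attribute back.
        items_table.update_item(
            Key={"itemId": image["itemId"]["S"]},
            UpdateExpression="SET danger = :d",
            ExpressionAttributeValues={
                ":d": classify_danger(image["description"]["S"])
            },
        )
```

A multiclass model fits here because the catalog needs three danger levels, which is why the binary-classification variant in option D falls short.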

Question 12

A company receives data sets coming from external providers on Amazon S3. Data sets from different providers are dependent on one another. Data sets will arrive at different times and in no particular order.

A data architect needs to design a solution that enables the company to do the following:

Rapidly perform cross-data-set analysis as soon as the data becomes available.
Manage dependencies between data sets that arrive at different times.

Which architecture strategy offers a scalable and cost-effective solution that meets these requirements?

A. Maintain data dependency information in Amazon RDS for MySQL. Use an AWS Data Pipeline job to load an Amazon EMR Hive table based on task dependencies and event notification triggers in Amazon S3.
B. Maintain data dependency information in an Amazon DynamoDB table. Use Amazon SNS and event notifications to publish data to a fleet of Amazon EC2 workers. Once the task dependencies have been resolved, process the data with Amazon EMR.
C. Maintain data dependency information in an Amazon ElastiCache Redis cluster. Use Amazon S3 event notifications to trigger an AWS Lambda function that maps the S3 object to Redis. Once the task dependencies have been resolved, process the data with Amazon EMR.
D. Maintain data dependency information in an Amazon DynamoDB table. Use Amazon S3 event notifications to trigger an AWS Lambda function that maps the S3 object to the task associated with it in DynamoDB. Once all task dependencies have been resolved, process the data with Amazon EMR.
Suggested answer: C
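
The suggested answer keys on a Lambda function that records each arriving S3 object in ElastiCache Redis and launches EMR processing only once every dependency is present. A minimal sketch, assuming redis-py for the Redis client; the endpoint, dataset names, key layout, and cluster ID are illustrative, not from the question:

```python
import boto3
import redis

# Hypothetical ElastiCache endpoint and dependency definition.
r = redis.Redis(host="deps.example.cache.amazonaws.com", port=6379)
REQUIRED_DATASETS = {"provider-a/orders", "provider-b/customers"}
emr = boto3.client("emr")

def handler(event, context):
    # Invoked by an S3 event notification for each arriving object.
    for rec in event["Records"]:
        key = rec["s3"]["object"]["key"]
        dataset = "/".join(key.split("/")[:2])  # e.g. "provider-a/orders"
        r.sadd("arrived", dataset)

    # Once all task dependencies are resolved, run the analysis on EMR.
    if r.smembers("arrived") >= {d.encode() for d in REQUIRED_DATASETS}:
        emr.add_job_flow_steps(
            JobFlowId="j-XXXXXXXXXXXXX",  # placeholder cluster ID
            Steps=[{
                "Name": "cross-dataset-analysis",
                "ActionOnFailure": "CONTINUE",
                "HadoopJarStep": {
                    "Jar": "command-runner.jar",
                    "Args": ["spark-submit", "s3://example-bucket/analysis.py"],
                },
            }],
        )
```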

Question 13

A media advertising company handles a large number of messages, sourced from over 200 websites, in real time. Processing latency must be kept low. Based on calculations, a 60-shard Amazon Kinesis stream is more than sufficient to handle the maximum data throughput, even with traffic spikes. The company also uses an Amazon Kinesis Client Library (KCL) application running on Amazon Elastic Compute Cloud (EC2), managed by an Auto Scaling group. Amazon CloudWatch indicates an average of 25% CPU and a modest level of network traffic across all running servers.

The company reports a 150% to 200% increase in latency of processing messages from Amazon Kinesis during peak times. There are NO reports of delay from the sites publishing to Amazon Kinesis.

What is the appropriate solution to address the latency?

A. Increase the number of shards in the Amazon Kinesis stream to 80 for greater concurrency.
B. Increase the size of the Amazon EC2 instances to increase network throughput.
C. Increase the minimum number of instances in the Auto Scaling group.
D. Increase Amazon DynamoDB throughput on the checkpoint table.
Suggested answer: D
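
The KCL checkpoints its lease state in a DynamoDB table named after the application, and under-provisioned throughput on that table throttles checkpointing, which surfaces as processing latency while CPU and network stay low. A minimal sketch of raising its provisioned capacity with boto3; the table name and capacity values are illustrative:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# The KCL creates a lease/checkpoint table named after the KCL application.
dynamodb.update_table(
    TableName="my-kcl-application",  # placeholder application name
    ProvisionedThroughput={
        "ReadCapacityUnits": 100,   # raised from a throttled baseline
        "WriteCapacityUnits": 100,
    },
)
```

Because CPU and network are modest while latency grows, the bottleneck is checkpointing against the lease table rather than the stream shards or the worker fleet.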

Question 14

A Redshift data warehouse has different user teams that need to query the same table with very different query types. These user teams are experiencing poor performance.

Which action improves performance for the user teams in this situation?

A. Create custom table views.
B. Add interleaved sort keys per team.
C. Maintain team-specific copies of the table.
D. Add support for workload management queue hopping.
Suggested answer: D
Explanation:

Reference: https://docs.aws.amazon.com/redshift/latest/dg/cm-c-implementing-workload-management.html
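
Queue hopping is configured through the cluster's WLM JSON, where a query monitoring rule with the hop action moves a query that exceeds a threshold into the next matching queue instead of letting it block its neighbors. A minimal sketch of applying such a configuration with boto3; the parameter group name, queue layout, and threshold are illustrative assumptions:

```python
import json
import boto3

redshift = boto3.client("redshift")

# Two queues: queries in the first queue hop out if they run too long.
wlm_config = [
    {
        "query_concurrency": 5,
        "user_group": ["analytics_team"],
        "rules": [{
            "rule_name": "hop_long_running",
            "predicate": [{"metric_name": "query_execution_time",
                           "operator": ">", "value": 120}],
            "action": "hop",
        }],
    },
    {"query_concurrency": 2},  # default queue for hopped and other queries
]

redshift.modify_cluster_parameter_group(
    ParameterGroupName="custom-wlm",  # placeholder parameter group
    Parameters=[{
        "ParameterName": "wlm_json_configuration",
        "ParameterValue": json.dumps(wlm_config),
        "ApplyType": "static",
    }],
)
```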


Question 15

A company operates an international business served from a single AWS region. The company wants to expand into a new country. The regulator for that country requires the Data Architect to maintain a log of financial transactions in the country within 24 hours of the product transaction. The production application is latency-insensitive. The new country contains another AWS region. What is the most cost-effective way to meet this requirement?

A. Use CloudFormation to replicate the production application to the new region.
B. Use Amazon CloudFront to serve application content locally in the country; Amazon CloudFront logs will satisfy the requirement.
C. Continue to serve customers from the existing region while using Amazon Kinesis to stream transaction data to the regulator.
D. Use Amazon S3 cross-region replication to copy and persist production transaction logs to a bucket in the new country's region.
Suggested answer: B
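
Answer B relies on CloudFront access logs being delivered to an S3 bucket, which can simply be created in the new country's region. A minimal sketch of switching standard logging on for an existing distribution; the distribution ID and bucket name are placeholders:

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Fetch the current config; updates must send it back with its ETag.
resp = cloudfront.get_distribution_config(Id="E1234567890ABC")  # placeholder
config = resp["DistributionConfig"]
config["Logging"] = {
    "Enabled": True,
    "IncludeCookies": False,
    "Bucket": "txn-logs.s3.amazonaws.com",  # log bucket in the new region
    "Prefix": "transactions/",
}
cloudfront.update_distribution(
    Id="E1234567890ABC",
    IfMatch=resp["ETag"],
    DistributionConfig=config,
)
```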

Question 16

An administrator needs to design the event log storage architecture for events from mobile devices. The event data will be processed by an Amazon EMR cluster daily for aggregated reporting and analytics before being archived. How should the administrator recommend storing the log data?

A. Create an Amazon S3 bucket and write log data into folders by device. Execute the EMR job on the device folders.
B. Create an Amazon DynamoDB table partitioned on the device and sorted on date, and write log data to the table. Execute the EMR job on the Amazon DynamoDB table.
C. Create an Amazon S3 bucket and write data into folders by day. Execute the EMR job on the daily folder.
D. Create an Amazon DynamoDB table partitioned on EventID, and write log data to the table. Execute the EMR job on the table.
Suggested answer: A
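
With answer A, each event lands under a per-device prefix so the daily EMR job can read a device's logs as a single input path. A minimal sketch of the write side; the bucket name, key layout, and event field names are illustrative assumptions:

```python
import json
import boto3

s3 = boto3.client("s3")

def store_event(event):
    # Partition the log data by device so EMR can target device prefixes.
    key = "logs/device={}/{}.json".format(event["device_id"], event["event_id"])
    s3.put_object(
        Bucket="mobile-event-logs",  # placeholder bucket name
        Key=key,
        Body=json.dumps(event).encode("utf-8"),
    )
```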

Question 17

A data engineer wants to use Amazon Elastic MapReduce (EMR) for an application. The data engineer needs to make sure the application complies with regulatory requirements.

The auditor must be able to confirm at any point which servers are running and which network access controls are deployed.

Which action should the data engineer take to meet this requirement?


Question 18

A social media customer has data from different data sources including RDS running MySQL, Redshift, and Hive on EMR. To support better analysis, the customer needs to be able to analyze data from different data sources and to combine the results. What is the most cost-effective solution to meet these requirements?


Question 19

An Amazon EMR cluster using EMRFS has access to petabytes of data on Amazon S3, originating from multiple unique data sources. The customer needs to query common fields across some of the data sets to be able to perform interactive joins and then display results quickly. Which technology is most appropriate to enable this capability?


Question 20

A game company needs to properly scale its game application, which is backed by DynamoDB. Amazon Redshift holds the past two years of historical data. Game traffic varies throughout the year based on various factors such as season, movie releases, and holidays. An administrator needs to calculate how much read and write throughput should be provisioned for the DynamoDB table for each week in advance. How should the administrator accomplish this task?
