Amazon DBS-C01 Practice Test - Questions Answers, Page 17

A company is running its customer feedback application on Amazon Aurora MySQL. The company runs a report every day to extract customer feedback, and a team reads the feedback to determine if the customer comments are positive or negative. It sometimes takes days before the company can contact unhappy customers and take corrective measures. The company wants to use machine learning to automate this workflow.

Which solution meets this requirement with the LEAST amount of effort?

A. Export the Aurora MySQL database to Amazon S3 by using AWS Database Migration Service (AWS DMS). Use Amazon Comprehend to run sentiment analysis on the exported files.
B. Export the Aurora MySQL database to Amazon S3 by using AWS Database Migration Service (AWS DMS). Use Amazon SageMaker to run sentiment analysis on the exported files.
C. Set up Aurora native integration with Amazon Comprehend. Use SQL functions to extract sentiment analysis.
D. Set up Aurora native integration with Amazon SageMaker. Use SQL functions to extract sentiment analysis.
Suggested answer: C

Explanation:


For details about using Aurora and Amazon Comprehend together, see Using Amazon Comprehend for sentiment detection. Aurora machine learning uses a highly optimized integration between the Aurora database and the AWS machine learning (ML) services SageMaker and Amazon Comprehend.

https://www.stackovercloud.com/2019/11/27/new-for-amazon-aurora-use-machine-learning-directly-from-your-databases/
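
As a concrete sketch of answer C (the table and column names below are hypothetical), Aurora MySQL's machine learning integration exposes Comprehend as native SQL functions, so sentiment can be scored row by row without exporting anything:

    import pymysql

    # Connect to the Aurora MySQL cluster endpoint (placeholder credentials).
    conn = pymysql.connect(
        host="feedback.cluster-xxxx.us-east-1.rds.amazonaws.com",
        user="admin", password="example-password", database="feedback",
    )

    # aws_comprehend_detect_sentiment() calls Amazon Comprehend per row once the
    # cluster's IAM role and aws_default_comprehend_role parameter are configured.
    sql = """
        SELECT feedback_id,
               comment_text,
               aws_comprehend_detect_sentiment(comment_text, 'en') AS sentiment,
               aws_comprehend_detect_sentiment_confidence(comment_text, 'en') AS confidence
        FROM customer_feedback
        WHERE created_at >= CURDATE() - INTERVAL 1 DAY
    """
    with conn.cursor() as cur:
        cur.execute(sql)
        for feedback_id, text, sentiment, confidence in cur.fetchall():
            print(feedback_id, sentiment, confidence)

A NEGATIVE sentiment value could then trigger an alert the same day, instead of waiting for the manual daily report.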

Recently, a financial institution created a portfolio management service. The application's backend is powered by Amazon Aurora MySQL.

The firm requires a recovery time objective (RTO) of 5 minutes and a recovery point objective (RPO) of 5 minutes. A database professional must create a disaster recovery solution that is both efficient and has low replication latency.

How should the database professional address these requirements?

A. Configure AWS Database Migration Service (AWS DMS) and create a replica in a different AWS Region.
B. Configure an Amazon Aurora global database and add a different AWS Region.
C. Configure a binlog and create a replica in a different AWS Region.
D. Configure a cross-Region read replica.
Suggested answer: B

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database-disaster-recovery.html

https://aws.amazon.com/blogs/database/how-to-choose-the-best-disaster-recovery-option-for-your-amazon-aurora-mysql-cluster/

https://aws.amazon.com/about-aws/whats-new/2019/11/aurora-supports-in-place-conversion-to-global-database/
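
A minimal boto3 sketch of answer B, with placeholder identifiers: convert the existing cluster to a global database in place, then attach a secondary cluster in another Region.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Promote the existing Aurora MySQL cluster into a global database in place.
    rds.create_global_cluster(
        GlobalClusterIdentifier="portfolio-global",
        SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:portfolio-primary",
    )

    # Add a secondary cluster in the DR Region. Aurora's storage-level replication
    # typically keeps cross-Region lag around a second, well within a 5-minute RPO.
    rds_dr = boto3.client("rds", region_name="us-west-2")
    rds_dr.create_db_cluster(
        DBClusterIdentifier="portfolio-secondary",
        GlobalClusterIdentifier="portfolio-global",
        Engine="aurora-mysql",
    )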

For the first time, a database professional is establishing a test graph database on Amazon Neptune. The database expert must load millions of rows of test observations from a .csv file in Amazon S3. So far, the database professional has been uploading the data to the Neptune DB instance through a series of API calls.

Which sequence of actions enables the database professional to upload the data most quickly? (Select three.)

A. Ensure Amazon Cognito returns the proper AWS STS tokens to authenticate the Neptune DB instance to the S3 bucket hosting the CSV file.
B. Ensure the vertices and edges are specified in different .csv files with proper header column formatting.
C. Use AWS DMS to move data from Amazon S3 to the Neptune Loader.
D. Curl the S3 URI while inside the Neptune DB instance and then run the addVertex or addEdge commands.
E. Ensure an IAM role for the Neptune DB instance is configured with the appropriate permissions to allow access to the file in the S3 bucket.
F. Create an S3 VPC endpoint and issue an HTTP POST to the database's loader endpoint.
Suggested answer: B, E, F

Explanation:


https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load-optimize.html
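
For answers E and F, the bulk load itself is a single HTTP POST to the cluster's loader endpoint; Neptune then pulls the .csv files straight from S3 through the VPC endpoint using the IAM role attached to the DB instance. A sketch with placeholder endpoint, bucket, and role ARN:

    import requests

    # One POST kicks off a parallel bulk load -- far faster than issuing
    # addVertex/addEdge calls row by row.
    resp = requests.post(
        "https://my-neptune.cluster-xxxx.us-east-1.neptune.amazonaws.com:8182/loader",
        json={
            "source": "s3://my-bucket/test-observations/",
            "format": "csv",
            "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
            "region": "us-east-1",
            "failOnError": "FALSE",
        },
    )
    print(resp.json())  # Returns a loadId that can be polled for load status.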

A financial services company has an application deployed on AWS that uses an Amazon Aurora PostgreSQL DB cluster. A recent audit showed that no log files contained database administrator activity. A database specialist needs to recommend a solution to provide database access and activity logs. The solution should use the least amount of effort and have a minimal impact on performance.

Which solution should the database specialist recommend?

A. Enable Aurora Database Activity Streams on the database in synchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Kinesis Data Firehose destination to an Amazon S3 bucket.
B. Create an AWS CloudTrail trail in the Region where the database runs. Associate the database activity logs with the trail.
C. Enable Aurora Database Activity Streams on the database in asynchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Firehose destination to an Amazon S3 bucket.
D. Allow connections to the DB cluster through a bastion host only. Restrict database access to the bastion host and application servers. Push the bastion host logs to Amazon CloudWatch Logs using the CloudWatch Logs agent.
Suggested answer: C

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/DBActivityStreams.Overview.html
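
A boto3 sketch of answer C with a placeholder cluster ARN; asynchronous mode is what keeps the performance impact minimal, because sessions do not block waiting for each audit record to be made durable:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Start a database activity stream in async mode.
    rds.start_activity_stream(
        ResourceArn="arn:aws:rds:us-east-1:123456789012:cluster:prod-aurora-pg",
        Mode="async",
        KmsKeyId="alias/activity-stream-key",
        ApplyImmediately=True,
    )
    # Activity records are encrypted and pushed to an Amazon Kinesis data stream,
    # from which Kinesis Data Firehose can deliver them to the S3 bucket.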

A corporation is transitioning from an IBM Informix database to an Amazon RDS for SQL Server Multi-AZ deployment with Always On Availability Groups (AGs). SQL Server Agent jobs are scheduled to run at 5-minute intervals on the Always On AG listener to synchronize data between the Informix and SQL Server databases. Although failover to the secondary node completes with minimal delay, users experience hours of stale data afterward.

How can a database professional guarantee that users see the most current data after a failover?

A. Set TTL to less than 30 seconds for cached DNS values on the Always On AG listener.
B. Break up large transactions into multiple smaller transactions that complete in less than 5 minutes.
C. Set the databases on the secondary node to read-only mode.
D. Create the SQL Server Agent jobs on the secondary node from a script when the secondary node takes over after a failure.
Suggested answer: D

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_SQLServerMultiAZ.html

If you have SQL Server Agent jobs, recreate them on the secondary. You do so because these jobs are stored in the msdb database, and you can't replicate this database by using Database Mirroring (DBM) or Always On Availability Groups (AGs). Create the jobs first in the original primary, then fail over, and create the same jobs in the new primary.
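
One way to script that recreation (a sketch; the connection string, job, and procedure names are hypothetical) is to run the msdb stored procedures against the listener, which always resolves to the current primary:

    import pyodbc

    # Connect through the Always On AG listener so the statements run on
    # whichever node is primary after the failover.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=my-ag-listener.example.com;DATABASE=msdb;"
        "UID=admin;PWD=example-password;"
    )
    conn.autocommit = True
    cur = conn.cursor()

    # Agent jobs live in msdb, which AGs do not replicate, so they must be
    # recreated on the new primary.
    cur.execute("EXEC msdb.dbo.sp_add_job @job_name = N'SyncFromInformix';")
    cur.execute(
        "EXEC msdb.dbo.sp_add_jobstep @job_name = N'SyncFromInformix', "
        "@step_name = N'RunSync', @subsystem = N'TSQL', "
        "@command = N'EXEC dbo.usp_sync_from_informix;';"
    )
    # Schedule: every 5 minutes, daily (freq_type 4, freq_subday_type 4).
    cur.execute(
        "EXEC msdb.dbo.sp_add_jobschedule @job_name = N'SyncFromInformix', "
        "@name = N'Every5Minutes', @freq_type = 4, @freq_interval = 1, "
        "@freq_subday_type = 4, @freq_subday_interval = 5;"
    )
    cur.execute("EXEC msdb.dbo.sp_add_jobserver @job_name = N'SyncFromInformix';")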

A business is migrating an Amazon RDS for SQL Server DB instance from one AWS Region to another. The organization wants to keep database downtime to a minimum during the migration.

Which migration strategy should the organization use for this cross-Region move?

A. Back up the source database using native backup to an Amazon S3 bucket in the same Region. Then restore the backup in the target Region.
B. Back up the source database using native backup to an Amazon S3 bucket in the same Region. Use Amazon S3 Cross-Region Replication to copy the backup to an S3 bucket in the target Region. Then restore the backup in the target Region.
C. Configure AWS Database Migration Service (AWS DMS) to replicate data between the source and the target databases. Once the replication is in sync, terminate the DMS task.
D. Add an RDS for SQL Server cross-Region read replica in the target Region. Once the replication is in sync, promote the read replica to master.
Suggested answer: C

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.XRgn.html

With Amazon RDS, you can create a MariaDB, MySQL, Oracle, or PostgreSQL read replica in a different AWS Region from the source DB instance. Creating a cross-Region read replica isn't supported for SQL Server on Amazon RDS.
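
A boto3 sketch of answer C with placeholder ARNs: a full-load-plus-CDC task copies the data while the source stays online, then replays ongoing changes until cutover.

    import boto3
    import json

    dms = boto3.client("dms", region_name="us-east-1")

    # Include every table; the full load runs first, then CDC keeps the target in sync.
    table_mappings = {
        "rules": [{
            "rule-type": "selection", "rule-id": "1", "rule-name": "1",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }

    dms.create_replication_task(
        ReplicationTaskIdentifier="sqlserver-cross-region-move",
        SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:source",
        TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:target",
        ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:replinstance",
        MigrationType="full-load-and-cdc",
        TableMappings=json.dumps(table_mappings),
    )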

A financial institution uses AWS to host its online application. The application's database is hosted on Amazon RDS for MySQL with automated backups enabled. The application has logically corrupted the database, causing the application to become unresponsive. The exact moment the corruption occurred has been determined, and it falls within the backup retention period.

How should a database professional restore the database to its state just before the corruption?

A. Use the point-in-time restore capability to restore the DB instance to the specified time. No changes to the application connection string are required.
B. Use the point-in-time restore capability to restore the DB instance to the specified time. Change the application connection string to the new, restored DB instance.
C. Restore using the latest automated backup. Change the application connection string to the new, restored DB instance.
D. Restore using the appropriate automated backup. No changes to the application connection string are required.
Suggested answer: B

Explanation:


When you perform a restore operation to a point in time or from a DB snapshot, a new DB instance is created with a new endpoint (the old DB instance can be deleted if so desired). This is done to enable you to create multiple DB instances from a specific DB snapshot or point in time.
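
A boto3 sketch of answer B with placeholder identifiers and a hypothetical corruption time; because the restore creates a new instance, the final step is updating the application's connection string:

    import boto3
    from datetime import datetime, timezone

    rds = boto3.client("rds", region_name="us-east-1")

    # Restore to just before the known corruption time. This always creates a
    # NEW DB instance with a NEW endpoint; the original is left untouched.
    rds.restore_db_instance_to_point_in_time(
        SourceDBInstanceIdentifier="prod-mysql",
        TargetDBInstanceIdentifier="prod-mysql-restored",
        RestoreTime=datetime(2023, 6, 1, 11, 59, 0, tzinfo=timezone.utc),
    )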

A worldwide gaming company's development team is experimenting with Amazon DynamoDB to store in-game events for three mobile titles. The most popular game has a maximum of 500,000 concurrent users, while the least popular has 10,000. The typical event is 20 KB in size, and the average user session generates one event each second. Each event is assigned a millisecond timestamp and a globally unique identifier.

The lead developer created a single DynamoDB table with the following structure for the events:

Partition key: game name
Sort key: event identifier
Local secondary index: player identifier
Event time

In a small-scale development setting, the tests were successful. When the application was deployed to production, however, new events were not being added to the table, and the logs showed DynamoDB failures with the ItemCollectionSizeLimitExceededException error code.

Which design modification should a database professional recommend to the development team?

A. Use the player identifier as the partition key. Use the event time as the sort key. Add a global secondary index with the game name as the partition key and the event time as the sort key.
B. Create two tables. Use the game name as the partition key in both tables. Use the event time as the sort key for the first table. Use the player identifier as the sort key for the second table.
C. Replace the sort key with a compound value consisting of the player identifier collated with the event time, separated by a dash. Add a local secondary index with the player identifier as the sort key.
D. Create one table for each game. Use the player identifier as the partition key. Use the event time as the sort key.
Suggested answer: D

Explanation:
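
The table uses a local secondary index, and DynamoDB limits each item collection (all items sharing one partition key value, across the base table and its LSIs) to 10 GB. With the game name as the partition key, the most popular game's event stream quickly exceeds that ceiling, producing ItemCollectionSizeLimitExceededException.

A sketch of answer D with hypothetical table and attribute names; one table per game, keyed on player identifier and event time, needs no LSI and therefore has no item collection limit:

    import boto3

    dynamodb = boto3.client("dynamodb", region_name="us-east-1")

    # One table per game removes the single hot "game name" partition, and the
    # player_id partition key spreads write traffic across many partitions.
    for game in ["game-alpha", "game-beta", "game-gamma"]:
        dynamodb.create_table(
            TableName=f"events-{game}",
            AttributeDefinitions=[
                {"AttributeName": "player_id", "AttributeType": "S"},
                {"AttributeName": "event_time", "AttributeType": "N"},
            ],
            KeySchema=[
                {"AttributeName": "player_id", "KeyType": "HASH"},
                {"AttributeName": "event_time", "KeyType": "RANGE"},
            ],
            BillingMode="PAY_PER_REQUEST",
        )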


A business's mission-critical production workload runs on a 500 GB Amazon Aurora MySQL DB cluster. A database engineer must migrate the workload to a new Amazon Aurora Serverless MySQL DB cluster without data loss.

Which approach will result in the LEAST amount of downtime and the LEAST application impact?

A. Modify the existing DB cluster and update the Aurora configuration to Serverless.
B. Create a snapshot of the existing DB cluster and restore it to a new Aurora Serverless DB cluster.
C. Create an Aurora Serverless replica from the existing DB cluster and promote it to primary when the replica lag is minimal.
D. Replicate the data between the existing DB cluster and a new Aurora Serverless DB cluster by using AWS Database Migration Service (AWS DMS) with change data capture (CDC) enabled.
Suggested answer: D

Explanation:


https://medium.com/@souri29/how-to-migrate-from-amazon-rds-aurora-or-mysql-to-amazon-aurora-serverless-55f9a4a74078
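
Answer D follows the same full-load-plus-CDC pattern sketched for the cross-Region SQL Server migration above. Before cutting the application over, a script can verify the task state (placeholder ARN):

    import boto3

    dms = boto3.client("dms", region_name="us-east-1")

    # Once the full load is at 100% and the task is applying CDC changes, the
    # application can be repointed to the serverless cluster with only a brief pause.
    task = dms.describe_replication_tasks(
        Filters=[{
            "Name": "replication-task-arn",
            "Values": ["arn:aws:dms:us-east-1:123456789012:task:aurora-to-serverless"],
        }]
    )["ReplicationTasks"][0]

    print(task["Status"])
    print(task["ReplicationTaskStats"]["FullLoadProgressPercent"])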

A database professional is developing an application that will respond to single-instance requests. The application will query large amounts of client data and present the results to end users. These results may include a variety of fields, and the database specialist wants to enable users to query the database by using any of the fields offered. During peak periods, the database's traffic volume will be significant yet variable; over the rest of the day, the database will see little activity.

Which approach will be the MOST cost-effective in meeting these requirements?

A. Amazon DynamoDB with provisioned capacity mode and auto scaling
B. Amazon DynamoDB with on-demand capacity mode
C. Amazon Aurora with auto scaling enabled
D. Amazon Aurora in a serverless mode
Suggested answer: D

Explanation:


https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html#limits-items
