Amazon DBS-C01 Practice Test - Questions Answers, Page 15

A company is using a Single-AZ Amazon RDS for MySQL DB instance for development. The DB instance is experiencing slow performance when queries are executed. Amazon CloudWatch metrics indicate that the instance requires more I/O capacity.

Which actions can a database specialist perform to resolve this issue? (Choose two.)

A.
Restart the application tool used to execute queries.
B.
Change to a database instance class with higher throughput.
C.
Convert from Single-AZ to Multi-AZ.
D.
Increase the I/O parameter in Amazon RDS Enhanced Monitoring.
E.
Convert from General Purpose to Provisioned IOPS (PIOPS).
Suggested answer: B, E

Explanation:


https://aws.amazon.com/blogs/database/best-storage-practices-for-running-production-workloads-on-hosted-databases-with-amazon-rds-or-amazon-ec2/

"If you find the pattern of IOPS usage consistently going beyond 16,000, you should modify the DB instance and change the storage type from gp2 to io1."

A company has an AWS CloudFormation template written in JSON that is used to launch new Amazon RDS for MySQL DB instances. The security team has asked a database specialist to ensure that the master password is automatically rotated every 30 days for all new DB instances that are launched using the template.

What is the MOST operationally efficient solution to meet these requirements?

A.
Save the password in an Amazon S3 object. Encrypt the S3 object with an AWS KMS key. Set the KMS key to be rotated every 30 days by setting the EnableKeyRotation property to true. Use a CloudFormation custom resource to read the S3 object to extract the password.
B.
Create an AWS Lambda function to rotate the secret. Modify the CloudFormation template to add an AWS::SecretsManager::RotationSchedule resource. Configure the RotationLambdaARN value and, for the RotationRules property, set the AutomaticallyAfterDays parameter to 30.
C.
Modify the CloudFormation template to use the AWS KMS key as the database password. Configure an Amazon EventBridge rule to invoke the KMS API to rotate the key every 30 days by setting the ScheduleExpression parameter to */30.
D.
Integrate the Amazon RDS for MySQL DB instances with AWS IAM and centrally manage the master database user password.
Suggested answer: B

Explanation:


https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-secretsmanager-rotationschedule.html
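For reference, the same rotation rule can be expressed at the API level; this is a minimal boto3 sketch, assuming a secret and a rotation Lambda function that already exist (the secret ID and ARN below are placeholders):

```python
import boto3

secretsmanager = boto3.client("secretsmanager")

# Mirrors the AWS::SecretsManager::RotationSchedule resource settings;
# the secret ID and Lambda ARN are hypothetical placeholders.
secretsmanager.rotate_secret(
    SecretId="rds-master-password",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-rds-secret",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```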

A startup company is building a new application to allow users to visualize their on-premises and cloud networking components. The company expects billions of components to be stored and requires responses in milliseconds. The application should be able to identify:

The networks and routes affected if a particular component fails.

The networks that have redundant routes between them.

The networks that do not have redundant routes between them.

The fastest path between two networks.

Which database engine meets these requirements?

A.
Amazon Aurora MySQL
B.
Amazon Neptune
C.
Amazon ElastiCache for Redis
D.
Amazon DynamoDB
Suggested answer: B

Explanation:

Amazon Neptune is a purpose-built graph database, which makes it the right fit for highly connected network topology data. Graph traversals can answer impact analysis (what is affected if a component fails), redundancy checks, and shortest-path queries across billions of vertices and edges with millisecond latency.
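As a sketch of the kind of query involved, here is a fastest-path traversal with the Gremlin Python client; the endpoint, vertex IDs, and "route" edge label are hypothetical:

```python
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Hypothetical Neptune endpoint, vertex IDs, and edge label.
conn = DriverRemoteConnection("wss://my-neptune-cluster:8182/gremlin", "g")
g = traversal().withRemote(conn)

# Shortest path between two network vertices, following route edges
# without revisiting any vertex.
path = (
    g.V("network-a")
     .repeat(__.out("route").simplePath())
     .until(__.hasId("network-b"))
     .path()
     .limit(1)
     .next()
)
conn.close()
```
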
An online retail company is planning a multi-day flash sale that must support processing of up to 5,000 orders per second. The number of orders and exact schedule for the sale will vary each day.

During the sale, approximately 10,000 concurrent users will look at the deals before buying items.

Outside of the sale, the traffic volume is very low. The acceptable performance for read/write queries should be under 25 ms. Order items are about 2 KB in size and have a unique identifier. The company requires the most cost-effective solution that will automatically scale and is highly available.

Which solution meets these requirements?

A.
Amazon DynamoDB with on-demand capacity mode
B.
Amazon Aurora with one writer node and an Aurora Replica with the parallel query feature enabled
C.
Amazon DynamoDB with provisioned capacity mode with 5,000 write capacity units (WCUs) and 10,000 read capacity units (RCUs)
D.
Amazon Aurora with one writer node and two cross-Region Aurora Replicas
Suggested answer: A

Explanation:


The number of orders and the exact schedule for the sale vary each day, and traffic outside of the sale is very low. Provisioning DynamoDB with fixed capacity (5,000 WCUs and 10,000 RCUs) would waste resources during the low-traffic periods and would not be cost-effective. On-demand capacity mode scales automatically with the workload and charges only for the requests actually served.
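A minimal boto3 sketch of creating such a table in on-demand capacity mode; the table and attribute names are placeholders:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Hypothetical table for ~2 KB order items keyed by their unique identifier.
dynamodb.create_table(
    TableName="flash-sale-orders",
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # on-demand mode: no WCUs/RCUs to provision
)
```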

A ride-hailing application uses an Amazon RDS for MySQL DB instance as persistent storage for bookings. This application is very popular, and the company expects a tenfold increase in the user base in the next few months. The application experiences more traffic during the morning and evening hours.

This application has two parts:

An in-house booking component that accepts online bookings that directly correspond to simultaneous requests from users.

A third-party customer relationship management (CRM) component used by customer care representatives. The CRM uses queries to access booking data.

A database specialist needs to design a cost-effective database solution to handle this workload.

Which solution meets these requirements?

A.
Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to the RDS for MySQL DB instance used by the CRM.
B.
Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to an Amazon SQS queue. This triggers another Lambda function that pulls data from Amazon SQS and writes it to the RDS for MySQL DB instance used by the CRM.
C.
Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to an Amazon Redshift database used by the CRM.
D.
Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to Amazon Athena, which is used by the CRM.
Suggested answer: B

Explanation:

Amazon DynamoDB scales automatically to absorb the spiky, simultaneous booking requests cost-effectively. DynamoDB Streams with AWS Lambda captures the changes, and Amazon SQS buffers the writes so the RDS for MySQL DB instance used by the CRM is not overwhelmed during peak hours. Amazon Athena is a query service for data stored in Amazon S3; data cannot be pushed to it, so option D is not viable.

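A minimal sketch of the first Lambda function in this pipeline, triggered by the DynamoDB stream and forwarding new bookings to SQS (the queue URL is a placeholder; a second function would consume the queue and write to MySQL):

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/booking-changes"  # placeholder

def handler(event, context):
    """Triggered by the DynamoDB stream; pushes new bookings onto the SQS queue."""
    for record in event["Records"]:
        if record["eventName"] == "INSERT":
            booking = record["dynamodb"]["NewImage"]
            sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(booking))
```
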
An ecommerce business is migrating its main application database to Amazon Aurora MySQL. The firm is now performing OLTP stress testing with concurrent database connections. During the first round of testing, a database professional observed slow performance for several specific write operations.

Examining the Amazon CloudWatch metrics for the Aurora DB cluster revealed a CPU utilization of 90%.

Which actions should the database professional take to most effectively determine the root cause of the high CPU usage and slow performance? (Select two.)

A.
Enable Enhanced Monitoring at less than 30 seconds of granularity to review the operating system metrics before the next round of tests.
B.
Review the VolumeBytesUsed metric in CloudWatch to see if there is a spike in write I/O.
C.
Review Amazon RDS Performance Insights to identify the top SQL statements and wait events.
D.
Review Amazon RDS API calls in AWS CloudTrail to identify long-running queries.
E.
Enable Advanced Auditing to log QUERY events in Amazon CloudWatch before the next round of tests.
Suggested answer: A, C

Explanation:


https://aws.amazon.com/premiumsupport/knowledge-center/rds-instance-high-cpu/

https://aws.amazon.com/premiumsupport/knowledge-center/rds-mysql-slow-query/
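A minimal boto3 sketch of both diagnostic steps; the instance identifier, monitoring role ARN, and Performance Insights resource identifier are placeholders:

```python
from datetime import datetime, timedelta

import boto3

rds = boto3.client("rds")
pi = boto3.client("pi")

# Enable Enhanced Monitoring at 15-second granularity (placeholder role ARN).
rds.modify_db_instance(
    DBInstanceIdentifier="aurora-mysql-writer",
    MonitoringInterval=15,
    MonitoringRoleArn="arn:aws:iam::123456789012:role/rds-monitoring-role",
    ApplyImmediately=True,
)

# Ask Performance Insights for the top SQL statements by database load
# over the last hour (placeholder DbiResourceId).
now = datetime.utcnow()
top_sql = pi.describe_dimension_keys(
    ServiceType="RDS",
    Identifier="db-ABCDEFGHIJKLMNOP",
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Metric="db.load.avg",
    GroupBy={"Group": "db.sql"},
)
```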

A financial organization must ensure that the most recent 90 days of MySQL database backups are accessible. All MySQL databases are hosted on Amazon RDS for MySQL DB instances. A database specialist must create a solution that satisfies the backup retention requirement with the least development effort.

Which strategy should the database specialist take?

A.
Use AWS Backup to build a backup plan for the required retention period. Assign the DB instances to the backup plan.
B.
Modify the DB instances to enable the automated backup option. Select the required backup retention period.
C.
Automate a daily cron job on an Amazon EC2 instance to create MySQL dumps, transfer to Amazon S3, and implement an S3 Lifecycle policy to meet the retention requirement.
D.
Use AWS Lambda to schedule a daily manual snapshot of the DB instances. Delete snapshots that exceed the retention requirement.
Suggested answer: A

Explanation:

RDS automated backups can be retained for a maximum of 35 days, so the built-in retention setting cannot meet a 90-day requirement. AWS Backup supports longer retention periods with minimal development effort.

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html
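A minimal boto3 sketch of such a backup plan; the plan, rule, and vault names and the schedule are placeholders:

```python
import boto3

backup = boto3.client("backup")

# Daily backups of the assigned DB instances, retained for 90 days.
backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "rds-mysql-90-day-retention",
        "Rules": [
            {
                "RuleName": "daily-90-day",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
                "Lifecycle": {"DeleteAfterDays": 90},
            }
        ],
    }
)
```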

A business is operating an on-premises application that is divided into three tiers: web, application, and MySQL database. The database is predominantly accessed during business hours, with occasional bursts of activity throughout the day.

As part of the company's shift to AWS, a database expert wants to increase the availability and minimize the cost of the MySQL database tier.

Which MySQL database choice satisfies these criteria?

A.
Amazon RDS for MySQL with Multi-AZ
B.
Amazon Aurora Serverless MySQL cluster
C.
Amazon Aurora MySQL cluster
D.
Amazon RDS for MySQL with read replica
Suggested answer: B

Explanation:


Amazon Aurora Serverless v1 is a simple, cost-effective option for infrequent, intermittent, or unpredictable workloads. https://aws.amazon.com/rds/aurora/serverless/
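A minimal boto3 sketch of an Aurora Serverless v1 MySQL cluster; the identifier, credentials, and capacity bounds are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Scales between the capacity bounds during business-hours bursts and can
# pause during idle periods to minimize cost.
rds.create_db_cluster(
    DBClusterIdentifier="app-mysql-serverless",
    Engine="aurora-mysql",
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="CHANGE_ME",  # placeholder; store real credentials in Secrets Manager
    ScalingConfiguration={
        "MinCapacity": 1,
        "MaxCapacity": 16,
        "AutoPause": True,
        "SecondsUntilAutoPause": 300,
    },
)
```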

A stock market analysis firm maintains two offices: one in the us-east-1 Region and another in the eu-west-2 Region. The business wants to build an AWS database solution capable of providing rapid and accurate updates.

Dashboards with advanced analytical queries are used to present data in the eu-west-2 office.

Because the company will use these dashboards to make purchasing decisions, the dashboards must obtain application data in less than a second.

Which solution satisfies these criteria and gives the MOST CURRENT dashboard?

A.
Deploy an Amazon RDS DB instance in us-east-1 with a read replica instance in eu-west-2. Create an Amazon ElastiCache cluster in eu-west-2 to cache data from the read replica to generate the dashboards.
B.
Use an Amazon DynamoDB global table in us-east-1 with replication into eu-west-2. Use multi-active replication to ensure that updates are quickly propagated to eu-west-2.
C.
Use an Amazon Aurora global database. Deploy the primary DB cluster in us-east-1. Deploy the secondary DB cluster in eu-west-2. Configure the dashboard application to read from the secondary cluster.
D.
Deploy an Amazon RDS for MySQL DB instance in us-east-1 with a read replica instance in eu-west-2. Configure the dashboard application to read from the read replica.
Suggested answer: C

Explanation:


Amazon Aurora global databases span multiple AWS Regions, enabling low latency global reads and providing fast recovery from the rare outage that might affect an entire AWS Region. An Aurora global database has a primary DB cluster in one Region, and up to five secondary DB clusters in different Regions. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html
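A minimal boto3 sketch of the global database layout; all identifiers and credentials are placeholders:

```python
import boto3

rds_use1 = boto3.client("rds", region_name="us-east-1")
rds_euw2 = boto3.client("rds", region_name="eu-west-2")

# Global database container, then the primary cluster in us-east-1.
rds_use1.create_global_cluster(
    GlobalClusterIdentifier="stock-analysis-global",
    Engine="aurora-mysql",
)
rds_use1.create_db_cluster(
    DBClusterIdentifier="stock-analysis-primary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="stock-analysis-global",
    MasterUsername="admin",
    MasterUserPassword="CHANGE_ME",  # placeholder
)

# Read-only secondary cluster in eu-west-2; the dashboard reads from here.
rds_euw2.create_db_cluster(
    DBClusterIdentifier="stock-analysis-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="stock-analysis-global",
)
```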

A large automotive manufacturer is migrating a mission-critical finance application's database to Amazon DynamoDB. According to the company's risk and compliance policy, every update to the database must be recorded as a log entry for auditing purposes. The system anticipates about 500,000 log entries each minute. Log entries should be stored in Apache Parquet files in batches of at least 100,000 records per file.

How can a database professional meet these requirements while using DynamoDB?

A.
Enable Amazon DynamoDB Streams on the table. Create an AWS Lambda function triggered by the stream. Write the log entries to an Amazon S3 object.
B.
Create a backup plan in AWS Backup to back up the DynamoDB table once a day. Create an AWS Lambda function that restores the backup in another table and compares both tables for changes. Generate the log entries and write them to an Amazon S3 object.
C.
Enable AWS CloudTrail logs on the table. Create an AWS Lambda function that reads the log files once an hour and filters DynamoDB API actions. Write the filtered log files to Amazon S3.
D.
Enable Amazon DynamoDB Streams on the table. Create an AWS Lambda function triggered by the stream. Write the log entries to an Amazon Kinesis Data Firehose delivery stream with buffering and Amazon S3 as the destination.
Suggested answer: D

Explanation:

Kinesis Data Firehose buffers incoming records by size or time before delivering them to Amazon S3 and supports record format conversion to Apache Parquet, which satisfies the requirement to write batches of at least 100,000 log entries per file. Writing each stream record to its own S3 object, as in option A, would not meet the batching requirement.

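A minimal sketch of the stream-triggered Lambda function; the delivery stream name is a placeholder, and the Firehose stream is assumed to be configured with S3 as the destination and Parquet record format conversion:

```python
import json
import boto3

firehose = boto3.client("firehose")
STREAM_NAME = "dynamodb-audit-log"  # placeholder delivery stream

def handler(event, context):
    """Triggered by the DynamoDB stream; forwards change records to Firehose."""
    records = [
        {
            "Data": (json.dumps({
                "event": r["eventName"],
                "keys": r["dynamodb"]["Keys"],
                "new_image": r["dynamodb"].get("NewImage"),
            }) + "\n").encode("utf-8")
        }
        for r in event["Records"]
    ]
    # Firehose buffers these records by size/time, then writes Parquet batches to S3.
    # put_record_batch accepts up to 500 records per call, which covers a typical
    # stream batch; larger batches would need chunking.
    firehose.put_record_batch(DeliveryStreamName=STREAM_NAME, Records=records)
```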