
Amazon DBS-C01 Practice Test - Questions Answers, Page 6


A global digital advertising company captures browsing metadata to contextually display relevant images, pages, and links to targeted users. A single page load can generate multiple events that need to be stored individually. The maximum size of an event is 200 KB and the average size is 10 KB. Each page load must query the user’s browsing history to provide targeting recommendations. The advertising company expects over 1 billion page visits per day from users in the United States, Europe, Hong Kong, and India. The structure of the metadata varies depending on the event. Additionally, the browsing metadata must be written and read with very low latency to ensure a good viewing experience for the users.

Which database solution meets these requirements?

A. Amazon DocumentDB
B. Amazon RDS Multi-AZ deployment
C. Amazon DynamoDB global table
D. Amazon Aurora Global Database
Suggested answer: C

Explanation:


Reference:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GlobalTables.html
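A global table is an ordinary DynamoDB table with streams enabled plus replicas in each Region where users live. The sketch below shows the request parameters such a setup might use; the table name, key names, and Region list are hypothetical, and a real deployment would pass these dicts to boto3's `dynamodb.create_table()` and `dynamodb.update_table()`.

```python
# Sketch: parameters for a DynamoDB table intended to become a global
# table (version 2019.11.21). All identifiers are hypothetical.

create_table_params = {
    "TableName": "browsing_events",
    # user_id groups a user's browsing history; event_ts orders the
    # events within it, so one Query returns the whole history.
    "KeySchema": [
        {"AttributeName": "user_id", "KeyType": "HASH"},
        {"AttributeName": "event_ts", "KeyType": "RANGE"},
    ],
    "AttributeDefinitions": [
        {"AttributeName": "user_id", "AttributeType": "S"},
        {"AttributeName": "event_ts", "AttributeType": "S"},
    ],
    "BillingMode": "PAY_PER_REQUEST",
    # Streams must be enabled before replicas can be added.
    "StreamSpecification": {
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
}

# Add one replica Region near each user population (US Region is the base table).
add_replicas_params = {
    "TableName": "browsing_events",
    "ReplicaUpdates": [
        {"Create": {"RegionName": region}}
        for region in ["eu-west-1", "ap-east-1", "ap-south-1"]
    ],
}
```

Each item stays well under DynamoDB's 400 KB item limit (max event 200 KB), and reads/writes are served locally in every replica Region.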

A Database Specialist modified an existing parameter group currently associated with a production Amazon RDS for SQL Server Multi-AZ DB instance. The change is associated with a static parameter type, which controls the number of user connections allowed on the most critical RDS SQL Server DB instance for the company. This change has been approved for a specific maintenance window to help minimize the impact on users. How should the Database Specialist apply the parameter group change for the DB instance?

A. Select the option to apply the change immediately
B. Allow the preconfigured RDS maintenance window for the given DB instance to control when the change is applied
C. Apply the change manually by rebooting the DB instance during the approved maintenance window
D. Reboot the secondary Multi-AZ DB instance
Suggested answer: C

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html#USER_WorkingWithParamGroups.Modifying
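A static parameter change is always staged with `pending-reboot` and only takes effect after the instance restarts, which is why the specialist reboots during the approved window. A minimal sketch of the two calls involved, with hypothetical group, parameter, and instance names (real calls would be `rds.modify_db_parameter_group(**modify_params)` followed by `rds.reboot_db_instance(**reboot_params)` during the window):

```python
# Sketch: staging a static parameter, then applying it with a reboot.
# Identifiers are hypothetical.

modify_params = {
    "DBParameterGroupName": "sqlserver-prod-params",
    "Parameters": [{
        "ParameterName": "user connections",
        "ParameterValue": "500",
        # Static parameters only accept pending-reboot; they take
        # effect when the instance is next restarted.
        "ApplyMethod": "pending-reboot",
    }],
}

# Issued manually during the approved maintenance window:
reboot_params = {"DBInstanceIdentifier": "prod-sqlserver"}
```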

A Database Specialist is designing a new database infrastructure for a ride hailing application. The application data includes a ride tracking system that stores GPS coordinates for all rides. Real-time statistics and metadata lookups must be performed with high throughput and microsecond latency.

The database should be fault tolerant with minimal operational overhead and development effort.

Which solution meets these requirements in the MOST efficient way?

A. Use Amazon RDS for MySQL as the database and use Amazon ElastiCache
B. Use Amazon DynamoDB as the database and use DynamoDB Accelerator
C. Use Amazon Aurora MySQL as the database and use Aurora’s buffer cache
D. Use Amazon DynamoDB as the database and use Amazon API Gateway
Suggested answer: B

Explanation:


https://aws.amazon.com/dynamodb/dax/ "Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds – even at millions of requests per second."
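Placing DAX in front of the table is a single managed-cluster creation; the application then points its DynamoDB client at the DAX endpoint instead of the service endpoint. A sketch of the cluster parameters, with hypothetical names, node type, and role ARN (a real call would be `dax.create_cluster(**dax_cluster_params)` via boto3):

```python
# Sketch: a DAX cluster in front of the ride-tracking table.
# All identifiers are hypothetical.
dax_cluster_params = {
    "ClusterName": "rides-dax",
    "NodeType": "dax.r5.large",
    # Three nodes give fault tolerance: reads survive a node failure.
    "ReplicationFactor": 3,
    "IamRoleArn": "arn:aws:iam::123456789012:role/DAXServiceRole",
}
```

No application rewrite is needed beyond the endpoint swap, which is why this option carries the least operational and development overhead.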

A company is using an Amazon Aurora PostgreSQL DB cluster with an xlarge primary DB instance and two large Aurora Replicas for high availability and read-only workload scaling. A failover event occurs and application performance is poor for several minutes. During this time, application servers in all Availability Zones are healthy and responding normally.

What should the company do to eliminate this application performance issue?

A. Configure both of the Aurora Replicas to the same instance class as the primary DB instance. Enable cache coherence on the DB cluster, set the primary DB instance failover priority to tier-0, and assign a failover priority of tier-1 to the replicas.
B. Deploy an AWS Lambda function that calls the DescribeDBInstances action to establish which instance has failed, and then use the PromoteReadReplica operation to promote one Aurora Replica to be the primary DB instance. Configure an Amazon RDS event subscription to send a notification to an Amazon SNS topic to which the Lambda function is subscribed.
C. Configure one Aurora Replica to have the same instance class as the primary DB instance. Implement Aurora PostgreSQL DB cluster cache management. Set the failover priority to tier-0 for the primary DB instance and one replica with the same instance class. Set the failover priority to tier-1 for the other replicas.
D. Configure both Aurora Replicas to have the same instance class as the primary DB instance. Implement Aurora PostgreSQL DB cluster cache management. Set the failover priority to tier-0 for the primary DB instance and to tier-1 for the replicas.
Suggested answer: C

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.cluster-cache-mgmt.html

https://aws.amazon.com/blogs/database/introduction-to-aurora-postgresql-cluster-cache-management/

"You can customize the order in which your Aurora Replicas are promoted to the primary instance after a failure by assigning each replica a priority. Priorities range from 0 for the first priority to 15 for the last priority. If the primary instance fails, Amazon RDS promotes the Aurora Replica with the highest priority to the new primary instance. You can modify the priority of an Aurora Replica at any time. Modifying the priority doesn't trigger a failover. More than one Aurora Replica can share the same priority, resulting in promotion tiers. If two or more Aurora Replicas share the same priority, then Amazon RDS promotes the replica that is largest in size. If two or more Aurora Replicas share the same priority and size, then Amazon RDS promotes an arbitrary replica in the same promotion tier."

"Amazon Aurora with PostgreSQL compatibility now supports cluster cache management, providing a faster path to full performance if there's a failover. With cluster cache management, you designate a specific reader DB instance in your Aurora PostgreSQL cluster as the failover target. Cluster cache management keeps the data in the designated reader's cache synchronized with the data in the read-write instance's cache. If a failover occurs, the designated reader is promoted to be the new read-write instance, and workloads benefit immediately from the data in its cache."
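Cluster cache management is turned on with the `apg_ccm_enabled` cluster parameter, and the failover target is chosen via promotion tiers on each instance. A sketch of the parameter payloads, with hypothetical identifiers (real calls would be `rds.modify_db_cluster_parameter_group(**ccm_params)` and one `rds.modify_db_instance(**p)` per instance):

```python
# Sketch: enable cluster cache management and set promotion tiers.
# All identifiers are hypothetical.

ccm_params = {
    "DBClusterParameterGroupName": "aurora-pg-prod",
    "Parameters": [{
        "ParameterName": "apg_ccm_enabled",
        "ParameterValue": "on",
        "ApplyMethod": "immediate",  # dynamic cluster-level parameter
    }],
}

# The primary and the one same-size replica share tier 0, so the
# warm-cache replica is the failover target; smaller replicas sit in tier 1.
tier_params = [
    {"DBInstanceIdentifier": "aurora-primary",    "PromotionTier": 0, "ApplyImmediately": True},
    {"DBInstanceIdentifier": "aurora-replica-xl", "PromotionTier": 0, "ApplyImmediately": True},
    {"DBInstanceIdentifier": "aurora-replica-1",  "PromotionTier": 1, "ApplyImmediately": True},
]
```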

A company has a database monitoring solution that uses Amazon CloudWatch for its Amazon RDS for SQL Server environment. The cause of a recent spike in CPU utilization was not determined using the standard metrics that were collected. The CPU spike caused the application to perform poorly, impacting users. A Database Specialist needs to determine what caused the CPU spike.

Which combination of steps should be taken to provide more visibility into the processes and queries running during an increase in CPU load? (Choose two.)

A. Enable Amazon CloudWatch Events and view the incoming T-SQL statements causing the CPU to spike.
B. Enable Enhanced Monitoring metrics to view CPU utilization at the RDS SQL Server DB instance level.
C. Implement a caching layer to help with repeated queries on the RDS SQL Server DB instance.
D. Use Amazon QuickSight to view the SQL statement being run.
E. Enable Amazon RDS Performance Insights to view the database load and filter the load by waits, SQL statements, hosts, or users.
Suggested answer: B, E

Explanation:


https://aws.amazon.com/premiumsupport/knowledge-center/rds-instance-high-cpu/

"Several factors can cause an increase in CPU utilization. For example, user-initiated heavy workloads, analytic queries, prolonged deadlocks and lock waits, multiple concurrent transactions, long-running transactions, or other processes that utilize CPU resources. First, you can identify the source of the CPU usage by: using Enhanced Monitoring, or using Performance Insights."
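Both features can be enabled with a single instance modification. A sketch of the request parameters, with hypothetical instance and role names (a real call would be `rds.modify_db_instance(**monitoring_params)`, and Enhanced Monitoring requires an IAM role that can publish to CloudWatch Logs):

```python
# Sketch: enabling Enhanced Monitoring and Performance Insights together.
# Identifiers and ARN are hypothetical.
monitoring_params = {
    "DBInstanceIdentifier": "prod-sqlserver",
    # Enhanced Monitoring: OS-level metrics at 1-second granularity.
    "MonitoringInterval": 1,
    "MonitoringRoleArn": "arn:aws:iam::123456789012:role/rds-monitoring-role",
    # Performance Insights: DB load sliced by waits, SQL, hosts, users.
    "EnablePerformanceInsights": True,
    "PerformanceInsightsRetentionPeriod": 7,  # days (free tier)
}
```

Enhanced Monitoring shows which OS process is burning CPU; Performance Insights shows which SQL statements and wait events drive the database load, so together they answer "what caused the spike".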

A company is using Amazon Aurora with Aurora Replicas for read-only workload scaling. A Database Specialist needs to split up two read-only applications so each application always connects to a dedicated replica. The Database Specialist wants to implement load balancing and high availability for the read-only applications.

Which solution meets these requirements?

A. Use a specific instance endpoint for each replica and add the instance endpoint to each read-only application connection string.
B. Use reader endpoints for both the read-only workload applications.
C. Use a reader endpoint for one read-only application and use an instance endpoint for the other read-only application.
D. Use custom endpoints for the two read-only applications.
Suggested answer: D

Explanation:


https://aws.amazon.com/about-aws/whats-new/2018/11/amazon-aurora-simplifies-workload-management-with-custom-endpoints/
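Each application gets its own custom endpoint whose static membership pins it to one replica; unlike a raw instance endpoint, Aurora manages the endpoint's DNS and availability. A sketch of the two endpoint definitions, with hypothetical cluster and instance names (real calls: one `rds.create_db_cluster_endpoint(**p)` per entry):

```python
# Sketch: one custom READER endpoint per read-only application, each
# pinned to a dedicated replica. All identifiers are hypothetical.
endpoint_params = [
    {
        "DBClusterIdentifier": "aurora-prod",
        "DBClusterEndpointIdentifier": "app1-reader",
        "EndpointType": "READER",
        "StaticMembers": ["aurora-replica-1"],
    },
    {
        "DBClusterIdentifier": "aurora-prod",
        "DBClusterEndpointIdentifier": "app2-reader",
        "EndpointType": "READER",
        "StaticMembers": ["aurora-replica-2"],
    },
]
```

More replicas can later be added to either endpoint's member list to load-balance a single application across several instances.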

An online gaming company is planning to launch a new game with Amazon DynamoDB as its data store. The database should be designed to support the following use cases:

Update scores in real time whenever a player is playing the game. Retrieve a player’s score details for a specific game session.

A Database Specialist decides to implement a DynamoDB table. Each player has a unique user_id and each game has a unique game_id.

Which choice of keys is recommended for the DynamoDB table?

A. Create a global secondary index with game_id as the partition key
B. Create a global secondary index with user_id as the partition key
C. Create a composite primary key with game_id as the partition key and user_id as the sort key
D. Create a composite primary key with user_id as the partition key and game_id as the sort key
Suggested answer: D

Explanation:


https://aws.amazon.com/blogs/database/amazon-dynamodb-gaming-use-cases-and-design-patterns/

"EA uses the user ID as the partition key and primary key (a 1:1 modeling pattern)."

https://aws.amazon.com/blogs/database/choosing-the-right-dynamodb-partition-key/

"Partition key and sort key: Referred to as a composite primary key, this type of key is composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key."
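With `user_id` as the partition key and `game_id` as the sort key, both use cases are single-item operations on one partition. A sketch with hypothetical table and attribute names (real calls via boto3's low-level `dynamodb` client):

```python
# Sketch: composite primary key and the two access patterns.
# All identifiers are hypothetical.

key_schema = [
    {"AttributeName": "user_id", "KeyType": "HASH"},   # partition key
    {"AttributeName": "game_id", "KeyType": "RANGE"},  # sort key
]

# Use case 1 - real-time score update for one player in one session:
update_params = {
    "TableName": "player_scores",
    "Key": {"user_id": {"S": "user-123"}, "game_id": {"S": "game-456"}},
    "UpdateExpression": "SET score = :s",
    "ExpressionAttributeValues": {":s": {"N": "9800"}},
}

# Use case 2 - retrieve that player's score for the same session:
get_item_params = {
    "TableName": "player_scores",
    "Key": {"user_id": {"S": "user-123"}, "game_id": {"S": "game-456"}},
}
```

Because writes are spread across the many `user_id` values rather than concentrated on a handful of `game_id` values, this key choice also avoids hot partitions.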

A Database Specialist migrated an existing production MySQL database from on-premises to an Amazon RDS for MySQL DB instance. However, after the migration, the database needed to be encrypted at rest using AWS KMS. Due to the size of the database, reloading the data into an encrypted database would be too time-consuming, so it is not an option.

How should the Database Specialist satisfy this new requirement?

A. Create a snapshot of the unencrypted RDS DB instance. Create an encrypted copy of the unencrypted snapshot. Restore the encrypted snapshot copy.
B. Modify the RDS DB instance. Enable the AWS KMS encryption option that leverages the AWS CLI.
C. Restore an unencrypted snapshot into a MySQL RDS DB instance that is encrypted.
D. Create an encrypted read replica of the RDS DB instance. Promote it to be the master.
Suggested answer: A

Explanation:


"However, because you can encrypt a copy of an unencrypted DB snapshot, you can effectively add encryption to an unencrypted DB instance. That is, you can create a snapshot of your DB instance, and then create an encrypted copy of that snapshot. You can then restore a DB instance from the encrypted snapshot, and thus you have an encrypted copy of your original DB instance. For more information, see Copying a Snapshot."

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
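The flow is three API calls in sequence: snapshot, encrypted copy, restore. A sketch of the parameters, with hypothetical identifiers and KMS key alias (real calls: `rds.create_db_snapshot`, `rds.copy_db_snapshot`, `rds.restore_db_instance_from_db_snapshot`, waiting for each step to complete before the next):

```python
# Sketch: encrypt an existing RDS MySQL instance via snapshot copy.
# All identifiers and the KMS alias are hypothetical.
steps = [
    ("create_db_snapshot", {
        "DBInstanceIdentifier": "mysql-prod",
        "DBSnapshotIdentifier": "mysql-prod-unencrypted",
    }),
    ("copy_db_snapshot", {
        "SourceDBSnapshotIdentifier": "mysql-prod-unencrypted",
        "TargetDBSnapshotIdentifier": "mysql-prod-encrypted",
        # Encryption is applied on the copy, not the original snapshot.
        "KmsKeyId": "alias/rds-key",
    }),
    ("restore_db_instance_from_db_snapshot", {
        "DBInstanceIdentifier": "mysql-prod-encrypted",
        "DBSnapshotIdentifier": "mysql-prod-encrypted",
    }),
]
```

After the restore completes, the application is repointed at the new encrypted instance and the unencrypted original can be retired.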

A Database Specialist is planning to create a read replica of an existing Amazon RDS for MySQL Multi-AZ DB instance. When using the AWS Management Console to conduct this task, the Database Specialist discovers that the source RDS DB instance does not appear in the read replica source selection box, so the read replica cannot be created.

What is the most likely reason for this?

A. The source DB instance has to be converted to Single-AZ first to create a read replica from it.
B. Enhanced Monitoring is not enabled on the source DB instance.
C. The minor MySQL version in the source DB instance does not support read replicas.
D. Automated backups are not enabled on the source DB instance.
Suggested answer: D

Explanation:


"Your source DB instance must have backup retention enabled."

https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstanceReadReplica.html

Reference: https://aws.amazon.com/rds/features/read-replicas/
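Setting a non-zero backup retention period on the source enables automated backups, after which replica creation succeeds. A sketch of the two requests, with hypothetical instance names (real calls: `rds.modify_db_instance(**enable_backups_params)`, then `rds.create_db_instance_read_replica(**create_replica_params)` once the change is applied):

```python
# Sketch: enable automated backups (the read-replica prerequisite),
# then create the replica. Identifiers are hypothetical.

enable_backups_params = {
    "DBInstanceIdentifier": "mysql-prod",
    # Must be greater than 0 before read replicas can be created.
    "BackupRetentionPeriod": 7,
    "ApplyImmediately": True,
}

create_replica_params = {
    "DBInstanceIdentifier": "mysql-prod-replica",
    "SourceDBInstanceIdentifier": "mysql-prod",
}
```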

A Database Specialist has migrated an on-premises Oracle database to Amazon Aurora PostgreSQL.

The schema and the data have been migrated successfully. The on-premises database server was also being used to run database maintenance cron jobs written in Python to perform tasks including data purging and generating data exports. The logs for these jobs show that, most of the time, the jobs completed within 5 minutes, but a few jobs took up to 10 minutes to complete. These maintenance jobs need to be set up for Aurora PostgreSQL.

How can the Database Specialist schedule these jobs so the setup requires minimal maintenance and provides high availability?

A. Create cron jobs on an Amazon EC2 instance to run the maintenance jobs following the required schedule.
B. Connect to the Aurora host and create cron jobs to run the maintenance jobs following the required schedule.
C. Create AWS Lambda functions to run the maintenance jobs and schedule them with Amazon CloudWatch Events.
D. Create the maintenance job using the Amazon CloudWatch job scheduling plugin.
Suggested answer: C

Explanation:


https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-Scheduled-Rule.html

https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/schedule-jobs-for-amazon-rds-and-aurora-postgresql-using-lambda-and-secrets-manager.html

"A job for data extraction or a job for data purging can easily be scheduled using cron. For these jobs, database credentials are typically either hard-coded or stored in a properties file. However, when you migrate to Amazon Relational Database Service (Amazon RDS) or Amazon Aurora PostgreSQL, you lose the ability to log in to the host instance to schedule cron jobs. This pattern describes how to use AWS Lambda and AWS Secrets Manager to schedule jobs for Amazon RDS and Aurora PostgreSQL databases after migration."

https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/RunLambdaSchedule.html
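The existing Python cron jobs fit Lambda well: even the slowest runs (10 minutes) are under Lambda's 15-minute limit. A minimal sketch of a handler plus the CloudWatch Events schedule rule, with hypothetical function, rule, and secret names; a real handler would fetch credentials from Secrets Manager and connect with a PostgreSQL driver, and the rule would be registered via `events.put_rule(**rule_params)` with the function as its target:

```python
# Sketch: a purge-job Lambda and its schedule rule. Names are hypothetical.

def lambda_handler(event, context):
    # In a real function: read DB credentials from Secrets Manager,
    # connect to Aurora PostgreSQL, and run the purge/export SQL here.
    return {"status": "purge complete"}

# Parameters for events.put_rule (target added with events.put_targets):
rule_params = {
    "Name": "nightly-data-purge",
    "ScheduleExpression": "cron(0 3 * * ? *)",  # 03:00 UTC daily
    "State": "ENABLED",
}
```

Because Lambda and CloudWatch Events are both managed and multi-AZ, this setup needs no patched host and keeps running through an Availability Zone failure, meeting the minimal-maintenance and high-availability requirements.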

Total 321 questions