ExamGecko

Amazon DBS-C01 Practice Test - Questions Answers, Page 10


A database specialist at a large multi-national financial company is in charge of designing the disaster recovery strategy for a highly available application that is in development. The application uses an Amazon DynamoDB table as its data store. The application requires a recovery time objective (RTO) of 1 minute and a recovery point objective (RPO) of 2 minutes.

Which operationally efficient disaster recovery strategy should the database specialist recommend for the DynamoDB table?

A.
Create a DynamoDB stream that is processed by an AWS Lambda function that copies the data to a DynamoDB table in another Region.
B.
Use a DynamoDB global table replica in another Region. Enable point-in-time recovery for both tables.
C.
Use a DynamoDB Accelerator table in another Region. Enable point-in-time recovery for the table.
D.
Create an AWS Backup plan and assign the DynamoDB table as a resource.
Suggested answer: B

Explanation:

DynamoDB global tables replicate writes to the other Region, typically within a second, which easily meets the 2-minute RPO. Because the replica is always active, recovery only requires redirecting the application to the other Region, which meets the 1-minute RTO. Point-in-time recovery additionally protects both tables against accidental writes or deletes. DynamoDB Accelerator (DAX) is an in-memory cache, not a standby data store, so option C does not provide disaster recovery, and the Lambda-based copy in option A is less operationally efficient than the managed global tables feature.

A small startup company is looking to migrate a 4 TB on-premises MySQL database to AWS using an Amazon RDS for MySQL DB instance.

Which strategy would allow for a successful migration with the LEAST amount of downtime?

A.
Deploy a new RDS for MySQL DB instance and configure it for access from the on-premises data center. Use the mysqldump utility to create an initial snapshot from the on-premises MySQL server, and copy it to an Amazon S3 bucket. Import the snapshot into the DB instance utilizing the MySQL utilities running on an Amazon EC2 instance. Immediately point the application to the DB instance.
B.
Deploy a new Amazon EC2 instance, install the MySQL software on the EC2 instance, and configure networking for access from the on-premises data center. Use the mysqldump utility to create a snapshot of the on-premises MySQL server. Copy the snapshot into the EC2 instance and restore it into the EC2 MySQL instance. Use AWS DMS to migrate data into a new RDS for MySQL DB instance. Point the application to the DB instance.
C.
Deploy a new Amazon EC2 instance, install the MySQL software on the EC2 instance, and configure networking for access from the on-premises data center. Use the mysqldump utility to create a snapshot of the on-premises MySQL server. Copy the snapshot into an Amazon S3 bucket and import the snapshot into a new RDS for MySQL DB instance using the MySQL utilities running on an EC2 instance. Point the application to the DB instance.
D.
Deploy a new RDS for MySQL DB instance and configure it for access from the on-premises data center. Use the mysqldump utility to create an initial snapshot from the on-premises MySQL server, and copy it to an Amazon S3 bucket. Import the snapshot into the DB instance using the MySQL utilities running on an Amazon EC2 instance. Establish replication into the new DB instance using MySQL replication. Stop application access to the on-premises MySQL server and let the remaining transactions replicate over. Point the application to the DB instance.
Suggested answer: D

Explanation:

Option D minimizes downtime: the bulk of the 4 TB is loaded from the initial mysqldump snapshot, MySQL binary log replication then keeps the RDS DB instance in sync with ongoing changes, and the cutover window is only as long as it takes the final transactions to replicate and the connection string to change. Options A and C have no ongoing replication, so any writes made after the dump is taken would be lost, and option B adds an unnecessary intermediate EC2 MySQL server before the migration.


A software development company is using Amazon Aurora MySQL DB clusters for several use cases, including development and reporting. These use cases place unpredictable and varying demands on the Aurora DB clusters, and can cause momentary spikes in latency. System users run ad-hoc queries sporadically throughout the week. Cost is a primary concern for the company, and a solution that does not require significant rework is needed.

Which solution meets these requirements?

A.
Create new Aurora Serverless DB clusters for development and reporting, then migrate to these new DB clusters.
B.
Upgrade one of the DB clusters to a larger size, and consolidate development and reporting activities on this larger DB cluster.
C.
Use existing DB clusters and stop/start the databases on a routine basis using scheduling tools.
D.
Change the DB clusters to the burstable instance family.
Suggested answer: A

Explanation:

Aurora Serverless automatically scales database capacity up and down to match intermittent, unpredictable workloads and bills only for the capacity consumed, which addresses both the momentary latency spikes and the cost concern without significant rework.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.DBInstanceClass.html

A database specialist is building a system that uses a static vendor dataset of postal codes and related territory information that is less than 1 GB in size. The dataset is loaded into the application’s cache at start up. The company needs to store this data in a way that provides the lowest cost with a low application startup time.

Which approach will meet these requirements?

A.
Use an Amazon RDS DB instance. Shut down the instance once the data has been read.
B.
Use Amazon Aurora Serverless. Allow the service to spin resources up and down, as needed.
C.
Use Amazon DynamoDB in on-demand capacity mode.
D.
Use Amazon S3 and load the data from flat files.
Suggested answer: D

Explanation:


https://www.sumologic.com/insight/s3-cost-optimization/

For example, for a 1 GB file stored on Amazon S3, you are billed for 1 GB only, no matter how much total capacity you might otherwise have needed to provision.

In many other services, such as Amazon EC2, Amazon Elastic Block Store (Amazon EBS), and provisioned-capacity Amazon DynamoDB, you pay for provisioned capacity. With an Amazon EBS volume, for example, you pay for a 1 TB volume even if it only holds a 1 GB file. This makes S3 costs easier to manage than those of many other services: there is no risk of overprovisioning and no need to track disk utilization.
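The difference can be illustrated with rough arithmetic. The per-GB rates below are assumptions made up for this example, not current AWS prices:

```python
# Illustrative monthly cost comparison: pay-per-use (S3-style) billing
# vs provisioned-capacity (EBS-style) billing. Rates are assumptions.
S3_RATE_PER_GB = 0.023   # assumed $/GB-month: pay only for data stored
EBS_RATE_PER_GB = 0.08   # assumed $/GB-month: pay for the provisioned size

def pay_per_use_cost(stored_gb: float, rate: float = S3_RATE_PER_GB) -> float:
    """Billed only for the bytes actually stored."""
    return stored_gb * rate

def provisioned_cost(provisioned_gb: float, rate: float = EBS_RATE_PER_GB) -> float:
    """Billed for the full provisioned volume, however little is used."""
    return provisioned_gb * rate

# Storing the 1 GB postal-code dataset:
s3_cost = pay_per_use_cost(1)        # 1 GB billed
ebs_cost = provisioned_cost(1024)    # a 1 TB volume billed in full
print(f"S3-style:  ${s3_cost:.3f}/month")
print(f"EBS-style: ${ebs_cost:.2f}/month")
```

Under these assumed rates the provisioned volume costs orders of magnitude more for the same 1 GB of data, which is the point the explanation above makes.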

A database specialist needs to review and optimize an Amazon DynamoDB table that is experiencing performance issues. A thorough investigation by the database specialist reveals that the partition key is causing hot partitions, so a new partition key is created. The database specialist must effectively apply this new partition key to all existing and new data.

How can this solution be implemented?

A.
Use Amazon EMR to export the data from the current DynamoDB table to Amazon S3. Then use Amazon EMR again to import the data from Amazon S3 into a new DynamoDB table with the new partition key.
B.
Use AWS DMS to copy the data from the current DynamoDB table to Amazon S3. Then import the DynamoDB table to create a new DynamoDB table with the new partition key.
C.
Use the AWS CLI to update the DynamoDB table and modify the partition key.
D.
Use the AWS CLI to back up the DynamoDB table. Then use the restore-table-from-backup command and modify the partition key.
Suggested answer: A

Explanation:

A DynamoDB table's partition key cannot be changed after the table is created, which rules out options C and D. The data must instead be migrated into a new table, and Amazon EMR can export the existing table to Amazon S3 and re-import it into the new table with the new partition key at scale.

https://aws.amazon.com/premiumsupport/knowledge-center/back-up-dynamodb-s3/
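One common way to design the new partition key so it avoids hot partitions is write sharding: appending a suffix derived from another attribute so writes for a formerly hot key spread across several partitions. A minimal sketch, with illustrative names and an assumed shard count:

```python
import hashlib

NUM_SHARDS = 10  # assumed shard count; tune to the observed write throughput

def sharded_partition_key(hot_key: str, item_id: str,
                          num_shards: int = NUM_SHARDS) -> str:
    """Build the new partition key by appending a shard suffix derived
    from another item attribute, so writes that previously all hit one
    partition spread across num_shards partitions (write sharding)."""
    shard = int(hashlib.sha256(item_id.encode()).hexdigest(), 16) % num_shards
    return f"{hot_key}#{shard}"

# Items that previously all landed on partition key "94105" now spread out:
print(sorted({sharded_partition_key("94105", str(i)) for i in range(20)}))
```

Reads for the hot key then query all shard suffixes and merge the results; the suffix is deterministic, so a given item always maps to the same shard.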

A company is going through a security audit. The audit team has identified cleartext master user password in the AWS CloudFormation templates for Amazon RDS for MySQL DB instances. The audit team has flagged this as a security risk to the database team.

What should a database specialist do to mitigate this risk?

A.
Change all the databases to use AWS IAM for authentication and remove all the cleartext passwords in CloudFormation templates.
B.
Use an AWS Secrets Manager resource to generate a random password and reference the secret in the CloudFormation template.
C.
Remove the passwords from the CloudFormation templates so Amazon RDS prompts for the password when the database is being created.
D.
Remove the passwords from the CloudFormation template and store them in a separate file. Replace the passwords by running CloudFormation using a sed command.
Suggested answer: B

Explanation:


https://aws.amazon.com/blogs/infrastructure-and-automation/securing-passwords-in-aws-quickstarts-using-aws-secrets-manager/
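The pattern from the linked post can be sketched as a CloudFormation fragment. The resource names and engine settings below are illustrative, not taken from the source:

```yaml
Resources:
  # Secrets Manager generates the master password; it never appears in the template.
  DBSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      GenerateSecretString:
        SecretStringTemplate: '{"username": "admin"}'
        GenerateStringKey: password
        PasswordLength: 32
        ExcludeCharacters: '"@/\'

  MyDBInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.micro
      AllocatedStorage: '20'
      # Dynamic references resolve the secret at stack-operation time,
      # so the password is never stored in the template or in parameters.
      MasterUsername: !Sub '{{resolve:secretsmanager:${DBSecret}:SecretString:username}}'
      MasterUserPassword: !Sub '{{resolve:secretsmanager:${DBSecret}:SecretString:password}}'
```

The `{{resolve:secretsmanager:...}}` dynamic reference keeps the credential out of the template, the console, and `describe-stacks` output.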

A company’s database specialist disabled TLS on an Amazon DocumentDB cluster to perform benchmarking tests. A few days after this change was implemented, a database specialist trainee accidentally deleted multiple tables. The database specialist restored the database from available snapshots. An hour after restoring the cluster, the database specialist is still unable to connect to the new cluster endpoint.

What should the database specialist do to connect to the new, restored Amazon DocumentDB cluster?

A.
Change the restored cluster’s parameter group to the original cluster’s custom parameter group.
B.
Change the restored cluster’s parameter group to the Amazon DocumentDB default parameter group.
C.
Configure the interface VPC endpoint and associate the new Amazon DocumentDB cluster.
D.
Run the syncInstances command in AWS DataSync.
Suggested answer: A

Explanation:


A cluster restored from a snapshot is associated with the default cluster parameter group, in which TLS is enabled. The original cluster had TLS disabled through a custom parameter group, so clients configured for unencrypted connections can no longer reach the restored cluster. Reassociating the restored cluster with the original custom parameter group (and rebooting its instances) restores connectivity. Note that default parameter groups cannot be modified; to change engine settings such as TLS, you must create your own parameter group, and not every parameter can be changed.

A company runs a customer relationship management (CRM) system that is hosted on-premises with a MySQL database as the backend. A custom stored procedure is used to send email notifications to another system when data is inserted into a table. The company has noticed that the performance of the CRM system has decreased due to database reporting applications used by various teams. The company requires an AWS solution that would reduce maintenance, improve performance, and accommodate the email notification feature.

Which AWS solution meets these requirements?

A.
Use MySQL running on an Amazon EC2 instance with Auto Scaling to accommodate the reporting applications. Configure a stored procedure and an AWS Lambda function that uses Amazon SES to send email notifications to the other system.
B.
Use Amazon Aurora MySQL in a multi-master cluster to accommodate the reporting applications. Configure Amazon RDS event subscriptions to publish a message to an Amazon SNS topic and subscribe the other system's email address to the topic.
C.
Use MySQL running on an Amazon EC2 instance with a read replica to accommodate the reporting applications. Configure Amazon SES integration to send email notifications to the other system.
D.
Use Amazon Aurora MySQL with a read replica for the reporting applications. Configure a stored procedure and an AWS Lambda function to publish a message to an Amazon SNS topic. Subscribe the other system's email address to the topic.
Suggested answer: D

Explanation:


RDS event subscriptions cover instance-level events such as failover or backup, not table-level events like a row being inserted. See:

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_Events.Messages.html

An Aurora MySQL stored procedure can invoke an AWS Lambda function, which can then publish to the Amazon SNS topic:

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Lambda.html

A company needs to migrate Oracle Database Standard Edition running on an Amazon EC2 instance to an Amazon RDS for Oracle DB instance with Multi-AZ. The database supports an ecommerce website that runs continuously. The company can only provide a maintenance window of up to 5 minutes.

Which solution will meet these requirements?

A.
Configure Oracle Real Application Clusters (RAC) on the EC2 instance and the RDS DB instance. Update the connection string to point to the RAC cluster. Once the EC2 instance and RDS DB instance are in sync, fail over from Amazon EC2 to Amazon RDS.
B.
Export the Oracle database from the EC2 instance using Oracle Data Pump and perform an import into Amazon RDS. Stop the application for the entire process. When the import is complete, change the database connection string and then restart the application.
C.
Configure AWS DMS with the EC2 instance as the source and the RDS DB instance as the destination. Stop the application when the replication is in sync, change the database connection string, and then restart the application.
D.
Configure AWS DataSync with the EC2 instance as the source and the RDS DB instance as the destination. Stop the application when the replication is in sync, change the database connection string, and then restart the application.
Suggested answer: C

Explanation:

AWS DMS performs an initial full load and then ongoing change data capture from the Oracle database on EC2 into the RDS for Oracle DB instance, so the application only needs to stop for the brief cutover once replication is in sync, fitting within the 5-minute window. AWS DataSync transfers files, not databases, and Amazon RDS does not support Oracle RAC.

Reference: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Oracle.html

A company is using Amazon Aurora PostgreSQL for the backend of its application. The system users are complaining that the responses are slow. A database specialist has determined that the queries to Aurora take longer during peak times. With the Amazon RDS Performance Insights dashboard, the load in the chart for average active sessions is often above the line that denotes maximum CPU usage and the wait state shows that most wait events are IO:XactSync.

What should the company do to resolve these performance issues?

A.
Add an Aurora Replica to scale the read traffic.
B.
Scale up the DB instance class.
C.
Modify applications to commit transactions in batches.
D.
Modify applications to avoid conflicts by taking locks.
Suggested answer: C

Explanation:

IO:XactSync waits indicate that sessions are waiting for commits to be durably written to Aurora storage. A high rate of small, individually committed transactions is the usual cause, and batching multiple writes into a single commit reduces the number of synchronous commit waits.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Reference.html

https://blog.dbi-services.com/aws-aurora-xactsync-batch-commit/
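The effect of batching can be sketched with any transactional database. The example below uses SQLite purely for illustration; for Aurora PostgreSQL the same idea means grouping many small INSERTs under one COMMIT:

```python
import sqlite3

def insert_unbatched(conn, rows):
    """One transaction (and one synchronous commit wait) per row."""
    for r in rows:
        conn.execute("INSERT INTO events (payload) VALUES (?)", (r,))
        conn.commit()

def insert_batched(conn, rows, batch_size=100):
    """Group rows so each commit covers up to batch_size writes,
    cutting the number of commit waits by roughly batch_size."""
    for i in range(0, len(rows), batch_size):
        for r in rows[i:i + batch_size]:
            conn.execute("INSERT INTO events (payload) VALUES (?)", (r,))
        conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (payload TEXT)")
insert_batched(conn, [f"row-{i}" for i in range(250)])
print(conn.execute("SELECT COUNT(*) FROM events").fetchone()[0])  # → 250
```

With 250 rows, the unbatched version performs 250 commits while the batched version performs 3, which is exactly the reduction in IO:XactSync waits that option C targets.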

Total 321 questions