Amazon DBS-C01 Practice Test - Questions Answers, Page 11

A database specialist deployed an Amazon RDS DB instance in Dev-VPC1 used by their development team. Dev-VPC1 has a peering connection with Dev-VPC2, which belongs to a different development team in the same department. The networking team confirmed that the routing between the VPCs is correct; however, the database engineers in Dev-VPC2 are getting connection timeout errors when trying to connect to the database in Dev-VPC1.

What is likely causing the timeouts?

A. The database is deployed in a VPC that is in a different Region.
B. The database is deployed in a VPC that is in a different Availability Zone.
C. The database is deployed with misconfigured security groups.
D. The database is deployed with the wrong client connect timeout configuration.
Suggested answer: C

Explanation:


"A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IP addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region."

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.Scenarios.html
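
With routing confirmed, the usual culprit is a security group on the DB instance that does not allow inbound traffic from the peer VPC's CIDR range. A hedged boto3 sketch of the fix (the security group ID, CIDR block, and MySQL port 3306 are placeholder assumptions):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical values: substitute the security group attached to the RDS
# instance in Dev-VPC1 and the real Dev-VPC2 CIDR block; port 3306
# assumes a MySQL engine.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "IpRanges": [
                {
                    "CidrIp": "10.2.0.0/16",  # Dev-VPC2 CIDR block
                    "Description": "Allow DB access from peered Dev-VPC2",
                }
            ],
        }
    ],
)
```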

A company has a production environment running on Amazon RDS for SQL Server with an in-house web application as the front end. During the last application maintenance window, new functionality was added to the web application to enhance the reporting capabilities for management. Since the update, the application has been slow to respond to some reporting queries.

How should the company identify the source of the problem?

A. Install and configure Amazon CloudWatch Application Insights for Microsoft .NET and Microsoft SQL Server. Use a CloudWatch dashboard to identify the root cause.
B. Enable RDS Performance Insights and determine which query is creating the problem. Request changes to the query to address the problem.
C. Use AWS X-Ray deployed with Amazon RDS to track query system traces.
D. Create a support request and work with AWS Support to identify the source of the issue.
Suggested answer: B

Explanation:


Amazon RDS Performance Insights is a database performance tuning and monitoring feature that helps you quickly assess the load on your database, and determine when and where to take action.

Performance Insights allows non-experts to detect performance problems with an easy-to-understand dashboard that visualizes database load.

https://aws.amazon.com/rds/performance-insights/
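
Performance Insights can be turned on for an existing instance without downtime. A hedged boto3 sketch (the instance identifier is a placeholder assumption; 7 days is the free retention tier):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Hypothetical instance identifier; enabling Performance Insights does not
# restart the instance.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-sqlserver-instance",
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=7,
    ApplyImmediately=True,
)
```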

An electric utility company wants to store power plant sensor data in an Amazon DynamoDB table.

The utility company has over 100 power plants, and each power plant has over 200 sensors that send data every 2 seconds. The sensor data includes a timestamp with millisecond precision, a value, and a fault attribute if the sensor is malfunctioning. Power plants are identified by a globally unique identifier.

Sensors are identified by a unique identifier within each power plant. A database specialist needs to design the table to support an efficient method of finding all faulty sensors within a given power plant.

Which schema should the database specialist use when creating the DynamoDB table to achieve the fastest query time when looking for faulty sensors?

A. Use the plant identifier as the partition key and the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.
B. Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a local secondary index (LSI) on the fault attribute.
C. Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.
D. Use the plant identifier as the partition key and the sensor identifier as the sort key. Create a local secondary index (LSI) on the fault attribute.
Suggested answer: D

Explanation:


Using the plant identifier as the partition key and the sensor identifier as the sort key keeps all of a plant's sensors in one item collection. A local secondary index on the fault attribute acts as a sparse index, so a single query on the plant identifier returns only that plant's faulty sensors, along with the plant and sensor identifiers needed to act on them.
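
A hedged boto3 sketch of such a table (table, attribute, and index names are placeholder assumptions); because the Fault attribute exists only on malfunctioning sensors, the LSI is sparse and the query below returns just the faulty sensors:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Hypothetical names. An LSI must share the table's partition key (PlantId)
# and is defined at table creation time.
dynamodb.create_table(
    TableName="PlantSensors",
    AttributeDefinitions=[
        {"AttributeName": "PlantId", "AttributeType": "S"},
        {"AttributeName": "SensorId", "AttributeType": "S"},
        {"AttributeName": "Fault", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "PlantId", "KeyType": "HASH"},
        {"AttributeName": "SensorId", "KeyType": "RANGE"},
    ],
    LocalSecondaryIndexes=[
        {
            "IndexName": "FaultIndex",
            "KeySchema": [
                {"AttributeName": "PlantId", "KeyType": "HASH"},
                {"AttributeName": "Fault", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
    BillingMode="PAY_PER_REQUEST",
)
dynamodb.get_waiter("table_exists").wait(TableName="PlantSensors")

# Finding all faulty sensors in one plant is then a single indexed query.
dynamodb.query(
    TableName="PlantSensors",
    IndexName="FaultIndex",
    KeyConditionExpression="PlantId = :p",
    ExpressionAttributeValues={":p": {"S": "plant-42"}},
)
```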

A company is releasing a new mobile game featuring a team play mode. As a group of mobile device users play together, an item containing their statuses is updated in an Amazon DynamoDB table.

Periodically, the other users’ devices read the latest statuses of their teammates from the table using the BatchGetItem operation.

Prior to launch, some testers submitted bug reports claiming that the status data they were seeing in the game was not up-to-date. The developers are unable to replicate this issue and have asked a database specialist for a recommendation.

Which recommendation would resolve this issue?

A. Ensure the DynamoDB table is configured to be always consistent.
B. Ensure the BatchGetItem operation is called with the ConsistentRead parameter set to false.
C. Enable a stream on the DynamoDB table and subscribe each device to the stream to ensure all devices receive up-to-date status information.
D. Ensure the BatchGetItem operation is called with the ConsistentRead parameter set to true.
Suggested answer: D

Explanation:


https://docs.aws.amazon.com/ja_jp/amazondynamodb/latest/developerguide/API_BatchGetItem_v20111205.html

By default, BatchGetItem performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set ConsistentRead to true for any or all tables.
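
A hedged boto3 sketch of a strongly consistent batch read (table and key names are placeholder assumptions):

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Hypothetical table and keys; ConsistentRead is set per table in the
# request and defaults to False (eventually consistent).
response = dynamodb.batch_get_item(
    RequestItems={
        "TeamStatus": {
            "Keys": [
                {"PlayerId": {"S": "player-1"}},
                {"PlayerId": {"S": "player-2"}},
                {"PlayerId": {"S": "player-3"}},
            ],
            "ConsistentRead": True,
        }
    }
)
print(response["Responses"]["TeamStatus"])
```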

A company is running an Amazon RDS for MySQL Multi-AZ DB instance for a business-critical workload. RDS encryption for the DB instance is disabled. A recent security audit concluded that all business-critical applications must encrypt data at rest. The company has asked its database specialist to formulate a plan to accomplish this for the DB instance.

Which process should the database specialist recommend?

A. Create an encrypted snapshot of the unencrypted DB instance. Copy the encrypted snapshot to Amazon S3. Restore the DB instance from the encrypted snapshot using Amazon S3.
B. Create a new RDS for MySQL DB instance with encryption enabled. Restore the unencrypted snapshot to this DB instance.
C. Create a snapshot of the unencrypted DB instance. Create an encrypted copy of the snapshot. Restore the DB instance from the encrypted snapshot.
D. Temporarily shut down the unencrypted DB instance. Enable AWS KMS encryption in the AWS Management Console using an AWS managed CMK. Restart the DB instance in an encrypted state.
Suggested answer: C

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html#Overview.Encryption.Limitations
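
Encryption cannot be enabled on an existing instance; the snapshot-copy-restore path is the supported route. A hedged boto3 sketch of the sequence (identifiers are placeholder assumptions; passing KmsKeyId on the copy is what enables encryption):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Hypothetical identifiers. Step 1: snapshot the unencrypted instance.
rds.create_db_snapshot(
    DBInstanceIdentifier="prod-mysql",
    DBSnapshotIdentifier="prod-mysql-unencrypted-snap",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="prod-mysql-unencrypted-snap"
)

# Step 2: copy the snapshot; supplying KmsKeyId makes the copy encrypted.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="prod-mysql-unencrypted-snap",
    TargetDBSnapshotIdentifier="prod-mysql-encrypted-snap",
    KmsKeyId="alias/aws/rds",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="prod-mysql-encrypted-snap"
)

# Step 3: restore a new, encrypted instance from the encrypted copy.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="prod-mysql-encrypted",
    DBSnapshotIdentifier="prod-mysql-encrypted-snap",
)
```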

A company is migrating its on-premises database workloads to the AWS Cloud. A database specialist performing the move has chosen AWS DMS to migrate an Oracle database with a large table to Amazon RDS. The database specialist notices that AWS DMS is taking significant time to migrate the data.

Which actions would improve the data migration speed? (Choose three.)

A. Create multiple AWS DMS tasks to migrate the large table.
B. Configure the AWS DMS replication instance with Multi-AZ.
C. Increase the capacity of the AWS DMS replication server.
D. Establish an AWS Direct Connect connection between the on-premises data center and AWS.
E. Enable an Amazon RDS Multi-AZ configuration.
F. Enable full large binary object (LOB) mode to migrate all LOB data for all large tables.
Suggested answer: A, C, D

Explanation:
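
Running multiple tasks in parallel against the large table, scaling up the replication instance, and moving traffic onto a dedicated AWS Direct Connect link are the changes that raise migration throughput; Multi-AZ settings (B, E) add availability rather than speed, and full LOB mode (F) slows the load. As a hedged illustration, a boto3 sketch of scaling the replication instance (the ARN and instance class are placeholder assumptions):

```python
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Hypothetical ARN and class; a larger instance gives the full-load
# threads more CPU and memory to work with.
dms.modify_replication_instance(
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE",
    ReplicationInstanceClass="dms.c5.4xlarge",
    ApplyImmediately=True,
)
```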


A company is migrating a mission-critical 2-TB Oracle database from on premises to Amazon Aurora. The cost of the database migration must be kept to a minimum, and both the on-premises Oracle database and the Aurora DB cluster must remain open for write traffic until the company is ready to completely cut over to Aurora.

Which combination of actions should a database specialist take to accomplish this migration as quickly as possible? (Choose two.)

A. Use the AWS Schema Conversion Tool (AWS SCT) to convert the source database schema. Then restore the converted schema to the target Aurora DB cluster.
B. Use Oracle’s Data Pump tool to export a copy of the source database schema and manually edit the schema in a text editor to make it compatible with Aurora.
C. Create an AWS DMS task to migrate data from the Oracle database to the Aurora DB cluster. Select the migration type to replicate ongoing changes to keep the source and target databases in sync until the company is ready to move all user traffic to the Aurora DB cluster.
D. Create an AWS DMS task to migrate data from the Oracle database to the Aurora DB cluster. Once the initial load is complete, create an Amazon Kinesis Data Firehose stream to perform change data capture (CDC) until the company is ready to move all user traffic to the Aurora DB cluster.
E. Create an AWS Glue job and related resources to migrate data from the Oracle database to the Aurora DB cluster. Once the initial load is complete, create an AWS DMS task to perform change data capture (CDC) until the company is ready to move all user traffic to the Aurora DB cluster.
Suggested answer: A, C

Explanation:
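
AWS SCT converts the schema at no cost, and a single AWS DMS task with the full-load-and-cdc migration type copies the data and then replicates ongoing changes until cutover. A hedged boto3 sketch of such a task (all ARNs and the table mapping are placeholder assumptions):

```python
import json

import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Hypothetical ARNs. MigrationType="full-load-and-cdc" performs the initial
# copy, then keeps replicating changes so both databases stay writable
# until the cutover.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INST",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```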


A company has a 20 TB production Amazon Aurora DB cluster. The company runs a large batch job overnight to load data into the Aurora DB cluster. To ensure the company’s development team has the most up-to-date data for testing, a copy of the DB cluster must be available in the shortest possible time after the batch job completes.

How should this be accomplished?

A. Use the AWS CLI to schedule a manual snapshot of the DB cluster. Restore the snapshot to a new DB cluster using the AWS CLI.
B. Create a dump file from the DB cluster. Load the dump file into a new DB cluster.
C. Schedule a job to create a clone of the DB cluster at the end of the overnight batch process.
D. Set up a new daily AWS DMS task that will use cloning and change data capture (CDC) on the DB cluster to copy the data to a new DB cluster. Set up a time for the AWS DMS stream to stop when the new cluster is current.
Suggested answer: C

Explanation:
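
Aurora cloning uses a copy-on-write protocol, so even a 20 TB cluster can be cloned in minutes without copying the data up front. A hedged boto3 sketch of a clone job (cluster identifiers and the instance class are placeholder assumptions):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Hypothetical identifiers. RestoreType="copy-on-write" creates a clone
# that shares storage with the source until pages diverge.
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="dev-clone",
    SourceDBClusterIdentifier="prod-aurora",
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)

# A clone starts with no instances; add one so the dev team can connect.
rds.create_db_instance(
    DBInstanceIdentifier="dev-clone-instance-1",
    DBClusterIdentifier="dev-clone",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)
```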


A company has two separate AWS accounts: one for the business unit and another for corporate analytics. The company wants to replicate the business unit data stored in Amazon RDS for MySQL in us-east-1 to its corporate analytics Amazon Redshift environment in us-west-1. The company wants to use AWS DMS with Amazon RDS as the source endpoint and Amazon Redshift as the target endpoint.

Which action will allow AWS DMS to perform the replication?

A. Configure the AWS DMS replication instance in the same account and Region as Amazon Redshift.
B. Configure the AWS DMS replication instance in the same account as Amazon Redshift and in the same Region as Amazon RDS.
C. Configure the AWS DMS replication instance in its own account and in the same Region as Amazon Redshift.
D. Configure the AWS DMS replication instance in the same account and Region as Amazon RDS.
Suggested answer: A

Explanation:


https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Redshift.html
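
Per the linked guide, the replication instance must live in the same AWS account and Region as the Amazon Redshift target cluster. A hedged boto3 sketch of creating it in the analytics account (identifier, class, and storage are placeholder assumptions):

```python
import boto3

# The client must use the corporate analytics account's credentials and
# us-west-1, the Region of the Redshift target.
dms = boto3.client("dms", region_name="us-west-1")

# Hypothetical identifier and instance class.
dms.create_replication_instance(
    ReplicationInstanceIdentifier="rds-to-redshift",
    ReplicationInstanceClass="dms.c5.xlarge",
    AllocatedStorage=100,
)
```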

A database specialist is managing an application in the us-west-1 Region and wants to set up disaster recovery in the us-east-1 Region. The Amazon Aurora MySQL DB cluster needs an RPO of 1 minute and an RTO of 2 minutes.

Which approach meets these requirements with no negative performance impact?

A. Enable synchronous replication.
B. Enable asynchronous binlog replication.
C. Create an Aurora Global Database.
D. Copy Aurora incremental snapshots to the us-east-1 Region.
Suggested answer: C

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database-disaster-recovery.html
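
An Aurora global database replicates to the secondary Region asynchronously over dedicated infrastructure, typically with sub-second lag, so the 1-minute RPO and 2-minute RTO are met without taxing the primary. A hedged boto3 sketch (identifiers are placeholder assumptions):

```python
import boto3

# Promote the existing us-west-1 cluster into a global database.
rds_west = boto3.client("rds", region_name="us-west-1")
rds_west.create_global_cluster(
    GlobalClusterIdentifier="app-global",
    SourceDBClusterIdentifier=(
        "arn:aws:rds:us-west-1:123456789012:cluster:app-primary"
    ),
)

# Add a secondary cluster in us-east-1 for disaster recovery; it can be
# promoted in minutes if the primary Region fails.
rds_east = boto3.client("rds", region_name="us-east-1")
rds_east.create_db_cluster(
    DBClusterIdentifier="app-secondary",
    GlobalClusterIdentifier="app-global",
    Engine="aurora-mysql",
)
```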
