Amazon DBS-C01 Practice Test - Questions Answers, Page 4

A Database Specialist must create a read replica to isolate read-only queries for an Amazon RDS for MySQL DB instance. Immediately after creating the read replica, users that query it report slow response times.

What could be causing these slow response times?

A. New volumes created from snapshots load lazily in the background
B. Long-running statements on the master
C. Insufficient resources on the master
D. Overload of a single replication thread by excessive writes on the master
Suggested answer: A

Explanation:

A snapshot is lazily loaded. If the volume is accessed where the data has not yet been loaded, the application accessing the volume encounters higher latency than normal while the data is loaded in the background.

https://aws.amazon.com/about-aws/whats-new/2019/11/amazon-ebs-fast-snapshot-restore-eliminates-need-for-prewarming-data-into-volumes-created-snapshots/
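The referenced fast snapshot restore feature eliminates this lazy-load latency. For illustration only, a minimal boto3 sketch that enables it on a snapshot (the snapshot ID and Availability Zone are placeholders, not from the question):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Volumes created from this snapshot in the listed AZs are delivered
# fully initialized, so first reads no longer pay the lazy-load penalty.
ec2.enable_fast_snapshot_restores(
    AvailabilityZones=["us-east-1a"],
    SourceSnapshotIds=["snap-0123456789abcdef0"],  # placeholder snapshot ID
)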

A company developed an AWS CloudFormation template used to create all new Amazon DynamoDB tables in its AWS account. The template configures provisioned throughput capacity using hardcoded values. The company wants to change the template so that the tables it creates in the future have independently configurable read and write capacity units assigned.

Which solution will enable this change?

A. Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Configure DynamoDB to provision throughput capacity using the stack’s mappings.
B. Add values for two Number parameters, rcuCount and wcuCount, to the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.
C. Add values for the rcuCount and wcuCount parameters as outputs of the template. Configure DynamoDB to provision throughput capacity using the stack outputs.
D. Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.
Suggested answer: B

Explanation:


Input parameter and FindInMap: you can use an input parameter with the Fn::FindInMap function to refer to a specific value in a map. For example, suppose you have a list of Regions and environment types that map to a specific AMI ID.

You can select the AMI ID that your stack uses by using an input parameter (EnvironmentType). To determine the Region, use the AWS::Region pseudo parameter, which returns the AWS Region in which you create the stack.

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
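For illustration, a minimal sketch of answer B: a template with two Number parameters referenced via Ref, deployed with boto3 (the stack name, table layout, and capacity values are placeholders, not from the question):

import boto3

TEMPLATE = """
Parameters:
  rcuCount:
    Type: Number
    Default: 5
  wcuCount:
    Type: Number
    Default: 5
Resources:
  MyTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - AttributeName: pk
          AttributeType: S
      KeySchema:
        - AttributeName: pk
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: !Ref rcuCount    # Ref replaces the hard-coded value
        WriteCapacityUnits: !Ref wcuCount
"""

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(
    StackName="dynamodb-table",  # placeholder stack name
    TemplateBody=TEMPLATE,
    Parameters=[
        {"ParameterKey": "rcuCount", "ParameterValue": "10"},
        {"ParameterKey": "wcuCount", "ParameterValue": "25"},
    ],
)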

A retail company with its main office in New York and another office in Tokyo plans to build a database solution on AWS. The company’s main workload consists of a mission-critical application that updates its application data in a data store. The team at the Tokyo office is building dashboards with complex analytical queries using the application data. The dashboards will be used to make buying decisions, so they need access to the application data in less than 1 second.

Which solution meets these requirements?

A. Use an Amazon RDS DB instance deployed in the us-east-1 Region with a read replica instance in the ap-northeast-1 Region. Create an Amazon ElastiCache cluster in the ap-northeast-1 Region to cache application data from the replica to generate the dashboards.
B. Use an Amazon DynamoDB global table in the us-east-1 Region with replication into the ap-northeast-1 Region. Use Amazon QuickSight for displaying dashboard results.
C. Use an Amazon RDS for MySQL DB instance deployed in the us-east-1 Region with a read replica instance in the ap-northeast-1 Region. Have the dashboard application read from the read replica.
D. Use an Amazon Aurora global database. Deploy the writer instance in the us-east-1 Region and the replica in the ap-northeast-1 Region. Have the dashboard application read from the replica in the ap-northeast-1 Region.
Suggested answer: D

Explanation:


https://aws.amazon.com/blogs/database/aurora-postgresql-disaster-recovery-solutions-using-amazon-aurora-global-database/
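A minimal boto3 sketch of the Aurora global database layout in answer D (all identifiers and credentials are placeholders; each cluster would still need DB instances added):

import boto3

rds_use1 = boto3.client("rds", region_name="us-east-1")
rds_apne1 = boto3.client("rds", region_name="ap-northeast-1")

# Global database container, then primary and secondary clusters.
rds_use1.create_global_cluster(
    GlobalClusterIdentifier="retail-global",
    Engine="aurora-mysql",
)
rds_use1.create_db_cluster(
    DBClusterIdentifier="retail-primary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="retail-global",
    MasterUsername="admin",
    MasterUserPassword="change-me",  # placeholder credential
)
# The secondary cluster inherits data; it must not set credentials.
rds_apne1.create_db_cluster(
    DBClusterIdentifier="retail-tokyo-replica",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="retail-global",
)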

A company is using Amazon RDS for PostgreSQL. The Security team wants all database connection requests to be logged and retained for 180 days. The RDS for PostgreSQL DB instance is currently using the default parameter group. A Database Specialist has identified that setting the log_connections parameter to 1 will enable connections logging.

Which combination of steps should the Database Specialist take to meet the logging and retention requirements? (Choose two.)

A. Update the log_connections parameter in the default parameter group
B. Create a custom parameter group, update the log_connections parameter, and associate the parameter group with the DB instance
C. Enable publishing of database engine logs to Amazon CloudWatch Logs and set the event expiration to 180 days
D. Enable publishing of database engine logs to an Amazon S3 bucket and set the lifecycle policy to 180 days
E. Connect to the RDS PostgreSQL host and update the log_connections parameter in the postgresql.conf file
Suggested answer: B, C

Explanation:


The default parameter group cannot be modified, so a custom parameter group with log_connections set to 1 must be created and associated with the DB instance. Amazon RDS does not provide host access, so postgresql.conf cannot be edited directly. Publishing the PostgreSQL logs to Amazon CloudWatch Logs and setting the log group retention to 180 days satisfies the retention requirement.

Reference: https://aws.amazon.com/blogs/database/working-with-rds-and-aurora-postgresql-logs-part-1/
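A boto3 sketch of the two steps, assuming a PostgreSQL 13 instance named mydb (all names are placeholders):

import boto3

rds = boto3.client("rds", region_name="us-east-1")
logs = boto3.client("logs", region_name="us-east-1")

# Custom parameter group (default groups cannot be modified).
rds.create_db_parameter_group(
    DBParameterGroupName="custom-postgres13",
    DBParameterGroupFamily="postgres13",
    Description="Enables connection logging",
)
rds.modify_db_parameter_group(
    DBParameterGroupName="custom-postgres13",
    Parameters=[{
        "ParameterName": "log_connections",
        "ParameterValue": "1",
        "ApplyMethod": "immediate",  # log_connections is a dynamic parameter
    }],
)
# Associate the group and publish the postgresql log to CloudWatch Logs.
rds.modify_db_instance(
    DBInstanceIdentifier="mydb",  # placeholder instance name
    DBParameterGroupName="custom-postgres13",
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["postgresql"]},
    ApplyImmediately=True,
)
# Retain the exported log group for 180 days.
logs.put_retention_policy(
    logGroupName="/aws/rds/instance/mydb/postgresql",
    retentionInDays=180,
)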

A Database Specialist is creating a new Amazon Neptune DB cluster and is attempting to load data from Amazon S3 into the Neptune DB cluster using the Neptune bulk loader API. The Database Specialist receives the following error:

“Unable to connect to s3 endpoint. Provided source = s3://mybucket/graphdata/ and region = us-east-1. Please verify your S3 configuration.”

Which combination of actions should the Database Specialist take to troubleshoot the problem? (Choose two.)

A. Check that Amazon S3 has an IAM role granting read access to Neptune
B. Check that an Amazon S3 VPC endpoint exists
C. Check that a Neptune VPC endpoint exists
D. Check that Amazon EC2 has an IAM role granting read access to Amazon S3
E. Check that Neptune has an IAM role granting read access to Amazon S3
Suggested answer: B, E

Explanation:


The Neptune bulk loader reaches Amazon S3 through a VPC endpoint, and the Neptune DB cluster must have an IAM role attached that grants read access to the bucket. Amazon EC2 instance roles are not involved in the load, and Amazon S3 does not assume a role to reach Neptune.

Reference: https://aws.amazon.com/premiumsupport/knowledge-center/s3-could-not-connect-endpoint-url/
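A boto3 sketch of the two checks (the VPC ID, cluster name, and role ARN are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
neptune = boto3.client("neptune", region_name="us-east-1")

# 1) Is there an S3 gateway endpoint in the cluster's VPC?
resp = ec2.describe_vpc_endpoints(
    Filters=[
        {"Name": "vpc-id", "Values": ["vpc-0123456789abcdef0"]},
        {"Name": "service-name", "Values": ["com.amazonaws.us-east-1.s3"]},
    ]
)
print(resp["VpcEndpoints"])  # empty list means the endpoint is missing

# 2) Attach an IAM role with S3 read access to the Neptune cluster.
neptune.add_role_to_db_cluster(
    DBClusterIdentifier="my-neptune-cluster",
    RoleArn="arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
)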

A database specialist manages a critical Amazon RDS for MySQL DB instance for a company. The amount of data stored daily can vary from 0.01% to 10% of the current database size. The database specialist needs to ensure that the DB instance storage grows as needed.

What is the MOST operationally efficient and cost-effective solution?

A. Configure RDS Storage Auto Scaling.
B. Configure RDS instance Auto Scaling.
C. Modify the DB instance allocated storage to meet the forecasted requirements.
D. Monitor the Amazon CloudWatch FreeStorageSpace metric daily and add storage as required.
Suggested answer: A

Explanation:


If your workload is unpredictable, you can enable storage autoscaling for an Amazon RDS DB instance. With storage autoscaling enabled, when Amazon RDS detects that you are running out of free database space, it automatically scales up your storage.

https://aws.amazon.com/about-aws/whats-new/2019/06/rds-storage-auto-scaling/

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.Autoscaling
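Enabling storage autoscaling amounts to setting a maximum storage ceiling. A boto3 sketch (the instance name and ceiling are placeholders):

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Setting MaxAllocatedStorage above the current allocation turns on
# RDS Storage Auto Scaling; RDS grows the volume when free space runs low.
rds.modify_db_instance(
    DBInstanceIdentifier="critical-mysql",  # placeholder instance name
    MaxAllocatedStorage=1000,               # autoscaling ceiling in GiB
    ApplyImmediately=True,
)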

A company is due to renew its database license. The company wants to migrate its 80 TB transactional database system from on premises to the AWS Cloud. The migration should incur the least possible downtime on the downstream database applications. The company’s network infrastructure has limited network bandwidth that is shared with other applications.

Which solution should a database specialist use for a timely migration?

A. Perform a full backup of the source database to AWS Snowball Edge appliances and ship them to be loaded to Amazon S3. Use AWS DMS to migrate change data capture (CDC) data from the source database to Amazon S3. Use a second AWS DMS task to migrate all the S3 data to the target database.
B. Perform a full backup of the source database to AWS Snowball Edge appliances and ship them to be loaded to Amazon S3. Periodically perform incremental backups of the source database to be shipped in another Snowball Edge appliance to handle syncing change data capture (CDC) data from the source to the target database.
C. Use AWS DMS to migrate the full load of the source database over a VPN tunnel using the internet for its primary connection. Allow AWS DMS to handle syncing change data capture (CDC) data from the source to the target database.
D. Use the AWS Schema Conversion Tool (AWS SCT) to migrate the full load of the source database over a VPN tunnel using the internet for its primary connection. Allow AWS SCT to handle syncing change data capture (CDC) data from the source to the target database.
Suggested answer: A

Explanation:


https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html - Using Amazon S3 as a target for AWS Database Migration Service
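A boto3 sketch of the CDC-only DMS task from answer A (all ARNs are placeholders; the endpoints and replication instance are assumed to exist):

import json

import boto3

dms = boto3.client("dms", region_name="us-east-1")

table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

# CDC-only task: stream ongoing changes to the S3 target while the
# Snowball-loaded full backup is restored into Amazon S3.
dms.create_replication_task(
    ReplicationTaskIdentifier="cdc-to-s3",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:S3TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:RI",
    MigrationType="cdc",
    TableMappings=json.dumps(table_mappings),
)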

A gaming company wants to deploy a game in multiple Regions. The company plans to save local high scores in Amazon DynamoDB tables in each Region. A Database Specialist needs to design a solution to automate the deployment of the database with identical configurations in additional Regions, as needed. The solution should also automate configuration changes across all Regions.

Which solution would meet these requirements and deploy the DynamoDB tables?

A. Create an AWS CLI command to deploy the DynamoDB table to all the Regions and save it for future deployments.
B. Create an AWS CloudFormation template and deploy the template to all the Regions.
C. Create an AWS CloudFormation template and use a stack set to deploy the template to all the Regions.
D. Create DynamoDB tables using the AWS Management Console in all the Regions and create a step-by-step guide for future deployments.
Suggested answer: C

Explanation:


https://aws.amazon.com/blogs/aws/use-cloudformation-stacksets-to-provision-resources-across-multiple-aws-accounts-and-regions/

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-concepts.html
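A boto3 sketch of answer C (the stack set name, account, table layout, and Regions are placeholders; the stack set administration and execution roles are assumed to be in place):

import boto3

TEMPLATE = """
Resources:
  ScoresTable:
    Type: AWS::DynamoDB::Table
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: playerId
          AttributeType: S
      KeySchema:
        - AttributeName: playerId
          KeyType: HASH
"""

cfn = boto3.client("cloudformation", region_name="us-east-1")

# One stack set holds the configuration; a later update_stack_set call
# propagates configuration changes to every deployed Region.
cfn.create_stack_set(
    StackSetName="game-scores-tables",
    TemplateBody=TEMPLATE,
)
cfn.create_stack_instances(
    StackSetName="game-scores-tables",
    Accounts=["123456789012"],  # placeholder account
    Regions=["us-east-1", "ap-northeast-1", "eu-west-1"],
)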

A team of Database Specialists is currently investigating performance issues on an Amazon RDS for MySQL DB instance and is reviewing related metrics. The team wants to narrow the possibilities down to specific database wait events to better understand the situation.

How can the Database Specialists accomplish this?

A. Enable the option to push all database logs to Amazon CloudWatch for advanced analysis
B. Create appropriate Amazon CloudWatch dashboards to contain specific periods of time
C. Enable Amazon RDS Performance Insights and review the appropriate dashboard
D. Enable Enhanced Monitoring with the appropriate settings
Suggested answer: C

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.Enabling.html

https://aws.amazon.com/rds/performance-insights/

https://aws.amazon.com/blogs/database/tuning-amazon-rds-for-mysql-with-performance-insights/
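A boto3 sketch that turns on Performance Insights for an existing instance (the instance name is a placeholder); the Performance Insights dashboard then breaks database load down by wait event, SQL, host, and user:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_instance(
    DBInstanceIdentifier="mysql-prod",     # placeholder instance name
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=7,  # days; the free-tier retention
    ApplyImmediately=True,
)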

A large company is using an Amazon RDS for Oracle Multi-AZ DB instance with a Java application. As part of its annual disaster recovery testing, the company would like to simulate an Availability Zone failure and record how the application reacts during the DB instance failover activity. The company does not want to make any code changes for this activity.

What should the company do to achieve this in the shortest amount of time?

A. Use a blue-green deployment with a complete application-level failover test
B. Use the RDS console to reboot the DB instance by choosing the option to reboot with failover
C. Use RDS fault injection queries to simulate the primary node failure
D. Add a rule to the NACL to deny all traffic on the subnets associated with a single Availability Zone
Suggested answer: B

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RebootInstance.html

https://exain.wordpress.com/2017/07/12/amazon-rds-multi-az-setup-failover-simulation/

"Rebooting with failover is beneficial when you want to simulate a failure of a DB instance for testing, or restore operations to the original AZ after a failover occurs."
