
Amazon DBS-C01 Practice Test - Questions Answers, Page 19


Question 181


Recently, an ecommerce business transferred one of its SQL Server databases to an Amazon RDS for SQL Server Enterprise Edition database instance. The corporation anticipates an increase in read traffic as a result of an approaching sale. To accommodate the projected read load, a database professional must establish a read replica of the database instance.

Which steps should the database professional complete before creating the read replica? (Select two.)

A. Identify a potential downtime window and stop the application calls to the source DB instance.
B. Ensure that automatic backups are enabled for the source DB instance.
C. Ensure that the source DB instance is a Multi-AZ deployment with Always On Availability Groups.
D. Ensure that the source DB instance is a Multi-AZ deployment with SQL Server Database Mirroring (DBM).
E. Modify the read replica parameter group setting and set the value to 1.
Suggested answer: B, C

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.ReadReplicas.html
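As a hedged illustration of the two prerequisites (answers B and C), the sketch below uses boto3 with hypothetical instance identifiers and settings; it is one possible way to perform these steps, not the only one.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# B: make sure automated backups are enabled on the source instance
# (a retention period greater than 0). MultiAZ=True requests a Multi-AZ
# deployment; whether RDS uses Always On Availability Groups or Database
# Mirroring depends on the SQL Server edition and engine version (C).
rds.modify_db_instance(
    DBInstanceIdentifier="sqlserver-prod",  # hypothetical identifier
    BackupRetentionPeriod=7,
    MultiAZ=True,
    ApplyImmediately=True,
)

# Once the source instance reports the new settings, create the read replica.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="sqlserver-prod-replica",
    SourceDBInstanceIdentifier="sqlserver-prod",
)
```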


Question 182


An AWS CloudFormation stack that included an Amazon RDS DB instance was mistakenly deleted, resulting in the loss of recent data. A database specialist must add RDS settings to the CloudFormation template to minimize the possibility of future inadvertent instance data loss.

Which settings will satisfy this criterion? (Select three.)

A. Set DeletionProtection to True
B. Set MultiAZ to True
C. Set TerminationProtection to True
D. Set DeleteAutomatedBackups to False
E. Set DeletionPolicy to Delete
F. Set DeletionPolicy to Retain
Suggested answer: A, D, F

Explanation:


A - https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-rds-now-provides-database-deletion-protection/

D - https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html

F - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
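A minimal sketch of how the three suggested settings (A, D, F) might appear in a template, expressed here as a Python dictionary with a hypothetical resource name; DeletionPolicy is a resource attribute, while the other two are AWS::RDS::DBInstance properties.

```python
import json

# Hypothetical CloudFormation template fragment; only the loss-prevention
# settings are shown, and the other required DB instance properties are omitted.
template = {
    "Resources": {
        "ProductionDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Retain",           # F: keep the DB instance if the stack is deleted
            "Properties": {
                "DeletionProtection": True,       # A: block delete calls against the instance
                "DeleteAutomatedBackups": False,  # D: retain automated backups after deletion
            },
        }
    }
}

print(json.dumps(template, indent=2))
```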


Question 183


A business is developing a web application on AWS. The application requires a database that supports concurrent read and write activity in several AWS Regions and replicates data changes across Regions as they occur. The application must be highly available with a latency of less than a few hundred milliseconds.

Which solution satisfies these criteria?

A. Amazon DynamoDB global tables
B. Amazon DynamoDB Streams with AWS Lambda to replicate the data
C. An Amazon ElastiCache for Redis cluster with cluster mode enabled and multiple shards
D. An Amazon Aurora global database
Suggested answer: A

Explanation:


Aurora Global Database provides writer and reader endpoints in the primary Region but only reader endpoints in secondary Regions, so it cannot satisfy the requirement for concurrent writes in every Region. DynamoDB global tables replicate writes made in any Region to every other replica, which meets both the multi-Region read/write and the low-latency requirements.
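Every replica in a DynamoDB global table accepts writes, so each Region gets local read and write access. A minimal sketch of adding a replica Region with boto3, assuming global tables version 2019.11.21 and a hypothetical table name:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Add a replica in eu-west-1 to an existing table; writes made in either
# Region are replicated to the other, typically within about a second.
dynamodb.update_table(
    TableName="WebAppData",  # hypothetical table
    ReplicaUpdates=[
        {"Create": {"RegionName": "eu-west-1"}},
    ],
)
```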


Question 184


A business uses Amazon EC2 instances in VPC A to serve an internal file-sharing application. The application is backed by an Amazon ElastiCache cluster in VPC B, which is peered with VPC A.

The corporation migrates its application instances from VPC A to VPC B. The logs show that the file-sharing application can no longer connect to the ElastiCache cluster. What is the best course of action for a database professional to take in order to remedy this issue?
A. Create a second security group on the EC2 instances. Add an outbound rule to allow traffic from the ElastiCache cluster security group.
B. Delete the ElastiCache security group. Add an interface VPC endpoint to enable the EC2 instances to connect to the ElastiCache cluster.
C. Modify the ElastiCache security group by adding outbound rules that allow traffic to VPC CIDR blocks from the ElastiCache cluster.
D. Modify the ElastiCache security group by adding an inbound rule that allows traffic from the EC2 instances security group to the ElastiCache cluster.
Suggested answer: D

Explanation:


https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-security-groups.html
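A hedged sketch of answer D with boto3, using hypothetical security group IDs and assuming a Redis cluster on its default port 6379:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow the application instances' security group to reach the ElastiCache
# cluster's security group on the Redis port.
ec2.authorize_security_group_ingress(
    GroupId="sg-0elasticachecluster",  # hypothetical ElastiCache security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 6379,
            "ToPort": 6379,
            "UserIdGroupPairs": [
                {"GroupId": "sg-0appinstances"},  # hypothetical EC2 instances security group
            ],
        }
    ],
)
```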


Question 185


A ride-hailing application stores bookings in a persistent Amazon RDS for MySQL DB instance. This program is very popular, and the corporation anticipates a tenfold rise in the application's user base over the next several months. The application receives a higher volume of traffic in the morning and evening.

This application is divided into two sections:

An internal booking component that takes online reservations in response to concurrent user queries.

A component of a third-party customer relationship management (CRM) system that customer service professionals utilize. Booking data is accessed using queries in the CRM.

To manage this workload effectively, a database professional must create a cost-effective database system.

Which solution satisfies these criteria?

A. Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to the RDS for MySQL DB instance used by the CRM.
B. Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to an Amazon SQS queue. This triggers another Lambda function that pulls data from Amazon SQS and writes it to the RDS for MySQL DB instance used by the CRM.
C. Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to an Amazon Redshift database used by the CRM.
D. Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to Amazon Athena, which is used by the CRM.
Suggested answer: B

Explanation:


"AWS Lambda function to capture changes" capture changes to what? ElastiCache? The main use of ElastiCache is to cache frequently read data. Also "the company expects a tenfold increase in the user base" and "correspond to simultaneous requests from users"


Question 186


A corporation uses Amazon Neptune as the graph database for one of its products. During an ETL procedure, the company's data science team unintentionally produced enormous volumes of temporary data. The Neptune DB cluster automatically extended its storage capacity to handle the added data, and the data science team then deleted the superfluous data.

What should a database professional do to prevent incurring extra expenditures for cluster volume space that is not being used?

A. Take a snapshot of the cluster volume. Restore the snapshot in another cluster with a smaller volume size.
B. Use the AWS CLI to turn on automatic resizing of the cluster volume.
C. Export the cluster data into a new Neptune DB cluster.
D. Add a Neptune read replica to the cluster. Promote this replica as a new primary DB instance. Reset the storage space of the cluster.
Suggested answer: C

Explanation:


The only way to shrink the storage space used by your DB cluster when you have a large amount of unused allocated space is to export all the data in your graph and then reload it into a new DB cluster.

Creating and restoring a snapshot does not reduce the amount of storage allocated for your DB cluster, because a snapshot retains the original image of the cluster's underlying storage.


Question 187


A bank plans to use Amazon RDS to host a MySQL DB instance. The database must be able to handle a high volume of read requests with extremely few repeated queries.

Which solution satisfies these criteria?

A. Create an Amazon ElastiCache cluster. Use a write-through strategy to populate the cache.
B. Create an Amazon ElastiCache cluster. Use a lazy loading strategy to populate the cache.
C. Change the DB instance to Multi-AZ with a standby instance in another AWS Region.
D. Create a read replica of the DB instance. Use the read replica to distribute the read traffic.
Suggested answer: D

Explanation:
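Because the read workload repeats very few queries, a cache would have a low hit rate, which is why a read replica (answer D) fits better than ElastiCache here. A minimal sketch of creating the replica with boto3, assuming hypothetical instance identifiers:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a read replica of the primary MySQL instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="bank-mysql-replica-1",      # hypothetical
    SourceDBInstanceIdentifier="bank-mysql-primary",  # hypothetical
)

# Wait until the replica is available, then point read-only traffic at its endpoint.
rds.get_waiter("db_instance_available").wait(
    DBInstanceIdentifier="bank-mysql-replica-1"
)
endpoint = rds.describe_db_instances(
    DBInstanceIdentifier="bank-mysql-replica-1"
)["DBInstances"][0]["Endpoint"]["Address"]
print("Send read-only queries to:", endpoint)
```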



Question 188


A database specialist is building a stack with AWS CloudFormation. The specialist wants to prevent the stack's Amazon RDS ProductionDatabase resource from being accidentally deleted.

Which solution will satisfy this criterion?

A. Create a stack policy to prevent updates. Include “Effect” : “ProductionDatabase” and “Resource” : “Deny” in the policy.
B. Create an AWS CloudFormation stack in XML format. Set xAttribute as false.
C. Create an RDS DB instance without the DeletionPolicy attribute. Disable termination protection.
D. Create a stack policy to prevent updates. Include Effect, Deny, and Resource :ProductionDatabase in the policy.
Suggested answer: D

Explanation:


https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html

"When you set a stack policy, all resources are protected by default. To allow updates on all resources, we add an Allow statement that allows all actions on all resources. Although the Allow statement specifies all resources, the explicit Deny statement overrides it for the resource with the ProductionDatabase logical ID. This Deny statement prevents all update actions, such as replacement or deletion, on the ProductionDatabase resource."


Question 189


A business uses Amazon DynamoDB global tables to power an online game that is played by gamers from all around the globe. As the game grew in popularity, the volume of requests to DynamoDB rose substantially. Recently, gamers have complained that the game state is inconsistent between countries. A database professional notices that the ReplicationLatency metric for many replica tables is abnormally high.

Which strategy will resolve the issue?

A. Configure all replica tables to use DynamoDB auto scaling.
B. Configure a DynamoDB Accelerator (DAX) cluster on each of the replicas.
C. Configure the primary table to use DynamoDB auto scaling and the replica tables to use manually provisioned capacity.
D. Configure the table-level write throughput limit service quota to a higher value.
Suggested answer: A

Explanation:


https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/V2globaltables_reqs_bestpractices.html
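A minimal sketch of answer A for one replica table, using boto3 and Application Auto Scaling with a hypothetical table name, Region, and capacity limits; the same registration would be repeated for each replica (and for read capacity, if needed).

```python
import boto3

autoscaling = boto3.client("application-autoscaling", region_name="eu-west-1")

# Register the replica table's write capacity as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/GameState",  # hypothetical table
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=100,
    MaxCapacity=4000,
)

# Attach a target-tracking policy that keeps write utilization near 70%.
autoscaling.put_scaling_policy(
    PolicyName="GameStateWriteScaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/GameState",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
```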


Question 190


A gaming firm recently acquired an iOS game that is especially popular during the Christmas season. The business has opted to add a leaderboard to the game, powered by Amazon DynamoDB. The application's load is likely to increase significantly throughout the Christmas season.

Which solution satisfies these criteria at the lowest possible cost?

A. DynamoDB Streams
B. DynamoDB with DynamoDB Accelerator
C. DynamoDB with on-demand capacity mode
D. DynamoDB with provisioned capacity mode with Auto Scaling
Suggested answer: D

Explanation:


"On-demand is ideal for bursty, new, or unpredictable workloads whose traffic can spike in seconds or minutes" vs. 'DynamoDB released auto scaling to make it easier for you to manage capacity efficiently, and auto scaling continues to help DynamoDB users lower the cost of workloads that have a predictable traffic pattern."

https://aws.amazon.com/blogs/database/amazon-dynamodb-auto-scaling-performance-and-costoptimization-at-any-scale/
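As a hedged sketch, switching a table to provisioned capacity might look like the call below (hypothetical table name and baseline values); auto scaling policies would then be attached, as in the previous question's sketch, to absorb the seasonal peaks.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Set a provisioned baseline sized for normal traffic; auto scaling then
# adjusts these values as the Christmas-season load ramps up and down.
dynamodb.update_table(
    TableName="Leaderboard",  # hypothetical table
    BillingMode="PROVISIONED",
    ProvisionedThroughput={
        "ReadCapacityUnits": 200,
        "WriteCapacityUnits": 100,
    },
)
```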
