Amazon DBS-C01 Practice Test - Questions Answers, Page 2


An ecommerce company is using Amazon DynamoDB as the backend for its order-processing application. The steady increase in the number of orders is resulting in increased DynamoDB costs. Order verification and reporting perform many repeated GetItem functions that pull similar datasets, and this read activity is contributing to the increased costs. The company wants to control these costs without significant development efforts.

How should a Database Specialist address these requirements?

A. Use AWS DMS to migrate data from DynamoDB to Amazon DocumentDB
B. Use Amazon DynamoDB Streams and Amazon Kinesis Data Firehose to push the data into Amazon Redshift
C. Use Amazon ElastiCache for Redis in front of DynamoDB to boost read performance
D. Use DynamoDB Accelerator to offload the reads
Suggested answer: D

Explanation:


https://docs.amazonaws.cn/en_us/amazondynamodb/latest/developerguide/DAX.html

"Applications that are read-intensive, but are also cost-sensitive. With DynamoDB, you provision the number of reads per second that your application requires. If read activity increases, you can increase your tables' provisioned read throughput (at an additional cost). Or, you can offload the activity from your application to a DAX cluster, and reduce the number of read capacity units that you need to purchase otherwise."

An IT consulting company wants to reduce costs when operating its development environment databases. The company’s workflow creates multiple Amazon Aurora MySQL DB clusters for each development group. The Aurora DB clusters are only used for 8 hours a day. The DB clusters can then be deleted at the end of the development cycle, which lasts 2 weeks.

Which of the following provides the MOST cost-effective solution?

A. Use AWS CloudFormation templates. Deploy a stack with the DB cluster for each development group. Delete the stack at the end of the development cycle.
B. Use the Aurora DB cloning feature. Deploy a single development and test Aurora DB instance, and create clone instances for the development groups. Delete the clones at the end of the development cycle.
C. Use Aurora Replicas. From the master automatic pause compute capacity option, create replicas for each development group, and promote each replica to master. Delete the replicas at the end of the development cycle.
D. Use Aurora Serverless. Restore current Aurora snapshot and deploy to a serverless cluster for each development group. Enable the option to pause the compute capacity on the cluster and set an appropriate timeout.
Suggested answer: B

Explanation:


Aurora Serverless is not compatible with every provisioned Aurora engine version, whereas cloning works with most engine versions. Restoring a snapshot into a serverless cluster for each group is also slower than cloning, which shares storage with the source cluster and only copies pages as they diverge.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.how-it-works.html#aurora-serverless.how-it-works.pause-resume

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.html#aurora-serverless.use-cases
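For illustration, a clone is created through the RestoreDBClusterToPointInTime API in copy-on-write mode. A hedged boto3 sketch, with made-up cluster and group names:

```python
# Sketch: create an Aurora clone for a development group, then delete it
# at the end of the two-week cycle. Identifiers are placeholders.

def clone_params(source_cluster: str, group: str) -> dict:
    """Parameters for RestoreDBClusterToPointInTime in copy-on-write
    (clone) mode, which shares storage with the source until pages diverge."""
    return {
        "SourceDBClusterIdentifier": source_cluster,
        "DBClusterIdentifier": f"{source_cluster}-{group}-clone",
        "RestoreType": "copy-on-write",   # this is what makes it a clone
        "UseLatestRestorableTime": True,
    }

def create_clone(source_cluster: str, group: str) -> None:
    import boto3  # imported lazily so the parameter helper needs no SDK
    rds = boto3.client("rds")
    rds.restore_db_cluster_to_point_in_time(**clone_params(source_cluster, group))
    # End of the development cycle:
    # rds.delete_db_cluster(DBClusterIdentifier=..., SkipFinalSnapshot=True)
```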

A company has deployed an e-commerce web application in a new AWS account. An Amazon RDS for MySQL Multi-AZ DB instance is part of this deployment with a database-1.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com endpoint listening on port 3306. The company’s Database Specialist is able to log in to MySQL and run queries from the bastion host using these details.

When users try to utilize the application hosted in the AWS account, they are presented with a generic error message. The application servers are logging a “could not connect to server: Connection times out” error message to Amazon CloudWatch Logs.

What is the cause of this error?

A. The user name and password the application is using are incorrect.
B. The security group assigned to the application servers does not have the necessary rules to allow inbound connections from the DB instance.
C. The security group assigned to the DB instance does not have the necessary rules to allow inbound connections from the application servers.
D. The user name and password are correct, but the user is not authorized to use the DB instance.
Suggested answer: C

Explanation:


Reference: https://forums.aws.amazon.com/thread.jspa?threadID=129700
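A connection timeout (rather than an authentication error) points to network filtering. The fix can be sketched with boto3; the security group IDs below are placeholders:

```python
# Sketch: authorize inbound MySQL (TCP 3306) on the DB instance's security
# group from the application servers' security group.

def mysql_ingress_rule(app_sg_id: str) -> dict:
    """An ingress permission letting the app tier's security group reach
    the database on port 3306."""
    return {
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": app_sg_id}],
    }

def open_db_port(db_sg_id: str, app_sg_id: str) -> None:
    import boto3  # imported lazily so the rule helper needs no SDK
    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId=db_sg_id,
        IpPermissions=[mysql_ingress_rule(app_sg_id)],
    )
```

Referencing the application tier's security group instead of IP ranges keeps the rule valid as application servers are replaced.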

An AWS CloudFormation stack that included an Amazon RDS DB instance was accidentally deleted and recent data was lost. A Database Specialist needs to add RDS settings to the CloudFormation template to reduce the chance of accidental instance data loss in the future. Which settings will meet this requirement? (Choose three.)

A. Set DeletionProtection to True
B. Set MultiAZ to True
C. Set TerminationProtection to True
D. Set DeleteAutomatedBackups to False
E. Set DeletionPolicy to Delete
F. Set DeletionPolicy to Retain
Suggested answer: A, C, F

Explanation:


Reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html

https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-accidental-updates/
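An illustrative template fragment for the RDS resource, expressed here as a Python dict (resource name and property values are made up). DeletionPolicy: Retain keeps the instance when the stack is deleted, and DeletionProtection blocks DeleteDBInstance calls; stack termination protection is a stack-level flag set outside the template (for example with the AWS CLI's update-termination-protection command):

```python
# Illustrative CloudFormation fragment with the deletion safeguards.
import json

db_resource = {
    "MyDB": {
        "Type": "AWS::RDS::DBInstance",
        "DeletionPolicy": "Retain",          # keep the instance on stack delete
        "Properties": {
            "DeletionProtection": True,      # refuse API deletion attempts
            "Engine": "mysql",
            "DBInstanceClass": "db.t3.medium",  # illustrative value
            "AllocatedStorage": "20",
        },
    }
}
print(json.dumps(db_resource, indent=2))
```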

A Database Specialist is troubleshooting an application connection failure on an Amazon Aurora DB cluster with multiple Aurora Replicas that had been running with no issues for the past 2 months. The connection failure lasted for 5 minutes and corrected itself after that. The Database Specialist reviewed the Amazon RDS events and determined a failover event occurred at that time. The failover process took around 15 seconds to complete.

What is the MOST likely cause of the 5-minute connection outage?

A. After a database crash, Aurora needed to replay the redo log from the last database checkpoint
B. The client-side application is caching the DNS data and its TTL is set too high
C. After failover, the Aurora DB cluster needs time to warm up before accepting client connections
D. There were no active Aurora Replicas in the Aurora DB cluster
Suggested answer: B

Explanation:


When an application tries to establish a connection after a failover, the new Aurora writer is a promoted former reader, and cached DNS entries can still point at the old writer until the change propagates. Setting the client-side DNS TTL (for example, the JVM's networkaddress.cache.ttl property) to a low value lets subsequent connection attempts resolve to the new writer quickly.

Amazon Aurora is designed to recover from a crash almost instantaneously and continue to serve your application data. Unlike other databases, after a crash Amazon Aurora does not need to replay the redo log from the last database checkpoint before making the database available for operations.

Amazon Aurora performs crash recovery asynchronously on parallel threads, so your database is open and available immediately after a crash. Because the storage is organized in many small segments, each with its own redo log, the underlying storage can replay redo records on demand in parallel and asynchronously as part of a disk read after a crash. This approach reduces database restart times to less than 60 seconds in most cases.
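The client-side remedy can be sketched as re-resolving the cluster endpoint on every (re)connection attempt instead of caching one IP address, so a failover's DNS change is picked up quickly. The endpoint name below is a placeholder:

```python
# Sketch: fresh DNS lookup per connection attempt, with no client-side cache.
import socket

def resolve_writer(endpoint: str, port: int = 3306) -> list:
    """Fresh A-record lookup for the cluster endpoint."""
    infos = socket.getaddrinfo(endpoint, port, family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

# A JVM-based client would instead cap its DNS cache with:
#   java.security.Security.setProperty("networkaddress.cache.ttl", "1")

print(resolve_writer("localhost", 3306))
```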

A company is deploying a solution in Amazon Aurora by migrating from an on-premises system. The IT department has established an AWS Direct Connect link from the company’s data center. The company’s Database Specialist has selected the option to require SSL/TLS for connectivity to prevent plaintext data from being sent over the network. The migration appears to be working successfully, and the data can be queried from a desktop machine.

Two Data Analysts have been asked to query and validate the data in the new Aurora DB cluster. Both Analysts are unable to connect to Aurora. Their user names and passwords have been verified as valid, and the Database Specialist can connect to the DB cluster using their accounts. The Database Specialist also verified that the security group configuration allows network traffic from all corporate IP addresses.

What should the Database Specialist do to correct the Data Analysts’ inability to connect?

A. Restart the DB cluster to apply the SSL change.
B. Instruct the Data Analysts to download the root certificate and use the SSL certificate on the connection string to connect.
C. Add explicit mappings between the Data Analysts’ IP addresses and the instance in the security group assigned to the DB cluster.
D. Modify the Data Analysts’ local client firewall to allow network traffic to AWS.
Suggested answer: B

Explanation:


To connect using SSL:

• Provide the SSL trust (root CA) certificate, which can be downloaded from AWS.

• Provide the SSL options when connecting to the database.

• Connecting without SSL to a DB cluster that enforces SSL results in an error.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/ssl-certificate-rotation-aurora-postgresql.html
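For an Aurora PostgreSQL cluster, the Analysts' connection could look like the following hedged sketch; the host, user, and certificate bundle path are placeholders, and psycopg2 is assumed as the driver:

```python
# Sketch: connect with server certificate verification using the downloaded
# AWS root CA bundle.

def ssl_conn_params(host: str, user: str, password: str) -> dict:
    """libpq-style parameters enforcing certificate and hostname checks."""
    return {
        "host": host,
        "port": 5432,
        "user": user,
        "password": password,
        "dbname": "postgres",
        "sslmode": "verify-full",            # verify cert and hostname
        "sslrootcert": "global-bundle.pem",  # downloaded AWS root CA bundle
    }

def connect(host: str, user: str, password: str):
    import psycopg2  # imported lazily so the params helper needs no driver
    return psycopg2.connect(**ssl_conn_params(host, user, password))
```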

A company is concerned about the cost of a large-scale, transactional application using Amazon DynamoDB that only needs to store data for 2 days before it is deleted. In looking at the tables, a Database Specialist notices that much of the data is months old, and goes back to when the application was first deployed.

What can the Database Specialist do to reduce the overall cost?

A. Create a new attribute in each table to track the expiration time and create an AWS Glue transformation to delete entries more than 2 days old.
B. Create a new attribute in each table to track the expiration time and enable DynamoDB Streams on each table.
C. Create a new attribute in each table to track the expiration time and enable time to live (TTL) on each table.
D. Create an Amazon CloudWatch Events event to export the data to Amazon S3 daily using AWS Data Pipeline and then truncate the Amazon DynamoDB table.
Suggested answer: C

Explanation:


https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
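TTL deletes expired items at no extra cost once an epoch-seconds attribute is designated. A hedged sketch with made-up table and attribute names:

```python
# Sketch: enable TTL on a table and stamp each item with an expiry two
# days out. DynamoDB deletes expired items in the background for free.

EXPIRY_ATTR = "expires_at"  # hypothetical attribute name

def expiry_epoch(now: float, days: int = 2) -> int:
    """Epoch seconds after which DynamoDB may delete the item."""
    return int(now) + days * 24 * 60 * 60

def enable_ttl(table_name: str) -> None:
    import boto3  # imported lazily so expiry_epoch is usable without the SDK
    ddb = boto3.client("dynamodb")
    ddb.update_time_to_live(
        TableName=table_name,
        TimeToLiveSpecification={"Enabled": True, "AttributeName": EXPIRY_ATTR},
    )
    # New writes then include, e.g.:
    #   {"order_id": {"S": "123"},
    #    EXPIRY_ATTR: {"N": str(expiry_epoch(time.time()))}}
```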

A company has an on-premises system that tracks various database operations that occur over the lifetime of a database, including database shutdown, deletion, creation, and backup.

The company recently moved two databases to Amazon RDS and is looking at a solution that would satisfy these requirements. The data could be used by other systems within the company.

Which solution will meet these requirements with minimal effort?

A. Create an Amazon CloudWatch Events rule with the operations that need to be tracked on Amazon RDS. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.
B. Create an AWS Lambda function to trigger on AWS CloudTrail API calls. Filter on specific RDS API calls and write the output to the tracking systems.
C. Create RDS event subscriptions. Have the tracking systems subscribe to specific RDS event system notifications.
D. Write RDS logs to Amazon Kinesis Data Firehose. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.
Suggested answer: C

Explanation:
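RDS event subscriptions publish lifecycle notifications (creation, deletion, backup, availability changes, and so on) to an Amazon SNS topic, so other systems can consume them with no custom polling code. A hedged boto3 sketch, with a made-up subscription name and topic ARN:

```python
# Sketch: subscribe the tracking systems to RDS lifecycle events via SNS.

def subscription_params(sns_topic_arn: str) -> dict:
    """Parameters for CreateEventSubscription covering the operations the
    company tracks today."""
    return {
        "SubscriptionName": "db-lifecycle-tracking",
        "SnsTopicArn": sns_topic_arn,
        "SourceType": "db-instance",
        "EventCategories": ["creation", "deletion", "backup", "availability"],
        "Enabled": True,
    }

def subscribe(sns_topic_arn: str) -> None:
    import boto3  # imported lazily so the params helper needs no SDK
    rds = boto3.client("rds")
    rds.create_event_subscription(**subscription_params(sns_topic_arn))
```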


A clothing company uses a custom ecommerce application and a PostgreSQL database to sell clothes to thousands of users from multiple countries. The company is migrating its application and database from its on-premises data center to the AWS Cloud. The company has selected Amazon EC2 for the application and Amazon RDS for PostgreSQL for the database. The company requires database passwords to be changed every 60 days. A Database Specialist needs to ensure that the credentials used by the web application to connect to the database are managed securely.

Which approach should the Database Specialist take to securely manage the database credentials?

A. Store the credentials in a text file in an Amazon S3 bucket. Restrict permissions on the bucket to the IAM role associated with the instance profile only. Modify the application to download the text file and retrieve the credentials on start up. Update the text file every 60 days.
B. Configure IAM database authentication for the application to connect to the database. Create an IAM user and map it to a separate database user for each ecommerce user. Require users to update their passwords every 60 days.
C. Store the credentials in AWS Secrets Manager. Restrict permissions on the secret to only the IAM role associated with the instance profile. Modify the application to retrieve the credentials from Secrets Manager on start up. Configure the rotation interval to 60 days.
D. Store the credentials in an encrypted text file in the application AMI. Use AWS KMS to store the key for decrypting the text file. Modify the application to decrypt the text file and retrieve the credentials on start up. Update the text file and publish a new AMI every 60 days.
Suggested answer: C

Explanation:
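Secrets Manager stores the credentials, rotates them automatically on the 60-day schedule, and gates access through IAM. The application's startup retrieval can be sketched as follows; the secret name is hypothetical, and the JSON field layout matches what RDS rotation typically stores (an assumption worth verifying):

```python
# Sketch: fetch and parse the database secret at application startup.
import json

def parse_secret(secret_string: str):
    """Extract (username, password) from a Secrets Manager SecretString."""
    secret = json.loads(secret_string)
    return secret["username"], secret["password"]

def fetch_credentials(secret_id: str):
    import boto3  # imported lazily so parse_secret is testable without the SDK
    sm = boto3.client("secretsmanager")
    resp = sm.get_secret_value(SecretId=secret_id)
    return parse_secret(resp["SecretString"])
```

Fetching on every startup (rather than caching forever) ensures the application picks up rotated credentials.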


A financial services company is developing a shared data service that supports different applications from throughout the company. A Database Specialist designed a solution to leverage Amazon ElastiCache for Redis with cluster mode enabled to enhance performance and scalability. The cluster is configured to listen on port 6379.

Which combination of steps should the Database Specialist take to secure the cache data and protect it from unauthorized access? (Choose three.)

A. Enable in-transit and at-rest encryption on the ElastiCache cluster.
B. Ensure that Amazon CloudWatch metrics are configured in the ElastiCache cluster.
C. Ensure the security group for the ElastiCache cluster allows all inbound traffic from itself and inbound traffic on TCP port 6379 from trusted clients only.
D. Create an IAM policy to allow the application service roles to access all ElastiCache API actions.
E. Ensure the security group for the ElastiCache clients authorize inbound TCP port 6379 and port 22 traffic from the trusted ElastiCache cluster’s security group.
F. Ensure the cluster is created with the auth-token parameter and that the parameter is used in all subsequent commands.
Suggested answer: A, C, F

Explanation:


https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/encryption.html
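A client connecting to such a cluster must use TLS and present the AUTH token. A hedged sketch with a placeholder host and token, assuming the redis-py library:

```python
# Sketch: connect with in-transit encryption and the Redis AUTH token.

def client_kwargs(host: str, auth_token: str) -> dict:
    """Connection settings matching a cluster created with in-transit
    encryption and an auth token."""
    return {
        "host": host,
        "port": 6379,
        "ssl": True,             # TLS for data in transit
        "password": auth_token,  # Redis AUTH token
    }

def connect(host: str, auth_token: str):
    import redis  # imported lazily so the kwargs helper needs no client library
    return redis.Redis(**client_kwargs(host, auth_token))
```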

Total 321 questions