Amazon DBS-C01 Practice Test - Questions Answers, Page 8

A Database Specialist is performing a proof of concept with Amazon Aurora using a small instance to confirm a simple database behavior. When loading a large dataset and creating the index, the Database Specialist encounters the following error message from Aurora:

ERROR: could not write block 7507718 of temporary file: No space left on device

What is the cause of this error, and what should the Database Specialist do to resolve this issue?

A. The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to modify the workload to load the data slowly.
B. The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to enable Aurora storage scaling.
C. The local storage used to store temporary tables is full. The Database Specialist needs to scale up the instance.
D. The local storage used to store temporary tables is full. The Database Specialist needs to enable local storage scaling.
Suggested answer: C

Explanation:

In Aurora, temporary tables and temporary files are written to local storage on the DB instance, and the amount of local storage is fixed by the instance class. Scaling up to a larger instance class provides more local storage for the index build.

Reference: https://serverfault.com/questions/109828/how-can-i-tune-postgres-to-avoid-this-error

A financial company wants to store sensitive user data in an Amazon Aurora PostgreSQL DB cluster.

The database will be accessed by multiple applications across the company. The company has mandated that all communications to the database be encrypted and the server identity must be validated. Any non-SSL-based connections should be disallowed access to the database.

Which solution addresses these requirements?

A. Set the rds.force_ssl=0 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=allow.
B. Set the rds.force_ssl=1 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=disable.
C. Set the rds.force_ssl=0 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-ca.
D. Set the rds.force_ssl=1 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-full.
Suggested answer: D

Explanation:


PostgreSQL: sslrootcert=rds-cert.pem sslmode=[verify-ca | verify-full]

Reference: https://forums.aws.amazon.com/message.jspa?messageID=734076
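As a hedged illustration of answer D, the client-side settings might be assembled like this. The endpoint, database, user, and certificate-bundle filename are hypothetical placeholders, not values from the question:

```python
# Sketch: build a libpq-style connection string that requires TLS and full
# server identity validation. Host, user, and bundle path are placeholders.
def build_dsn(host, dbname, user,
              sslmode="verify-full", sslrootcert="global-bundle.pem"):
    # verify-full checks both the CA chain and that the certificate's
    # hostname matches the endpoint; verify-ca checks only the CA chain.
    return (f"host={host} dbname={dbname} user={user} "
            f"sslmode={sslmode} sslrootcert={sslrootcert}")

dsn = build_dsn("mydb.cluster-xyz.us-east-1.rds.amazonaws.com",
                "appdb", "svc_user")
print(dsn)
```

With rds.force_ssl=1 set on the server side, any client that omits these SSL options is rejected, satisfying the "disallow non-SSL connections" requirement.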

A company is using 5 TB Amazon RDS DB instances and needs to maintain 5 years of monthly database backups for compliance purposes. A Database Administrator must provide Auditors with data within 24 hours. Which solution will meet these requirements and is the MOST operationally efficient?

A. Create an AWS Lambda function to run on the first day of every month to take a manual RDS snapshot. Move the snapshot to the company's Amazon S3 bucket.
B. Create an AWS Lambda function to run on the first day of every month to take a manual RDS snapshot.
C. Create an RDS snapshot schedule from the AWS Management Console to take a snapshot every 30 days.
D. Create an AWS Lambda function to run on the first day of every month to create an automated RDS snapshot.
Suggested answer: A

Explanation:


Unlike automated backups, manual snapshots aren't subject to the backup retention period.

Snapshots don't expire. For very long-term backups of MariaDB, MySQL, and PostgreSQL data, we recommend exporting snapshot data to Amazon S3. If the major version of your DB engine is no longer supported, you can't restore to that version from a snapshot.

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html

A company wants to automate the creation of secure test databases with random credentials to be stored safely for later use. The credentials should have sufficient information about each test database to initiate a connection and perform automated credential rotations. The credentials should not be logged or stored anywhere in an unencrypted form.

Which steps should a Database Specialist take to meet these requirements using an AWS CloudFormation template?

A. Create the database with the MasterUserName and MasterUserPassword properties set to the default values. Then, create the secret with the user name and password set to the same default values. Add a Secret Target Attachment resource with the SecretId and TargetId properties set to the Amazon Resource Names (ARNs) of the secret and the database. Finally, update the secret's password value with a randomly generated string set by the GenerateSecretString property.
B. Add a Mapping property from the database Amazon Resource Name (ARN) to the secret ARN. Then, create the secret with a chosen user name and a randomly generated password set by the GenerateSecretString property. Add the database with the MasterUserName and MasterUserPassword properties set to the user name of the secret.
C. Add a resource of type AWS::SecretsManager::Secret and specify the GenerateSecretString property. Then, define the database user name in the SecretStringTemplate template. Create a resource for the database and reference the secret string for the MasterUserName and MasterUserPassword properties. Then, add a resource of type AWS::SecretsManager::SecretTargetAttachment with the SecretId and TargetId properties set to the Amazon Resource Names (ARNs) of the secret and the database.
D. Create the secret with a chosen user name and a randomly generated password set by the GenerateSecretString property. Add a SecretTargetAttachment resource with the SecretId property set to the Amazon Resource Name (ARN) of the secret and the TargetId property set to a parameter value matching the desired database ARN. Then, create a database with the MasterUserName and MasterUserPassword properties set to the previously created values in the secret.
Suggested answer: C

Explanation:


Reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-secretsmanager-secrettargetattachment.html
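The resource layout described in answer C might be sketched as the following CloudFormation fragment. Resource names, the engine, and property values are illustrative, not taken from the question:

```yaml
Resources:
  AppDBSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      GenerateSecretString:
        SecretStringTemplate: '{"username": "admin"}'
        GenerateStringKey: password
        PasswordLength: 32
        ExcludeCharacters: '"@/\'
  AppDB:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.micro
      AllocatedStorage: '20'
      MasterUsername: !Sub '{{resolve:secretsmanager:${AppDBSecret}:SecretString:username}}'
      MasterUserPassword: !Sub '{{resolve:secretsmanager:${AppDBSecret}:SecretString:password}}'
  SecretAttachment:
    Type: AWS::SecretsManager::SecretTargetAttachment
    Properties:
      SecretId: !Ref AppDBSecret
      TargetId: !Ref AppDB
      TargetType: AWS::RDS::DBInstance
```

The dynamic reference keeps the generated password out of the template and stack events, and the SecretTargetAttachment adds the connection details needed for later credential rotation.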

A company is going to use an Amazon Aurora PostgreSQL DB cluster for an application backend. The DB cluster contains some tables with sensitive data. A Database Specialist needs to control the access privileges at the table level.

How can the Database Specialist meet these requirements?

A. Use AWS IAM database authentication and restrict access to the tables using an IAM policy.
B. Configure the rules in a NACL to restrict outbound traffic from the Aurora DB cluster.
C. Execute GRANT and REVOKE commands that restrict access to the tables containing sensitive data.
D. Define access privileges to the tables containing sensitive data in the pg_hba.conf file.
Suggested answer: C

Explanation:


Reference: https://aws.amazon.com/blogs/database/managing-postgresql-users-and-roles/
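A minimal sketch of the GRANT/REVOKE approach in answer C; the role and table names are hypothetical:

```sql
-- Remove any blanket grant on the sensitive table, then grant back
-- only what each application role needs:
REVOKE ALL ON customer_pii FROM PUBLIC;

CREATE ROLE app_readonly;
GRANT SELECT ON customer_pii TO app_readonly;

CREATE ROLE app_readwrite;
GRANT SELECT, INSERT, UPDATE ON customer_pii TO app_readwrite;
```

Because this is standard PostgreSQL privilege management, it works unchanged on Aurora PostgreSQL, where pg_hba.conf is not accessible.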

A Database Specialist is working with a company to launch a new website built on Amazon Aurora with several Aurora Replicas. This new website will replace an on-premises website connected to a legacy relational database. Due to stability issues in the legacy database, the company would like to test the resiliency of Aurora.

Which action can the Database Specialist take to test the resiliency of the Aurora DB cluster?

A. Stop the DB cluster and analyze how the website responds
B. Use Aurora fault injection to crash the master DB instance
C. Remove the DB cluster endpoint to simulate a master DB instance failure
D. Use Aurora Backtrack to crash the DB cluster
Suggested answer: B

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Managing.FaultInjectionQueries.html

"You can test the fault tolerance of your Amazon Aurora DB cluster by using fault injection queries. Fault injection queries are issued as SQL commands to an Amazon Aurora instance and they enable you to schedule a simulated occurrence of one of the following events:

- A crash of a writer or reader DB instance
- A failure of an Aurora Replica
- A disk failure
- Disk congestion

When a fault injection query specifies a crash, it forces a crash of the Aurora DB instance. The other fault injection queries result in simulations of failure events, but don't cause the event to occur. When you submit a fault injection query, you also specify an amount of time for the failure event simulation to occur for."
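These fault injection queries are issued as ordinary SQL against the cluster under test; a minimal sketch (the percentage and duration are illustrative):

```sql
-- Force a crash of the DB instance the session is connected to
-- (connect to the writer to crash the master, as in answer B):
ALTER SYSTEM CRASH INSTANCE;

-- Simulate (without actually causing) a failure of all Aurora Replicas
-- for 60 seconds:
ALTER SYSTEM SIMULATE 100 PERCENT READ REPLICA FAILURE TO ALL
    FOR INTERVAL 60 SECOND;
```

Running the crash query while load is applied lets the team observe whether the website fails over and recovers as expected.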

A company just migrated to Amazon Aurora PostgreSQL from an on-premises Oracle database. After the migration, the company discovered there is a period of time every day around 3:00 PM where the response time of the application is noticeably slower. The company has narrowed down the cause of this issue to the database and not the application.

Which set of steps should the Database Specialist take to most efficiently find the problematic PostgreSQL query?

A. Create an Amazon CloudWatch dashboard to show the number of connections, CPU usage, and disk space consumption. Watch these dashboards during the next slow period.
B. Launch an Amazon EC2 instance, and install and configure an open-source PostgreSQL monitoring tool that will run reports based on the output error logs.
C. Modify the logging database parameter to log all the queries related to locking in the database and then check the logs after the next slow period for this information.
D. Enable Amazon RDS Performance Insights on the PostgreSQL database. Use the metrics to identify any queries that are related to spikes in the graph during the next slow period.
Suggested answer: D

Explanation:

Amazon RDS Performance Insights visualizes database load and breaks it down by top SQL statements and wait events, so the Database Specialist can identify the problematic query during the next slow period without deploying any additional infrastructure.
A company has a web-based survey application that uses Amazon DynamoDB. During peak usage, when survey responses are being collected, a Database Specialist sees the ProvisionedThroughputExceededException error.

What can the Database Specialist do to resolve this error? (Choose two.)

A. Change the table to use Amazon DynamoDB Streams
B. Purchase DynamoDB reserved capacity in the affected Region
C. Increase the write capacity units for the specific table
D. Change the table capacity mode to on-demand
E. Change the table type to throughput optimized
Suggested answer: C, D

Explanation:

In provisioned capacity mode, DynamoDB throttles requests once consumed throughput exceeds the provisioned write capacity units, so raising the WCUs or switching the table to on-demand capacity mode resolves the error.

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/switching.capacitymode.html
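To see why increasing write capacity units (answer C) addresses the throttling, recall that in provisioned mode each standard write consumes one WCU per 1 KB of item size (rounded up). A rough sizing sketch, with invented workload numbers:

```python
import math

def required_wcu(writes_per_second, item_size_kb):
    # Each standard (non-transactional) write consumes
    # ceil(item_size / 1 KB) write capacity units.
    wcu_per_write = math.ceil(item_size_kb)
    return writes_per_second * wcu_per_write

# 500 survey responses/sec at 2.5 KB each -> 1500 WCUs needed.
print(required_wcu(500, 2.5))
```

If peak traffic is spiky or hard to forecast, on-demand mode (answer D) sidesteps this capacity planning entirely.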

A company is running a two-tier ecommerce application in one AWS account. The web server is deployed using an Amazon RDS for MySQL Multi-AZ DB instance. A Developer mistakenly deleted the database in the production environment. The database has been restored, but this resulted in hours of downtime and lost revenue.

Which combination of changes in existing IAM policies should a Database Specialist make to prevent an error like this from happening in the future? (Choose three.)

A. Grant least privilege to groups, users, and roles
B. Allow all users to restore a database from a backup that will reduce the overall downtime to restore the database
C. Enable multi-factor authentication for sensitive operations to access sensitive resources and API operations
D. Use policy conditions to restrict access to selective IP addresses
E. Use AccessList Controls policy type to restrict users for database instance deletion
F. Enable AWS CloudTrail logging and Enhanced Monitoring
Suggested answer: A, C, D

Explanation:


https://aws.amazon.com/blogs/database/using-iam-multifactor-authentication-with-amazon-rds/

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/security_iam_id-based-policy.html

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/DataDurability.html

A company is building a new web platform where user requests trigger an AWS Lambda function that performs an insert into an Amazon Aurora MySQL DB cluster. Initial tests with fewer than 10 users on the new platform yielded successful execution and fast response times. However, upon more extensive tests with the actual target of 3,000 concurrent users, the Lambda functions are unable to connect to the DB cluster and receive "too many connections" errors.

Which of the following will resolve this issue?

A. Edit the my.cnf file for the DB cluster to increase max_connections
B. Increase the instance size of the DB cluster
C. Change the DB cluster to Multi-AZ
D. Increase the number of Aurora Replicas
Suggested answer: B

Explanation:


The default value of max_connections is computed by a formula in the RDS parameter group:

GREATEST({log(DBInstanceClassMemory/805306368)*45},{log(DBInstanceClassMemory/8187281408)*1000})

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Managing.Performance.html

You can increase the maximum number of connections to your Aurora MySQL DB instance by scaling the instance up to a DB instance class with more memory, or by setting a larger value for the max_connections parameter in the DB parameter group for your instance, up to 16,000. Any change must be made through the DB parameter group, not by editing my.cnf, because you do not have access to the physical server hosting MySQL.
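The formula above can be sketched in Python to show why scaling up the instance (answer B) raises the default connection ceiling. This is a hedged illustration: DBInstanceClassMemory is approximated by the raw instance memory (in practice AWS subtracts OS and process overhead), and the parameter-group log() function is assumed here to be a base-2 logarithm:

```python
import math

# Sketch of the default Aurora MySQL max_connections formula:
# GREATEST({log(DBInstanceClassMemory/805306368)*45},
#          {log(DBInstanceClassMemory/8187281408)*1000})
def default_max_connections(db_instance_class_memory_bytes):
    a = math.log2(db_instance_class_memory_bytes / 805306368) * 45
    b = math.log2(db_instance_class_memory_bytes / 8187281408) * 1000
    return max(a, b)

# More memory -> a higher default max_connections, so moving to a larger
# instance class resolves the "too many connections" errors.
print(round(default_max_connections(16 * 1024**3)))  # ~16 GiB class
print(round(default_max_connections(64 * 1024**3)))  # ~64 GiB class
```

For very high fan-out from Lambda, pooling connections (for example with RDS Proxy) is a complementary mitigation, though it is not among the answer choices.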

Total 321 questions