Amazon DBS-C01 Practice Test - Questions Answers, Page 27


A company has an ecommerce website that runs on AWS. The website uses an Amazon RDS for MySQL database. A database specialist wants to enforce the use of temporary credentials to access the database.

Which solution will meet this requirement?

A. Use MySQL native database authentication.
B. Use AWS Secrets Manager to rotate the credentials.
C. Use AWS Identity and Access Management (IAM) database authentication.
D. Use AWS Systems Manager Parameter Store for authentication.
Suggested answer: C

Explanation:

With IAM database authentication, the application connects by using a short-lived authentication token instead of a password. Each token is generated with AWS credentials and is valid for 15 minutes, so no long-term database credentials need to be stored or rotated.


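As a sketch of option C in practice: the application requests a short-lived, signed authentication token and passes it to the MySQL driver in place of a password. The endpoint, port, and user name below are hypothetical examples.

```python
# Parameters for a temporary IAM auth token (all values are hypothetical).
auth_params = {
    "DBHostname": "mydb.abc123xyz.us-east-1.rds.amazonaws.com",
    "Port": 3306,
    "DBUsername": "app_user",  # a DB user created for IAM authentication
}

# With boto3 installed and AWS credentials configured, the call would be:
#   token = boto3.client("rds").generate_db_auth_token(**auth_params)
# The token is valid for 15 minutes and is supplied to the MySQL client
# library as the password; no long-term credential is stored anywhere.
```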
A manufacturing company has an inventory system that stores information in an Amazon Aurora MySQL DB cluster. The database tables are partitioned. The database size has grown to 3 TB. Users run one-time queries by using a SQL client. Queries that use an equijoin to join large tables are taking a long time to run.

Which action will improve query performance with the LEAST operational effort?

A. Migrate the database to a new Amazon Redshift data warehouse.
B. Enable hash joins on the database by setting the variable optimizer_switch to hash_join=on.
C. Take a snapshot of the DB cluster. Create a new DB instance by using the snapshot, and enable parallel query mode.
D. Add an Aurora read replica.
Suggested answer: B

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.BestPractices.html
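A sketch of how option B could be applied, assuming a custom DB cluster parameter group is attached to the cluster (the group name here is hypothetical):

```python
# Parameter change that turns on hash joins for Aurora MySQL.
hash_join_change = {
    "DBClusterParameterGroupName": "my-aurora-mysql-params",  # hypothetical
    "Parameters": [
        {
            "ParameterName": "optimizer_switch",
            "ParameterValue": "hash_join=on",
            "ApplyMethod": "immediate",  # dynamic variable, no reboot required
        }
    ],
}

# With boto3:
#   boto3.client("rds").modify_db_cluster_parameter_group(**hash_join_change)
# For a single session, the equivalent SQL is:
#   SET optimizer_switch = 'hash_join=on';
```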

A company is running a business-critical application on premises by using Microsoft SQL Server. A database specialist is planning to migrate the instance with several databases to the AWS Cloud. The database specialist will use SQL Server Standard edition hosted on Amazon EC2 Windows instances.

The solution must provide high availability and must avoid a single point of failure in the SQL Server deployment architecture. Which solution will meet these requirements?

A. Create Amazon RDS for SQL Server Multi-AZ DB instances. Use Amazon S3 as a shared storage option to host the databases.
B. Set up Always On Failover Cluster Instances as a single SQL Server instance. Use Multi-AZ Amazon FSx for Windows File Server as a shared storage option to host the databases.
C. Set up Always On availability groups to group one or more user databases that fail over together across multiple SQL Server instances. Use Multi-AZ Amazon FSx for Windows File Server as a shared storage option to host the databases.
D. Create an Application Load Balancer to distribute database traffic across multiple EC2 instances in multiple Availability Zones. Use Amazon S3 as a shared storage option to host the databases.
Suggested answer: B

Explanation:


https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-sql-server/ec2-fci.html

An FCI is generally preferable to an Always On availability group when you're using SQL Server Standard edition instead of Enterprise edition.

A company plans to use AWS Database Migration Service (AWS DMS) to migrate its database from one Amazon EC2 instance to another EC2 instance as a full load task. The company wants the database to be inactive during the migration. The company will use a dms.t3.medium instance to perform the migration and will use the default settings for the migration.

Which solution will MOST improve the performance of the data migration?

A. Increase the number of tables that are loaded in parallel.
B. Drop all indexes on the source tables.
C. Change the processing mode from the batch optimized apply option to transactional mode.
D. Enable Multi-AZ on the target database while the full load task is in progress.
Suggested answer: B

Explanation:


https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html#CHAP_BestPractices.Performance

For a full load task, we recommend that you drop primary key indexes, secondary indexes, referential integrity constraints, and data manipulation language (DML) triggers. Or you can delay their creation until after the full load tasks are complete. You don't need indexes during a full load task, and indexes incur maintenance overhead if they are present. Because the full load task loads groups of tables at a time, referential integrity constraints are violated. Similarly, insert, update, and delete triggers can cause errors, for example if a row insert is triggered for a previously bulk loaded table. Other types of triggers also affect performance due to added processing.

https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html
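A minimal sketch of the recommended preparation: drop secondary indexes on large tables before the full load and recreate them afterward. The table and index names below are hypothetical.

```python
def index_migration_sql(table, indexes):
    """Build MySQL-style statements to drop secondary indexes before a DMS
    full load and recreate them afterward.

    `indexes` maps an index name to its column list, for example
    {"idx_orders_customer": "customer_id"} (hypothetical names).
    """
    drops = [f"ALTER TABLE {table} DROP INDEX {name};" for name in indexes]
    creates = [
        f"ALTER TABLE {table} ADD INDEX {name} ({cols});"
        for name, cols in indexes.items()
    ]
    return drops, creates


drops, creates = index_migration_sql("orders", {"idx_orders_customer": "customer_id"})
# drops   -> ["ALTER TABLE orders DROP INDEX idx_orders_customer;"]
# creates -> ["ALTER TABLE orders ADD INDEX idx_orders_customer (customer_id);"]
```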

A finance company migrated its 3 TB on-premises PostgreSQL database to an Amazon Aurora PostgreSQL DB cluster. During a review after the migration, a database specialist discovers that the database is not encrypted at rest. The database must be encrypted at rest as soon as possible to meet security requirements. The database specialist must enable encryption for the DB cluster with minimal downtime.

Which solution will meet these requirements?

A. Modify the unencrypted DB cluster using the AWS Management Console. Enable encryption and choose to apply the change immediately.
B. Take a snapshot of the unencrypted DB cluster and restore it to a new DB cluster with encryption enabled. Update any database connection strings to reference the new DB cluster endpoint, and then delete the unencrypted DB cluster.
C. Create an encrypted Aurora Replica of the unencrypted DB cluster. Promote the Aurora Replica as the new master.
D. Create a new DB cluster with encryption enabled and use the pg_dump and pg_restore utilities to load data to the new DB cluster. Update any database connection strings to reference the new DB cluster endpoint, and then delete the unencrypted DB cluster.
Suggested answer: B

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Overview.Encryption.html
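A sketch of the snapshot-and-restore path in option B, assuming a snapshot of the unencrypted cluster already exists (the identifiers and key alias are hypothetical):

```python
# Supplying a KMS key on restore produces an encrypted copy of the cluster.
restore_params = {
    "DBClusterIdentifier": "aurora-pg-encrypted",        # hypothetical
    "SnapshotIdentifier": "aurora-pg-unencrypted-snap",  # hypothetical
    "Engine": "aurora-postgresql",
    "KmsKeyId": "alias/aws/rds",  # default RDS key; a CMK alias also works
}

# With boto3:
#   boto3.client("rds").restore_db_cluster_from_snapshot(**restore_params)
# After the restore, repoint application connection strings to the new
# cluster endpoint and delete the unencrypted cluster.
```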

A company has an on-premises Oracle Real Application Clusters (RAC) database. The company wants to migrate the database to AWS and reduce licensing costs. The company's application team wants to store JSON payloads that expire after 28 hours. The company has development capacity if code changes are required.

Which solution meets these requirements?

A. Use Amazon DynamoDB and leverage the Time to Live (TTL) feature to automatically expire the data.
B. Use Amazon RDS for Oracle with Multi-AZ. Create an AWS Lambda function to purge the expired data. Schedule the Lambda function to run daily using Amazon EventBridge.
C. Use Amazon DocumentDB with a read replica in a different Availability Zone. Use DocumentDB change streams to expire the data.
D. Use Amazon Aurora PostgreSQL with Multi-AZ and leverage the Time to Live (TTL) feature to automatically expire the data.
Suggested answer: A

Explanation:

DynamoDB eliminates Oracle licensing costs, stores JSON payloads natively, and its Time to Live (TTL) feature deletes items automatically once a per-item expiry timestamp passes, with no scheduled cleanup code to maintain.


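A sketch of the TTL setup in option A: enable TTL on the table (the table name and attribute name are hypothetical), then write each payload with an expiry timestamp 28 hours ahead.

```python
import time

# Each item carries an epoch-seconds expiry attribute 28 hours in the future.
EXPIRY_SECONDS = 28 * 60 * 60
expires_at = int(time.time()) + EXPIRY_SECONDS

# One-time table configuration (names are hypothetical):
ttl_params = {
    "TableName": "payment-payloads",
    "TimeToLiveSpecification": {"Enabled": True, "AttributeName": "expires_at"},
}

# With boto3:
#   boto3.client("dynamodb").update_time_to_live(**ttl_params)
# DynamoDB then deletes each item automatically after its expires_at passes.
```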
A database specialist is working on an Amazon RDS for PostgreSQL DB instance that is experiencing application performance issues due to the addition of new workloads. The database has 5 TB of storage space with Provisioned IOPS.

Amazon CloudWatch metrics show that the average disk queue depth is greater than 200 and that the disk I/O response time is significantly higher than usual.

What should the database specialist do to improve the performance of the application immediately?

A. Increase the Provisioned IOPS rate on the storage.
B. Increase the available storage space.
C. Use General Purpose SSD (gp2) storage with burst credits.
D. Create a read replica to offload Read IOPS from the DB instance.
Suggested answer: A

Explanation:

A sustained disk queue depth far above normal together with high I/O response times indicates the workload is constrained by the provisioned IOPS rate. Increasing the Provisioned IOPS rate raises the throughput ceiling in place, without a migration or storage type change.

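A sketch of option A, assuming the instance already uses Provisioned IOPS storage (the identifier and target rate are hypothetical examples; some instance configurations may also require adjusting allocated storage):

```python
# Raise the provisioned IOPS rate in place; no storage type change needed.
iops_params = {
    "DBInstanceIdentifier": "postgres-prod",  # hypothetical
    "Iops": 20000,                            # example target rate
    "ApplyImmediately": True,
}

# With boto3:
#   boto3.client("rds").modify_db_instance(**iops_params)
```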

A software company uses an Amazon RDS for MySQL Multi-AZ DB instance as a data store for its critical applications. During an application upgrade process, a database specialist runs a custom SQL script that accidentally removes some of the default permissions of the master user.

What is the MOST operationally efficient way to restore the default permissions of the master user?

A. Modify the DB instance and set a new master user password.
B. Use AWS Secrets Manager to modify the master user password and restart the DB instance.
C. Create a new master user for the DB instance.
D. Review the IAM user that owns the DB instance, and add missing permissions.
Suggested answer: A

Explanation:

If the permissions for the master user are accidentally revoked, they can be restored by modifying the DB instance and setting a new master user password. No new user, restart, or IAM change is required.

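A sketch of option A (the identifier is hypothetical; the password placeholder should come from a secret store, never source code):

```python
# Setting a new master user password restores the master user's
# default permissions on RDS for MySQL.
reset_params = {
    "DBInstanceIdentifier": "mysql-prod",         # hypothetical
    "MasterUserPassword": "<new-password-here>",  # placeholder only
    "ApplyImmediately": True,
}

# With boto3:
#   boto3.client("rds").modify_db_instance(**reset_params)
```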

An ecommerce company uses Amazon DynamoDB as the backend for its payments system. A new regulation requires the company to log all data access requests for financial audits. For this purpose, the company plans to use AWS logging and save the logs to Amazon S3.

How can a database specialist activate logging on the database?

A. Use AWS CloudTrail to monitor DynamoDB control-plane operations. Create a DynamoDB stream to monitor data-plane operations. Pass the stream to Amazon Kinesis Data Streams. Use that stream as a source for Amazon Kinesis Data Firehose to store the data in an Amazon S3 bucket.
B. Use AWS CloudTrail to monitor DynamoDB data-plane operations. Create a DynamoDB stream to monitor control-plane operations. Pass the stream to Amazon Kinesis Data Streams. Use that stream as a source for Amazon Kinesis Data Firehose to store the data in an Amazon S3 bucket.
C. Create two trails in AWS CloudTrail. Use Trail1 to monitor DynamoDB control-plane operations. Use Trail2 to monitor DynamoDB data-plane operations.
D. Use AWS CloudTrail to monitor DynamoDB data-plane and control-plane operations.
Suggested answer: D

Explanation:


https://aws.amazon.com/about-aws/whats-new/2021/04/you-now-can-use-aws-cloudtrail-to-logamazon-dynamodb-streams-da/
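A sketch of option D, assuming a trail already delivers to the audit S3 bucket (the trail name is hypothetical). Management events cover control-plane operations; a DynamoDB data-resource selector adds item-level, data-plane events to the same trail.

```python
# Event selectors covering both control-plane and data-plane operations.
event_selector_params = {
    "TrailName": "dynamodb-audit-trail",  # hypothetical
    "EventSelectors": [
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,  # control-plane calls
            "DataResources": [
                {
                    "Type": "AWS::DynamoDB::Table",
                    "Values": ["arn:aws:dynamodb"],  # all tables in the account
                }
            ],
        }
    ],
}

# With boto3:
#   boto3.client("cloudtrail").put_event_selectors(**event_selector_params)
```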

A vehicle insurance company needs to choose a highly available database to track vehicle owners and their insurance details. The persisted data should be immutable in the database, including the complete and sequenced history of changes over time with all the owners and insurance transfer details for a vehicle.

The data should be easily verifiable for the data lineage of an insurance claim.

Which approach meets these requirements with MINIMAL effort?

A. Create a blockchain to store the insurance details. Validate the data using a hash function to verify the data lineage of an insurance claim.
B. Create an Amazon DynamoDB table to store the insurance details. Validate the data using AWS DMS validation by moving the data to Amazon S3 to verify the data lineage of an insurance claim.
C. Create an Amazon QLDB ledger to store the insurance details. Validate the data by choosing the ledger name in the digest request to verify the data lineage of an insurance claim.
D. Create an Amazon Aurora database to store the insurance details. Validate the data using AWS DMS validation by moving the data to Amazon S3 to verify the data lineage of an insurance claim.
Suggested answer: C

Explanation:

Amazon QLDB provides an immutable, append-only journal with a complete and sequenced history of every change, and it is a managed service, so it meets the requirements with minimal effort compared with building a blockchain or custom validation pipeline. Requesting a digest for the ledger lets you cryptographically verify any document revision.


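A sketch of the verification step in option C (the ledger name is hypothetical):

```python
# Request a digest by ledger name; the digest is a SHA-256 hash that
# covers the ledger's entire journal at a point in time.
digest_params = {"Name": "vehicle-insurance-ledger"}  # hypothetical

# With boto3:
#   response = boto3.client("qldb").get_digest(**digest_params)
# The returned digest, together with a revision's proof hashes, verifies
# the complete lineage of an insurance record.
```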