Amazon DBS-C01 Practice Test - Questions Answers

A company is looking to migrate a 1 TB Oracle database from on-premises to an Amazon Aurora PostgreSQL DB cluster. The company's Database Specialist discovered that the Oracle database is storing 100 GB of large binary objects (LOBs) across multiple tables. The Oracle database has a maximum LOB size of 500 MB with an average LOB size of 350 MB. The Database Specialist has chosen AWS DMS to migrate the data, using the largest replication instance.

How should the Database Specialist optimize the database migration using AWS DMS?

A. Create a single task using full LOB mode with a LOB chunk size of 500 MB to migrate the data and LOBs together
B. Create two tasks: task1 with LOB tables using full LOB mode with a LOB chunk size of 500 MB and task2 without LOBs
C. Create two tasks: task1 with LOB tables using limited LOB mode with a maximum LOB size of 500 MB and task2 without LOBs
D. Create a single task using limited LOB mode with a maximum LOB size of 500 MB to migrate data and LOBs together
Suggested answer: C

Explanation:
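
Limited LOB mode with the maximum LOB size set to cover the largest LOB (500 MB) is faster than full LOB mode, and keeping the LOB tables in their own task lets the rest of the data migrate without being slowed down. Below is a minimal boto3 sketch of how task1 (the LOB-table task) might be created; the ARNs and the table-mapping rule are placeholders, and the assumption that LobMaxSize is specified in KB should be confirmed against the DMS task settings documentation.

import json
import boto3

dms = boto3.client('dms')

# Task settings for task1 (LOB tables only): limited LOB mode with a 500 MB cap.
task_settings = {
    "TargetMetadata": {
        "FullLobMode": False,
        "LimitedSizeLobMode": True,
        "LobMaxSize": 512000  # assumed to be in KB, so 512000 KB = 500 MB
    }
}

# Selection rule covering only the schemas/tables that hold LOB columns (placeholder names).
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "lob-tables",
        "object-locator": {"schema-name": "APP", "table-name": "DOCUMENTS"},
        "rule-action": "include"
    }]
}

dms.create_replication_task(
    ReplicationTaskIdentifier='task1-lob-tables',
    SourceEndpointArn='arn:aws:dms:...:endpoint:oracle-source',    # placeholder
    TargetEndpointArn='arn:aws:dms:...:endpoint:aurora-target',    # placeholder
    ReplicationInstanceArn='arn:aws:dms:...:rep:instance',         # placeholder
    MigrationType='full-load',
    TableMappings=json.dumps(table_mappings),
    ReplicationTaskSettings=json.dumps(task_settings)
)

task2 would be an identical call with a selection rule that excludes the LOB tables and no LOB-specific settings.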


A Database Specialist is designing a disaster recovery strategy for a production Amazon DynamoDB table. The table uses provisioned read/write capacity mode, global secondary indexes, and time to live (TTL). The Database Specialist has restored the latest backup to a new table.

To prepare the new table with identical settings, which steps should be performed? (Choose two.)

A. Re-create global secondary indexes in the new table
B. Define IAM policies for access to the new table
C. Define the TTL settings
D. Encrypt the table from the AWS Management Console or use the update-table command
E. Set the provisioned read and write capacity
Suggested answer: B, C

Explanation:


The following items need to be reconfigured after restoring the DynamoDB table.

--AutoScaling policy

--IAM policy

--CloudWatch settings

--Tags

--Stream settings

--TTL

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/backuprestore_HowItWorks.html
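
Two of those items map directly to answers B and C: IAM policies have to be written against the new table's ARN, and TTL has to be switched back on. A minimal boto3 sketch of re-enabling TTL on the restored table is below; the table name and the TTL attribute name ('expires_at') are placeholders.

import boto3

dynamodb = boto3.client('dynamodb')

# TTL settings are not carried over by a restore and must be defined again on the new table.
dynamodb.update_time_to_live(
    TableName='restored-table',
    TimeToLiveSpecification={
        'Enabled': True,
        'AttributeName': 'expires_at'
    }
)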

A Database Specialist is creating Amazon DynamoDB tables, Amazon CloudWatch alarms, and associated infrastructure for an Application team using a development AWS account. The team wants a deployment method that will standardize the core solution components while managing environment-specific settings separately, and wants to minimize rework due to configuration errors.

Which process should the Database Specialist recommend to meet these requirements?

A. Organize common and environment-specific parameters hierarchically in the AWS Systems Manager Parameter Store, then reference the parameters dynamically from an AWS CloudFormation template. Deploy the CloudFormation stack using the environment name as a parameter.
B. Create a parameterized AWS CloudFormation template that builds the required objects. Keep separate environment parameter files in separate Amazon S3 buckets. Provide an AWS CLI command that deploys the CloudFormation stack directly referencing the appropriate parameter bucket.
C. Create a parameterized AWS CloudFormation template that builds the required objects. Import the template into the CloudFormation interface in the AWS Management Console. Make the required changes to the parameters and deploy the CloudFormation stack.
D. Create an AWS Lambda function that builds the required objects using an AWS SDK. Set the required parameter values in a test event in the Lambda console for each environment that the Application team can modify, as needed. Deploy the infrastructure by triggering the test event in the console.
Suggested answer: A

Explanation:


https://aws.amazon.com/blogs/mt/integrating-aws-cloudformation-with-aws-systems-manager-parameter-store/
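
A minimal sketch of the deployment step, assuming the template declares parameters of type AWS::SSM::Parameter::Value<String> so that CloudFormation resolves the hierarchical Parameter Store entries (for example /myapp/<environment>/...) at deploy time. The stack name, template URL, and parameter paths are illustrative only.

import boto3

env = 'dev'  # environment name passed in at deploy time

cloudformation = boto3.client('cloudformation')
cloudformation.create_stack(
    StackName=f'dynamodb-monitoring-{env}',
    TemplateURL='https://example-bucket.s3.amazonaws.com/template.yaml',  # placeholder
    Parameters=[
        # Plain String parameter used by the template for naming and tagging.
        {'ParameterKey': 'Environment', 'ParameterValue': env},
        # For an AWS::SSM::Parameter::Value<String> parameter, the value passed is the
        # Parameter Store name; CloudFormation fetches the actual value during deployment.
        {'ParameterKey': 'TableReadCapacity', 'ParameterValue': f'/myapp/{env}/table-read-capacity'},
    ],
)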

A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL Multi-AZ DB instance. Tests were run on the database after work hours, which generated additional database logs. The free storage of the RDS DB instance is low due to these additional logs.

What should the company do to address this space constraint issue?

A. Log in to the host and run the rm $PGDATA/pg_logs/* command
B. Modify the rds.log_retention_period parameter to 1440 and wait up to 24 hours for database logs to be deleted
C. Create a ticket with AWS Support to have the logs deleted
D. Run the SELECT rds_rotate_error_log() stored procedure to rotate the logs
Suggested answer: B

Explanation:


To set the retention period for system logs, use the rds.log_retention_period parameter. You can find rds.log_retention_period in the DB parameter group associated with your DB instance. The unit for this parameter is minutes. For example, a setting of 1,440 retains logs for one day. The default value is 4,320 (three days). The maximum value is 10,080 (seven days).

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.Concepts.PostgreSQL.html
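
A minimal boto3 sketch of answer B, assuming the instance uses a custom DB parameter group (the name below is a placeholder); rds.log_retention_period is a dynamic parameter, so an immediate apply method should take effect without a reboot.

import boto3

rds = boto3.client('rds')

# Retain PostgreSQL logs for 1440 minutes (one day); older log files are removed
# automatically, which frees the storage consumed by the extra test-run logs.
rds.modify_db_parameter_group(
    DBParameterGroupName='custom-postgres-params',
    Parameters=[{
        'ParameterName': 'rds.log_retention_period',
        'ParameterValue': '1440',
        'ApplyMethod': 'immediate'
    }]
)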

A user has a non-relational key-value database. The user is looking for a fully managed AWS service that will offload the administrative burdens of operating and scaling distributed databases. The solution must be cost-effective and able to handle unpredictable application traffic.

What should a Database Specialist recommend for this user?

A. Create an Amazon DynamoDB table with provisioned capacity mode
B. Create an Amazon DocumentDB cluster
C. Create an Amazon DynamoDB table with on-demand capacity mode
D. Create an Amazon Aurora Serverless DB cluster
Suggested answer: C

Explanation:


Reference: https://aws.amazon.com/dynamodb/

Key-value database -> DynamoDB. Capable of dealing with unexpected application traffic -> on-demand capacity mode. A key-value database is a type of nonrelational database that uses a simple key-value method to store data. A key-value database stores data as a collection of key-value pairs in which a key serves as a unique identifier. On-demand mode is a good option to create new tables with unknown workloads.

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks.OnDemand
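
A minimal boto3 sketch of answer C; the table name and key attribute are placeholders.

import boto3

dynamodb = boto3.client('dynamodb')

# BillingMode='PAY_PER_REQUEST' selects on-demand capacity: no read/write capacity
# units to manage, and unpredictable traffic is absorbed automatically.
dynamodb.create_table(
    TableName='app-data',
    AttributeDefinitions=[{'AttributeName': 'pk', 'AttributeType': 'S'}],
    KeySchema=[{'AttributeName': 'pk', 'KeyType': 'HASH'}],
    BillingMode='PAY_PER_REQUEST'
)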

A gaming company is designing a mobile gaming app that will be accessed by many users across the globe. The company wants to have replication and full support for multi-master writes. The company also wants to ensure low latency and consistent performance for app users.

Which solution meets these requirements?

A. Use Amazon DynamoDB global tables for storage and enable DynamoDB automatic scaling
B. Use Amazon Aurora for storage and enable cross-Region Aurora Replicas
C. Use Amazon Aurora for storage and cache the user content with Amazon ElastiCache
D. Use Amazon Neptune for storage
Suggested answer: A

Explanation:


Reference: https://aws.amazon.com/blogs/database/how-to-use-amazon-dynamodb-global-tables-to-power-multiregion-architectures/
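
A minimal boto3 sketch of turning an existing table into a global table by adding a replica Region (global tables version 2019.11.21); the table name and Regions are placeholders, and DynamoDB auto scaling would be configured separately through Application Auto Scaling.

import boto3

dynamodb = boto3.client('dynamodb', region_name='us-east-1')

# Adding a replica Region makes the table a multi-Region, multi-active global table,
# so players near that Region read and write with local latency.
dynamodb.update_table(
    TableName='game-state',
    ReplicaUpdates=[
        {'Create': {'RegionName': 'eu-west-1'}}
    ]
)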

A Database Specialist needs to speed up any failover that might occur on an Amazon Aurora PostgreSQL DB cluster. The Aurora DB cluster currently includes the primary instance and three Aurora Replicas.

How can the Database Specialist ensure that failovers occur with the least amount of downtime for the application?

A. Set the TCP keepalive parameters low
B. Call the AWS CLI failover-db-cluster command
C. Enable Enhanced Monitoring on the DB cluster
D. Start a database activity stream on the DB cluster
Suggested answer: A

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.BestPractices.html#AuroraPostgreSQL.BestPractices.FastFailover.TCPKeepalives
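
The fast-failover guidance centers on aggressive TCP keepalives so clients drop dead connections to the old primary quickly. One way to apply them from a Python client is through the libpq keepalive connection parameters, as in this sketch; the endpoint, credentials, and timing values are illustrative only.

import psycopg2

# Low keepalive values let the driver detect a failed primary within a few seconds
# instead of waiting for the operating system defaults.
conn = psycopg2.connect(
    host='my-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com',  # placeholder endpoint
    dbname='appdb',
    user='appuser',
    password='example-password',
    connect_timeout=2,
    keepalives=1,
    keepalives_idle=1,
    keepalives_interval=1,
    keepalives_count=5
)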

A Database Specialist needs to define a database migration strategy to migrate an on-premises Oracle database to an Amazon Aurora MySQL DB cluster. The company requires near-zero downtime for the data migration. The solution must also be cost-effective.

Which approach should the Database Specialist take?

A. Dump all the tables from the Oracle database into an Amazon S3 bucket using datapump (expdp). Run data transformations in AWS Glue. Load the data from the S3 bucket to the Aurora DB cluster.
B. Order an AWS Snowball appliance and copy the Oracle backup to the Snowball appliance. Once the Snowball data is delivered to Amazon S3, create a new Aurora DB cluster. Enable the S3 integration to migrate the data directly from Amazon S3 to Amazon RDS.
C. Use the AWS Schema Conversion Tool (AWS SCT) to help rewrite database objects to MySQL during the schema migration. Use AWS DMS to perform the full load and change data capture (CDC) tasks.
D. Use AWS Server Migration Service (AWS SMS) to import the Oracle virtual machine image as an Amazon EC2 instance. Use the Oracle Logical Dump utility to migrate the Oracle data from Amazon EC2 to an Aurora DB cluster.
Suggested answer: C

Explanation:


https://aws.amazon.com/blogs/database/migrating-oracle-databases-with-near-zero-downtime-using-aws-dms/
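
After AWS SCT has converted the schema, the near-zero-downtime part comes from running the DMS task as full load plus ongoing replication (CDC) and cutting over once the target is in sync. A minimal boto3 sketch, with placeholder ARNs and table mapping:

import json
import boto3

dms = boto3.client('dms')

# 'full-load-and-cdc' copies the existing data and then keeps applying source changes,
# so the application can switch to Aurora MySQL with minimal downtime.
dms.create_replication_task(
    ReplicationTaskIdentifier='oracle-to-aurora-mysql',
    SourceEndpointArn='arn:aws:dms:...:endpoint:oracle-source',   # placeholder
    TargetEndpointArn='arn:aws:dms:...:endpoint:aurora-target',   # placeholder
    ReplicationInstanceArn='arn:aws:dms:...:rep:instance',        # placeholder
    MigrationType='full-load-and-cdc',
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "all-tables",
            "object-locator": {"schema-name": "APP", "table-name": "%"},
            "rule-action": "include"
        }]
    })
)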

A marketing company is using Amazon DocumentDB and requires that database audit logs be enabled. A Database Specialist needs to configure monitoring so that all data definition language (DDL) statements performed are visible to the Administrator. The Database Specialist has set the audit_logs parameter to enabled in the cluster parameter group.

What should the Database Specialist do to automatically collect the database logs for the Administrator?

A. Enable DocumentDB to export the logs to Amazon CloudWatch Logs
B. Enable DocumentDB to export the logs to AWS CloudTrail
C. Enable DocumentDB Events to export the logs to Amazon CloudWatch Logs
D. Configure an AWS Lambda function to download the logs using the download-db-log-file-portion operation and store the logs in Amazon S3
Suggested answer: A

Explanation:


https://docs.aws.amazon.com/documentdb/latest/developerguide/event-auditing.html

Auditing Amazon DocumentDB Events: With Amazon DocumentDB (with MongoDB compatibility), you can audit events that were performed in your cluster. Examples of logged events include successful and failed authentication attempts, dropping a collection in a database, or creating an index. By default, auditing is disabled on Amazon DocumentDB and requires that you opt in to use this feature.

When auditing is enabled, Amazon DocumentDB records Data Definition Language (DDL), authentication, authorization, and user management events to Amazon CloudWatch Logs. When auditing is enabled, Amazon DocumentDB exports your cluster’s auditing records (JSON documents) to Amazon CloudWatch Logs. You can use Amazon CloudWatch Logs to analyze, monitor, and archive your Amazon DocumentDB auditing events.
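
Setting audit_logs in the cluster parameter group turns auditing on, but the cluster also has to be told to export the audit log type to CloudWatch Logs before the Administrator sees anything. A minimal boto3 sketch of that export step; the cluster identifier is a placeholder.

import boto3

docdb = boto3.client('docdb')

# Export the 'audit' log type so DDL, authentication, and user-management events
# land in CloudWatch Logs automatically.
docdb.modify_db_cluster(
    DBClusterIdentifier='marketing-docdb-cluster',
    CloudwatchLogsExportConfiguration={'EnableLogTypes': ['audit']},
    ApplyImmediately=True
)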

A company is looking to move an on-premises IBM Db2 database running on AIX on an IBM POWER7 server. Due to escalating support and maintenance costs, the company is exploring the option of moving the workload to an Amazon Aurora PostgreSQL DB cluster.

What is the quickest way for the company to gather data on the migration compatibility?

A. Perform a logical dump from the Db2 database and restore it to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing row counts from source and target tables.
B. Run AWS DMS from the Db2 database to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing the row counts from source and target tables.
C. Run native PostgreSQL logical replication from the Db2 database to an Aurora DB cluster to evaluate the migration compatibility.
D. Run the AWS Schema Conversion Tool (AWS SCT) from the Db2 database to an Aurora DB cluster. Create a migration assessment report to evaluate the migration compatibility.
Suggested answer: D

Explanation:


Reference: https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/Schema-Conversion-Tool.pdf

• Converts DB/DW schema from source to target (including procedures / views / secondary indexes / FK and constraints)

• Mainly for heterogeneous DB migrations and DW migrations
