Amazon DBS-C01 Practice Test - Questions Answers, Page 14

A retail company manages a web application that stores data in an Amazon DynamoDB table. The company is undergoing account consolidation efforts. A database engineer needs to migrate the DynamoDB table from the current AWS account to a new AWS account.

Which strategy meets these requirements with the LEAST amount of administrative work?

A.
Use AWS Glue to crawl the data in the DynamoDB table. Create a job using an available blueprint to export the data to Amazon S3. Import the data from the S3 file to a DynamoDB table in the new account.
B.
Create an AWS Lambda function to scan the items of the DynamoDB table in the current account and write to a file in Amazon S3. Create another Lambda function to read the S3 file and restore the items of a DynamoDB table in the new account.
C.
Use AWS Data Pipeline in the current account to export the data from the DynamoDB table to a file in Amazon S3. Use Data Pipeline to import the data from the S3 file to a DynamoDB table in the new account.
D.
Configure Amazon DynamoDB Streams for the DynamoDB table in the current account. Create an AWS Lambda function to read from the stream and write to a file in Amazon S3. Create another Lambda function to read the S3 file and restore the items to a DynamoDB table in the new account.
Suggested answer: C

Explanation:


https://aws.amazon.com/premiumsupport/knowledge-center/dynamodb-cross-account-migration/

https://aws.amazon.com/premiumsupport/knowledge-center/data-pipeline-account-access-dynamodb-s3/
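
As a rough illustration of option C, the boto3 sketch below creates and activates a Data Pipeline in the source account. The pipeline name, the local JSON file, and the definition it loads are assumptions; the actual export/import objects would come from the AWS-provided "Export DynamoDB table to S3" template referenced in the links above.

```python
# Hypothetical sketch: driving the Data Pipeline export with boto3.
# The pipeline definition (EmrCluster, EmrActivity, DynamoDBDataNode,
# S3DataNode objects) is assumed to be exported from the AWS-provided
# "Export DynamoDB table to S3" template into a local JSON file.
import json
import boto3

dp = boto3.client("datapipeline", region_name="us-east-1")

# Create an empty pipeline shell in the source account.
pipeline = dp.create_pipeline(
    name="ddb-export-to-s3",          # assumed pipeline name
    uniqueId="ddb-export-to-s3-001",  # idempotency token
)
pipeline_id = pipeline["pipelineId"]

# Load the template-based definition from disk and attach it to the pipeline.
with open("export-dynamodb-template.json") as f:   # hypothetical file
    definition = json.load(f)

dp.put_pipeline_definition(
    pipelineId=pipeline_id,
    pipelineObjects=definition["pipelineObjects"],
    parameterObjects=definition.get("parameterObjects", []),
    parameterValues=definition.get("parameterValues", []),
)

# Start the export run.
dp.activate_pipeline(pipelineId=pipeline_id)
```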

A company uses the Amazon DynamoDB table contractDB in us-east-1 for its contract system with the following schema:

orderID (primary key)
timestamp (sort key)
contract (map)
createdBy (string)
customerEmail (string)

After a problem in production, the operations team has asked a database specialist to provide an IAM policy to read items from the database to debug the application. In addition, the developer is not allowed to access the value of the customerEmail field to stay compliant.

Which IAM policy should the database specialist use to achieve these requirements?

Options A-D are IAM policy documents presented as images (not reproduced here).
Suggested answer: A

Explanation:
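
The policy images are not reproduced above. As a hedged illustration of the requirement, the sketch below applies DynamoDB fine-grained access control (the dynamodb:Attributes and dynamodb:Select condition keys) so the developer can read every attribute except customerEmail; the account ID and policy name are placeholders.

```python
import json
import boto3

# Allow every attribute in the schema except customerEmail.
ALLOWED_ATTRIBUTES = ["orderID", "timestamp", "contract", "createdBy"]

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetItem",
                "dynamodb:BatchGetItem",
                "dynamodb:Query",
                "dynamodb:Scan",
            ],
            # Account ID is a placeholder; the table name comes from the question.
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/contractDB",
            "Condition": {
                "ForAllValues:StringEquals": {
                    "dynamodb:Attributes": ALLOWED_ATTRIBUTES
                },
                # Force callers to request specific attributes so the
                # attribute allow-list cannot be bypassed.
                "StringEqualsIfExists": {"dynamodb:Select": "SPECIFIC_ATTRIBUTES"},
            },
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="contractDB-read-no-customerEmail",   # assumed policy name
    PolicyDocument=json.dumps(policy_document),
)
```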


A company has an application that uses an Amazon DynamoDB table to store user data. Every morning, a single-threaded process calls the DynamoDB API Scan operation to scan the entire table and generate a critical start-of-day report for management. A successful marketing campaign recently doubled the number of items in the table, and now the process takes too long to run and the report is not generated in time.

A database specialist needs to improve the performance of the process. The database specialist notes that, when the process is running, 15% of the table’s provisioned read capacity units (RCUs) are being used.

What should the database specialist do?

A.
Enable auto scaling for the DynamoDB table.
B.
Use four threads and parallel DynamoDB API Scan operations.
C.
Double the table’s provisioned RCUs.
D.
Set the Limit and Offset parameters before every call to the API.
Suggested answer: B

Explanation:


https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Scan.html#Scan.ParallelScan
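
A minimal sketch of option B, assuming a table named user-data in us-east-1: four worker threads each scan one segment of the table using the Segment and TotalSegments parameters, which lets the process consume more of the unused provisioned RCUs.

```python
# Parallel Scan with four threads using boto3. Table name and region are
# assumptions for illustration.
from concurrent.futures import ThreadPoolExecutor
import boto3

TABLE_NAME = "user-data"      # assumed table name
TOTAL_SEGMENTS = 4            # one logical segment per worker thread

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table(TABLE_NAME)

def scan_segment(segment: int) -> list:
    """Scan one segment of the table, following pagination to the end."""
    items = []
    kwargs = {"Segment": segment, "TotalSegments": TOTAL_SEGMENTS}
    while True:
        response = table.scan(**kwargs)
        items.extend(response["Items"])
        last_key = response.get("LastEvaluatedKey")
        if not last_key:
            break
        kwargs["ExclusiveStartKey"] = last_key
    return items

with ThreadPoolExecutor(max_workers=TOTAL_SEGMENTS) as pool:
    results = list(pool.map(scan_segment, range(TOTAL_SEGMENTS)))

all_items = [item for segment_items in results for item in segment_items]
print(f"Scanned {len(all_items)} items across {TOTAL_SEGMENTS} segments")
```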

A company is building a software as a service application. As part of the new user sign-on workflow, a Python script invokes the CreateTable operation using the Amazon DynamoDB API. After the call returns, the script attempts to call PutItem.

Occasionally, the PutItem request fails with a ResourceNotFoundException error, which causes the workflow to fail. The development team has confirmed that the same table name is used in the two API calls.

How should a database specialist fix this issue?

A.
Add an allow statement for the dynamodb:PutItem action in a policy attached to the role used by the application creating the table.
B.
Set the StreamEnabled property of the StreamSpecification parameter to true, then call PutItem.
C.
Change the application to call DescribeTable periodically until the TableStatus is ACTIVE, then call PutItem.
D.
Add a ConditionExpression parameter in the PutItem request.
Suggested answer: C

Explanation:


https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeTable.html
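
A short sketch of option C, with an assumed table name and item: the script waits until DescribeTable reports a TableStatus of ACTIVE (here via boto3's built-in table_exists waiter, which polls DescribeTable) before calling PutItem.

```python
# Wait for the new table to become ACTIVE before writing to it.
import boto3

TABLE_NAME = "tenant-signup"   # assumed table name
client = boto3.client("dynamodb", region_name="us-east-1")

client.create_table(
    TableName=TABLE_NAME,
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# Poll DescribeTable until the table exists and is ACTIVE. The built-in
# "table_exists" waiter wraps exactly this polling loop.
waiter = client.get_waiter("table_exists")
waiter.wait(TableName=TABLE_NAME, WaiterConfig={"Delay": 5, "MaxAttempts": 24})

# The table is now ACTIVE, so PutItem no longer raises ResourceNotFoundException.
client.put_item(
    TableName=TABLE_NAME,
    Item={"pk": {"S": "user#123"}},   # illustrative item
)
```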

To meet new data compliance requirements, a company needs to keep critical data durably stored and readily accessible for 7 years. Data that is more than 1 year old is considered archival data and must automatically be moved out of the Amazon Aurora MySQL DB cluster every week. On average, around 10 GB of new data is added to the database every month. A database specialist must choose the most operationally efficient solution to migrate the archival data to Amazon S3.

Which solution meets these requirements?

A.
Create a custom script that exports archival data from the DB cluster to Amazon S3 using a SQL view, then deletes the archival data from the DB cluster. Launch an Amazon EC2 instance with a weekly cron job to execute the custom script.
B.
Configure an AWS Lambda function that exports archival data from the DB cluster to Amazon S3 using a SELECT INTO OUTFILE S3 statement, then deletes the archival data from the DB cluster. Schedule the Lambda function to run weekly using Amazon EventBridge (Amazon CloudWatch Events).
C.
Configure two AWS Lambda functions: one that exports archival data from the DB cluster to Amazon S3 using the mysqldump utility, and another that deletes the archival data from the DB cluster. Schedule both Lambda functions to run weekly using Amazon EventBridge (Amazon CloudWatch Events).
D.
Use AWS Database Migration Service (AWS DMS) to continually export the archival data from the DB cluster to Amazon S3. Configure an AWS Data Pipeline process to run weekly that executes a custom SQL script to delete the archival data from the DB cluster.
Suggested answer: B

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.SaveIntoS3.html
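
A hedged sketch of the Lambda function in option B. It assumes the Aurora MySQL cluster has an IAM role that permits writing to the target S3 bucket, that the pymysql library is packaged with the function, and that the table, column, and bucket names shown here exist.

```python
# Export rows older than one year to S3 with SELECT ... INTO OUTFILE S3,
# then delete them from the cluster. Scheduled weekly via EventBridge.
import datetime
import os
import pymysql   # assumed to be bundled with the Lambda deployment package

def lambda_handler(event, context):
    cutoff = (datetime.date.today() - datetime.timedelta(days=365)).isoformat()
    target = f"s3://example-archive-bucket/orders/{cutoff}"   # assumed bucket/prefix

    conn = pymysql.connect(
        host=os.environ["DB_HOST"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database=os.environ["DB_NAME"],
    )
    try:
        with conn.cursor() as cur:
            # Aurora MySQL writes the result set directly to S3 using the
            # cluster's associated IAM role.
            cur.execute(
                f"SELECT * FROM orders WHERE created_at < %s "
                f"INTO OUTFILE S3 '{target}' "
                f"FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'",
                (cutoff,),
            )
            cur.execute("DELETE FROM orders WHERE created_at < %s", (cutoff,))
        conn.commit()
    finally:
        conn.close()
```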

A company developed a new application that is deployed on Amazon EC2 instances behind an Application Load Balancer. The EC2 instances use the security group named sg-application-servers.

The company needs a database to store the data from the application and decides to use an Amazon RDS for MySQL DB instance. The DB instance is deployed in a private DB subnet.

What is the MOST restrictive configuration for the DB instance security group?

A.
Only allow incoming traffic from the sg-application-servers security group on port 3306.
B.
Only allow incoming traffic from the sg-application-servers security group on port 443.
C.
Only allow incoming traffic from the subnet of the application servers on port 3306.
D.
Only allow incoming traffic from the subnet of the application servers on port 443.
Suggested answer: A

Explanation:

The most restrictive approach is to allow incoming connections only from the security group of the application servers (sg-application-servers) on port 3306, the MySQL port.
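
A minimal boto3 sketch of option A: a single inbound rule on the DB instance's security group that permits TCP 3306 only from sg-application-servers. The DB security group ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Look up the application servers' security group by its name.
app_sg_id = ec2.describe_security_groups(
    Filters=[{"Name": "group-name", "Values": ["sg-application-servers"]}]
)["SecurityGroups"][0]["GroupId"]

DB_SG_ID = "sg-0123456789abcdef0"   # assumed ID of the DB instance's security group

# One inbound rule: TCP 3306 from the application servers' security group only.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG_ID,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": app_sg_id}],
        }
    ],
)
```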

A company is moving its fraud detection application from on premises to the AWS Cloud and is using Amazon Neptune for data storage. The company has set up a 1 Gbps AWS Direct Connect connection to migrate 25 TB of fraud detection data from the on-premises data center to a Neptune DB instance.

The company already has an Amazon S3 bucket and an S3 VPC endpoint, and 80% of the company’s network bandwidth is available.

How should the company perform this data load?

A.
Use an AWS SDK with a multipart upload to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
B.
Use AWS Database Migration Service (AWS DMS) to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
C.
Use AWS DataSync to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
D.
Use the AWS CLI to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
Suggested answer: C

Explanation:


"AWS DataSync is an online data transfer service that simplifies, automates, and accelerates moving data between on-premises storage systems and AWS storage services, and also between AWS storage services."

https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load.html
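
A hedged sketch of the Neptune bulk load step from option C, issued after DataSync has copied the files to S3. The cluster endpoint, S3 prefix, IAM role, and data format are assumptions, and the request would additionally need SigV4 signing if IAM database authentication is enabled.

```python
# Kick off a Neptune bulk load from S3 via the cluster's loader endpoint.
import requests

NEPTUNE_ENDPOINT = (
    "https://my-neptune-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com:8182"
)  # assumed cluster endpoint

response = requests.post(
    f"{NEPTUNE_ENDPOINT}/loader",
    json={
        "source": "s3://example-fraud-data/export/",                 # assumed S3 prefix
        "format": "csv",                                              # Gremlin CSV format
        "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
        "region": "us-east-1",
        "failOnError": "FALSE",
        "parallelism": "HIGH",
    },
    timeout=30,
)

# The response includes a loadId that can be polled with GET /loader/<loadId>.
print(response.json())
```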

A company migrated one of its business-critical database workloads to an Amazon Aurora Multi-AZ DB cluster. The company requires a very low RTO and needs to improve the application recovery time after database failovers.

Which approach meets these requirements?

A.
Set the max_connections parameter to 16,000 in the instance-level parameter group.
B.
Modify the client connection timeout to 300 seconds.
C.
Create an Amazon RDS Proxy database proxy and update client connections to point to the proxy endpoint.
D.
Enable the query cache at the instance level.
Suggested answer: C

Explanation:

Amazon RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency and application scalability. With RDS Proxy, failover times for Aurora and RDS databases are reduced by up to 66%, and database credentials, authentication, and access can be managed through integration with AWS Secrets Manager and AWS Identity and Access Management (IAM).

https://aws.amazon.com/rds/proxy/
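
A minimal boto3 sketch of option C: create an RDS Proxy in front of the Aurora cluster and register the cluster as its target, after which client connection strings point at the proxy endpoint. All identifiers, ARNs, subnet IDs, and security group IDs are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create the proxy; credentials are resolved through Secrets Manager.
rds.create_db_proxy(
    DBProxyName="app-proxy",
    EngineFamily="MYSQL",
    Auth=[
        {
            "AuthScheme": "SECRETS",
            "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:app-db-creds",
            "IAMAuth": "DISABLED",
        }
    ],
    RoleArn="arn:aws:iam::123456789012:role/rds-proxy-secrets-role",
    VpcSubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],
)

# Point the proxy at the Aurora cluster; clients then connect to the proxy
# endpoint instead of the cluster writer endpoint.
rds.register_db_proxy_targets(
    DBProxyName="app-proxy",
    DBClusterIdentifiers=["business-critical-aurora"],
)
```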

A company is using an Amazon RDS for MySQL DB instance for its internal applications. A security audit shows that the DB instance is not encrypted at rest. The company’s application team needs to encrypt the DB instance.

What should the team do to meet this requirement?

A.
Stop the DB instance and modify it to enable encryption. Apply this setting immediately without waiting for the next scheduled RDS maintenance window.
B.
Stop the DB instance and create an encrypted snapshot. Restore the encrypted snapshot to a new encrypted DB instance. Delete the original DB instance, and update the applications to point to the new encrypted DB instance.
C.
Stop the DB instance and create a snapshot. Copy the snapshot into another encrypted snapshot. Restore the encrypted snapshot to a new encrypted DB instance. Delete the original DB instance, and update the applications to point to the new encrypted DB instance.
D.
Create an encrypted read replica of the DB instance. Promote the read replica to master. Delete the original DB instance, and update the applications to point to the new encrypted DB instance.
Suggested answer: C

Explanation:
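
A hedged boto3 sketch of option C: snapshot the unencrypted instance, copy the snapshot with a KMS key (which encrypts the copy), and restore the encrypted copy to a new DB instance. The instance identifier and KMS key alias are assumptions.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

SOURCE_DB = "internal-apps-mysql"   # assumed DB instance identifier

# 1. Snapshot the (stopped) unencrypted DB instance.
rds.create_db_snapshot(
    DBInstanceIdentifier=SOURCE_DB,
    DBSnapshotIdentifier=f"{SOURCE_DB}-unencrypted",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier=f"{SOURCE_DB}-unencrypted"
)

# 2. Copy the snapshot; specifying a KMS key makes the copy encrypted.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier=f"{SOURCE_DB}-unencrypted",
    TargetDBSnapshotIdentifier=f"{SOURCE_DB}-encrypted",
    KmsKeyId="alias/aws/rds",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier=f"{SOURCE_DB}-encrypted"
)

# 3. Restore the encrypted snapshot to a new, encrypted DB instance.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier=f"{SOURCE_DB}-encrypted",
    DBSnapshotIdentifier=f"{SOURCE_DB}-encrypted",
)
```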


A database specialist must create nightly backups of an Amazon DynamoDB table in a mission-critical workload as part of a disaster recovery strategy.

Which backup methodology should the database specialist use to MINIMIZE management overhead?

A.
Install the AWS CLI on an Amazon EC2 instance. Write a CLI command that creates a backup of the DynamoDB table. Create a scheduled job or task that executes the command on a nightly basis.
B.
Create an AWS Lambda function that creates a backup of the DynamoDB table. Create an Amazon CloudWatch Events rule that executes the Lambda function on a nightly basis.
C.
Create a backup plan using AWS Backup, specify a backup frequency of every 24 hours, and give the plan a nightly backup window.
D.
Configure DynamoDB backup and restore for an on-demand backup frequency of every 24 hours.
Suggested answer: C

Explanation:


From the DynamoDB Developer Guide: "If you don't want to create scheduling scripts and cleanup jobs, you can use AWS Backup to create backup plans with schedules and retention policies for your DynamoDB tables. AWS Backup runs the backups and deletes them when they expire."

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/CreateBackup.html

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/backuprestore_HowItWorks.html
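
A minimal boto3 sketch of option C: a backup plan with a nightly schedule and a selection that assigns the DynamoDB table to it. The plan name, vault, schedule, retention period, role ARN, and table ARN are placeholders.

```python
import boto3

backup = boto3.client("backup", region_name="us-east-1")

# Nightly backup rule with a one-hour start window and 35-day retention.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "dynamodb-nightly",
        "Rules": [
            {
                "RuleName": "nightly",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",  # 05:00 UTC every day
                "StartWindowMinutes": 60,
                "Lifecycle": {"DeleteAfterDays": 35},
            }
        ],
    }
)

# Assign the mission-critical table to the plan.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "mission-critical-table",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": [
            "arn:aws:dynamodb:us-east-1:123456789012:table/mission-critical-table"
        ],
    },
)
```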
