
Amazon DBS-C01 Practice Test - Questions Answers, Page 22

A pharmaceutical company uses Amazon Quantum Ledger Database (Amazon QLDB) to store its clinical trial data records. The company has an application that runs as AWS Lambda functions. The application is hosted in the private subnet in a VPC.

The application does not have internet access and needs to read some of the clinical data records.

The company is concerned that traffic between the QLDB ledger and the VPC could leave the AWS network. The company needs to secure access to the QLDB ledger and allow the VPC traffic to have read-only access.

Which security strategy should a database specialist implement to meet these requirements?

A.
Move the QLDB ledger into a private database subnet inside the VPC. Run the Lambda functions inside the same VPC in an application private subnet. Ensure that the VPC route table allows read-only flow from the application subnet to the database subnet.
B.
Create an AWS PrivateLink VPC endpoint for the QLDB ledger. Attach a VPC policy to the VPC endpoint to allow read-only traffic for the Lambda functions that run inside the VPC.
C.
Add a security group to the QLDB ledger to allow access from the private subnets inside the VPC where the Lambda functions that access the QLDB ledger are running.
D.
Create a VPN connection to ensure pairing of the private subnet where the Lambda functions are running with the private subnet where the QLDB ledger is deployed.
Suggested answer: B

Explanation:


https://docs.aws.amazon.com/qldb/latest/developerguide/vpc-endpoints.html
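For illustration, a minimal boto3 sketch of option B, assuming a placeholder region, VPC, subnet, security group, account ID, and a hypothetical ledger named clinical-trials: it creates an interface VPC endpoint for the QLDB session service and attaches an endpoint policy scoped to that ledger. Read-only PartiQL access itself is enforced through the IAM permissions granted to the Lambda functions' role.

```python
# Hypothetical sketch: interface VPC endpoint for the QLDB session API.
# All IDs, the region, and the ledger name are placeholders.
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            # qldb:SendCommand carries the PartiQL session traffic; read-only
            # access is further restricted by IAM permissions on the ledger.
            "Action": ["qldb:SendCommand"],
            "Resource": "arn:aws:qldb:us-east-1:123456789012:ledger/clinical-trials",
        }
    ],
}

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234",                               # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.qldb.session",  # QLDB data-plane service
    SubnetIds=["subnet-0abc1234"],                       # the Lambda private subnet
    SecurityGroupIds=["sg-0abc1234"],
    PrivateDnsEnabled=True,
    PolicyDocument=json.dumps(endpoint_policy),
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```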

A company's application development team wants to share an automated snapshot of its Amazon RDS database with another team. The database is encrypted with a custom AWS Key Management Service (AWS KMS) key under the "WeShare" AWS account. The application development team needs to share the DB snapshot under the "WeReceive" AWS account.

Which combination of actions must the application development team take to meet these requirements? (Choose two.)

A.
Add access from the "WeReceive" account to the custom AWS KMS key policy of the sharing team.
B.
Make a copy of the DB snapshot, and set the encryption option to disable.
C.
Share the DB snapshot by setting the DB snapshot visibility option to public.
D.
Make a copy of the DB snapshot, and set the encryption option to enable.
E.
Share the DB snapshot by using the default AWS KMS encryption key.
Suggested answer: A, D

Explanation:


https://aws.amazon.com/premiumsupport/knowledge-center/rds-snapshots-share-account/
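A minimal boto3 sketch of the two actions from the "WeShare" account, with placeholder identifiers: copy the automated snapshot to a manual snapshot that stays encrypted with the custom KMS key, then share the copy with the "WeReceive" account. The custom key's policy must also grant that account use of the key (for example kms:DescribeKey, kms:CreateGrant, kms:Decrypt), which is edited separately in KMS.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# 1. Copy the automated snapshot to a manual snapshot; keep it encrypted with
#    the custom KMS key (automated snapshots cannot be shared directly).
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="rds:weshare-db-2024-01-01-00-00",  # placeholder
    TargetDBSnapshotIdentifier="weshare-db-manual-copy",
    KmsKeyId="arn:aws:kms:us-east-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="weshare-db-manual-copy"
)

# 2. Share the manual snapshot with the "WeReceive" account (placeholder ID).
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="weshare-db-manual-copy",
    AttributeName="restore",
    ValuesToAdd=["222222222222"],
)
```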

A company is using Amazon Redshift as its data warehouse solution. The Redshift cluster handles the following types of workloads:

* Real-time inserts through Amazon Kinesis Data Firehose
* Bulk inserts through COPY commands from Amazon S3
* Analytics through SQL queries

Recently, the cluster has started to experience performance issues.

Which combination of actions should a database specialist take to improve the cluster's performance? (Choose three.)

A.
Modify the Kinesis Data Firehose delivery stream to stream the data to Amazon S3 with a high buffer size and to load the data into Amazon Redshift by using the COPY command.
B.
Stream real-time data into Redshift temporary tables before loading the data into permanent tables.
C.
For bulk inserts, split input files on Amazon S3 into multiple files to match the number of slices on Amazon Redshift. Then use the COPY command to load data into Amazon Redshift.
D.
For bulk inserts, use the parallel parameter in the COPY command to enable multi-threading.
E.
Optimize analytics SQL queries to use sort keys.
F.
Avoid using temporary tables in analytics SQL queries.
Suggested answer: B, C, E

Explanation:


https://aws.amazon.com/blogs/big-data/top-10-performance-tuning-techniques-for-amazon-redshift/

Tip #6: Improving the efficiency of temporary tables.

Tip #9: Maintaining efficient data loads. Amazon Redshift best practices suggest using the COPY command to perform loads of file-based data.

Tip #3: Sort key recommendation. Sorting a table on an appropriate sort key can accelerate query performance, especially for queries with range-restricted predicates, by requiring fewer table blocks to be read from disk.
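As a hedged illustration of answer C, the sketch below (placeholder cluster, role, table, and S3 names) issues a single COPY from an S3 prefix whose objects are the input file split into multiple parts, so Redshift can load them in parallel across slices. It uses the Redshift Data API through boto3.

```python
# Hypothetical sketch: bulk load split files from an S3 prefix with one COPY.
import boto3

redshift_data = boto3.client("redshift-data", region_name="us-east-1")

copy_sql = """
    COPY sales_staging
    FROM 's3://my-bucket/loads/sales_part_'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    FORMAT AS CSV
    GZIP;
"""

resp = redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",  # placeholder cluster
    Database="dev",
    DbUser="etl_user",
    Sql=copy_sql,
)
print(resp["Id"])  # statement ID for polling the load status
```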

An information management services company is storing JSON documents on premises. The company is using a MongoDB 3.6 database but wants to migrate to AWS. The solution must be compatible, scalable, and fully managed. The solution also must result in as little downtime as possible during the migration.

Which solution meets these requirements?

A.
Create an AWS Database Migration Service (AWS DMS) replication instance, a source endpoint for MongoDB, and a target endpoint of Amazon DocumentDB (with MongoDB compatibility).
B.
Create an AWS Database Migration Service (AWS DMS) replication instance, a source endpoint for MongoDB, and a target endpoint of a MongoDB image that is hosted on Amazon EC2.
C.
Use the mongodump and mongorestore tools to migrate the data from the source MongoDB deployment to Amazon DocumentDB (with MongoDB compatibility).
D.
Use the mongodump and mongorestore tools to migrate the data from the source MongoDB deployment to a MongoDB image that is hosted on Amazon EC2.
Suggested answer: A

Explanation:


https://docs.aws.amazon.com/documentdb/latest/developerguide/docdb-migration.html#docdb-migration-approaches
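A minimal boto3 sketch of option A with placeholder names and endpoints: create a DMS replication instance, a MongoDB source endpoint, an Amazon DocumentDB target endpoint, and a full-load-plus-CDC task so ongoing changes keep replicating and cutover downtime stays low.

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

instance = dms.create_replication_instance(
    ReplicationInstanceIdentifier="mongo-to-docdb",
    ReplicationInstanceClass="dms.r5.large",
    AllocatedStorage=100,
)

source = dms.create_endpoint(
    EndpointIdentifier="onprem-mongodb",
    EndpointType="source",
    EngineName="mongodb",
    ServerName="mongo.corp.example.com",  # placeholder on-premises host
    Port=27017,
    Username="dms_user",
    Password="REPLACE_ME",
    DatabaseName="documents",
)

target = dms.create_endpoint(
    EndpointIdentifier="docdb-target",
    EndpointType="target",
    EngineName="docdb",
    ServerName="docdb-cluster.cluster-example.us-east-1.docdb.amazonaws.com",
    Port=27017,
    Username="docdb_user",
    Password="REPLACE_ME",
    DatabaseName="documents",
)

# In practice, wait until the replication instance is available before this call.
dms.create_replication_task(
    ReplicationTaskIdentifier="mongo-to-docdb-task",
    SourceEndpointArn=source["Endpoint"]["EndpointArn"],
    TargetEndpointArn=target["Endpoint"]["EndpointArn"],
    ReplicationInstanceArn=instance["ReplicationInstance"]["ReplicationInstanceArn"],
    MigrationType="full-load-and-cdc",  # ongoing replication minimizes downtime
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```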

A company stores critical data for a department in Amazon RDS for MySQL DB instances. The department was closed for 3 weeks and notified a database specialist that access to the RDS DB instances should not be granted to anyone during this time. To meet this requirement, the database specialist stopped all the DB instances used by the department but did not select the option to create a snapshot. Before the 3 weeks expired, the database specialist discovered that users could connect to the database successfully.

What could be the reason for this?

A.
When stopping the DB instance, the option to create a snapshot should have been selected.
B.
When stopping the DB instance, the duration for stopping the DB instance should have been selected.
C.
Stopped DB instances will automatically restart if the number of attempted connections exceeds the threshold set.
D.
Stopped DB instances will automatically restart if the instance is not manually started after 7 days.
Suggested answer: D

Explanation:


https://aws.amazon.com/premiumsupport/knowledge-center/rds-stop-seven-days/
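RDS starts a stopped DB instance automatically after 7 days so that required maintenance is not missed. One common workaround, sketched below with placeholder instance identifiers, is a scheduled (for example, EventBridge-triggered) Lambda handler that stops the department's instances again whenever they return to the available state; this is an illustration, not the only remediation.

```python
import boto3

rds = boto3.client("rds")

# Identifiers of the department's DB instances (placeholders).
DEPARTMENT_INSTANCES = ["dept-db-1", "dept-db-2"]

def lambda_handler(event, context):
    """Re-stop any department instance that RDS has automatically restarted."""
    for db_id in DEPARTMENT_INSTANCES:
        status = rds.describe_db_instances(DBInstanceIdentifier=db_id)[
            "DBInstances"
        ][0]["DBInstanceStatus"]
        if status == "available":
            rds.stop_db_instance(DBInstanceIdentifier=db_id)
```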

A company uses an on-premises Microsoft SQL Server database to host relational and JSON data and to run daily ETL and advanced analytics. The company wants to migrate the database to the AWS Cloud. A database specialist must choose one or more AWS services to run the company's workloads.

Which solution will meet these requirements in the MOST operationally efficient manner?

A.
Use Amazon Redshift for relational data. Use Amazon DynamoDB for JSON data.
B.
Use Amazon Redshift for relational data and JSON data.
C.
Use Amazon RDS for relational data. Use Amazon Neptune for JSON data.
D.
Use Amazon Redshift for relational data. Use Amazon S3 for JSON data.
Suggested answer: B

Explanation:


https://docs.aws.amazon.com/redshift/latest/dg/super-overview.html
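Answer B relies on Redshift's SUPER data type, which stores semi-structured JSON alongside relational columns in the same cluster. A hedged sketch using the Redshift Data API, with placeholder cluster and table names:

```python
import boto3

redshift_data = boto3.client("redshift-data", region_name="us-east-1")

ddl = """
    CREATE TABLE IF NOT EXISTS orders (
        order_id    BIGINT,
        customer_id BIGINT,
        order_doc   SUPER          -- raw JSON document
    );
"""
insert = """
    INSERT INTO orders
    VALUES (1, 42, JSON_PARSE('{"items": [{"sku": "A1", "qty": 2}]}'));
"""

for sql in (ddl, insert):
    redshift_data.execute_statement(
        ClusterIdentifier="analytics-cluster",  # placeholder
        Database="dev",
        DbUser="etl_user",
        Sql=sql,
    )
```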

An online gaming company is using an Amazon DynamoDB table in on-demand mode to store game scores. After an intensive advertisement campaign in South America, the average number of concurrent users rapidly increases from 100,000 to 500,000 in less than 10 minutes every day around 5 PM.

The on-call software reliability engineer has observed that the application logs contain a high number of DynamoDB throttling exceptions caused by game score insertions around 5 PM. Customer service has also reported that several users are complaining about their scores not being registered.

How should the database administrator remediate this issue at the lowest cost?

A.
Enable auto scaling and set the target usage rate to 90%.
B.
Switch the table to provisioned mode and enable auto scaling.
C.
Switch the table to provisioned mode and set the throughput to the peak value.
D.
Create a DynamoDB Accelerator cluster and use it to access the DynamoDB table.
Suggested answer: B

Explanation:


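On-demand tables can throttle when traffic quickly exceeds roughly double their previous peak, so for a predictable daily spike, provisioned mode with auto scaling is typically the lower-cost fix. A minimal boto3 sketch of answer B, with a hypothetical GameScores table and placeholder capacity values: switch the table to provisioned mode, then register write-capacity auto scaling with a target-tracking policy.

```python
import boto3

dynamodb = boto3.client("dynamodb")
autoscaling = boto3.client("application-autoscaling")

# 1. Switch the table from on-demand to provisioned mode (baseline values are
#    placeholders and should come from observed traffic).
dynamodb.update_table(
    TableName="GameScores",
    BillingMode="PROVISIONED",
    ProvisionedThroughput={"ReadCapacityUnits": 1000, "WriteCapacityUnits": 2000},
)

# 2. Register auto scaling for write capacity and attach a target-tracking policy.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/GameScores",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=2000,
    MaxCapacity=40000,
)
autoscaling.put_scaling_policy(
    PolicyName="GameScoresWriteScaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/GameScores",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
```
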
A gaming company uses Amazon Aurora Serverless for one of its internal applications. The company's developers use Amazon RDS Data API to work with the Aurora Serverless DB cluster. After a recent security review, the company is mandating security enhancements. A database specialist must ensure that access to RDS Data API is private and never passes through the public internet.

What should the database specialist do to meet this requirement?

A.
Modify the Aurora Serverless cluster by selecting a VPC with private subnets.
B.
Modify the Aurora Serverless cluster by unchecking the publicly accessible option.
C.
Create an interface VPC endpoint that uses AWS PrivateLink for RDS Data API.
D.
Create a gateway VPC endpoint for RDS Data API.
Suggested answer: C

Explanation:


https://aws.amazon.com/about-aws/whats-new/2020/02/amazon-rds-data-api-now-supports-aws-privatelink/
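A short boto3 sketch of answer C with placeholder IDs and region: create an interface VPC endpoint for the RDS Data API service so that Data API calls from inside the VPC stay on the AWS network.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234",                            # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.rds-data",  # RDS Data API service
    SubnetIds=["subnet-0abc1234"],
    SecurityGroupIds=["sg-0abc1234"],
    PrivateDnsEnabled=True,  # lets existing SDK calls resolve to the endpoint
)
```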

A startup company in the travel industry wants to create an application that includes a personal travel assistant to display information for nearby airports based on user location. The application will use Amazon DynamoDB and must be able to access and display attributes such as airline names, arrival times, and flight numbers. However, the application must not be able to access or display pilot names or passenger counts.

Which solution will meet these requirements MOST cost-effectively?

A.
Use a proxy tier between the application and DynamoDB to regulate access to specific tables, items, and attributes.
B.
Use IAM policies with a combination of IAM conditions and actions to implement fine-grained access control.
C.
Use DynamoDB resource policies to regulate access to specific tables, items, and attributes.
D.
Configure an AWS Lambda function to extract only allowed attributes from tables based on user profiles.
Suggested answer: B

Explanation:

https://aws.amazon.com/blogs/aws/fine-grained-access-control-for-amazon-dynamodb/
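For illustration, a hedged sketch of a fine-grained access control policy (hypothetical Flights table and account ID) that allows reads of only the permitted attributes by using the dynamodb:Attributes condition key, created here with boto3:

```python
import json
import boto3

iam = boto3.client("iam")

# Fine-grained access control: reads may return only the listed attributes.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Flights",
            "Condition": {
                "ForAllValues:StringEquals": {
                    "dynamodb:Attributes": [
                        "AirlineName",
                        "ArrivalTime",
                        "FlightNumber",
                    ]
                },
                # Force requests to ask for specific attributes only.
                "StringEqualsIfExists": {"dynamodb:Select": "SPECIFIC_ATTRIBUTES"},
            },
        }
    ],
}

iam.create_policy(
    PolicyName="TravelAssistantReadOnlyAttributes",
    PolicyDocument=json.dumps(policy_document),
)
```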

A large IT hardware manufacturing company wants to deploy a MySQL database solution in the AWS Cloud. The solution should quickly create copies of the company's production databases for test purposes. The solution must deploy the test databases in minutes, and the test data should match the latest production data as closely as possible. Developers must also be able to make changes in the test database and delete the instances afterward.

Which solution meets these requirements?

A.
Leverage Amazon RDS for MySQL with write-enabled replicas running on Amazon EC2. Create the test copies by using a mysqldump backup from the RDS for MySQL DB instances and importing it into the new EC2 instances.
B.
Leverage Amazon Aurora MySQL. Use database cloning to create multiple test copies of the production DB clusters.
C.
Leverage Amazon Aurora MySQL. Restore previous production DB instance snapshots into new test copies of Aurora MySQL DB clusters to allow them to make changes.
D.
Leverage Amazon RDS for MySQL. Use database cloning to create multiple developer copies of the production DB instance.
Suggested answer: B

Explanation:


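A minimal boto3 sketch of answer B with placeholder identifiers: Aurora cloning is exposed through restore_db_cluster_to_point_in_time with the copy-on-write restore type, which creates a writable clone from the latest production data in minutes; a DB instance is then added to the clone so developers can connect, and the clone is deleted when testing is done.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Clone the production cluster using copy-on-write (Aurora cloning).
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="test-clone-1",             # placeholder clone name
    SourceDBClusterIdentifier="prod-aurora-mysql",  # placeholder production cluster
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)

# Add an instance to the clone so developers can connect; delete the instance
# and cluster after testing.
rds.create_db_instance(
    DBInstanceIdentifier="test-clone-1-instance-1",
    DBClusterIdentifier="test-clone-1",
    Engine="aurora-mysql",
    DBInstanceClass="db.r6g.large",
)
```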