
Amazon DBS-C01 Practice Test - Questions Answers, Page 18


A large gaming firm is developing a centralized solution for storing the user session state of its various online games. The workload requires low-latency key-value storage and will consist of an equal number of reads and writes. Because the games' user base is geographically dispersed, data should be written to the AWS Region nearest to each user. The design should minimize the operational burden of managing data replication across Regions.

Which solution satisfies these criteria?

A. Amazon RDS for MySQL with multi-Region read replicas
B. Amazon Aurora global database
C. Amazon RDS for Oracle with GoldenGate
D. Amazon DynamoDB global tables
Suggested answer: D

Explanation:


https://aws.amazon.com/dynamodb/?nc1=h_ls
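DynamoDB global tables give multi-active, multi-Region replication that the service manages for you, which is why answer D fits a low-latency key-value workload written in the Region closest to each user. As a minimal sketch (the table name and Regions below are hypothetical), a replica Region can be added to an existing table with the current global tables version through boto3:

import boto3

TABLE_NAME = "GameSessionState"  # placeholder table name

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Adding a replica turns the table into a global table; DynamoDB then
# replicates writes between Regions automatically, so each game backend
# can write to the replica in its own Region.
dynamodb.update_table(
    TableName=TABLE_NAME,
    ReplicaUpdates=[
        {"Create": {"RegionName": "eu-west-1"}},
    ],
)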

A business's production databases are housed on a 3 TB Amazon Aurora MySQL DB cluster. The DB cluster is deployed in the us-east-1 Region. For disaster recovery (DR) purposes, the company's database specialist needs to be able to quickly deploy the DB cluster in another AWS Region to handle the production load with an RTO of less than two hours.

Which approach is the MOST OPERATIONALLY EFFECTIVE in meeting these requirements?

A. Implement an AWS Lambda function to take a snapshot of the production DB cluster every 2 hours, and copy that snapshot to an Amazon S3 bucket in the DR Region. Restore the snapshot to an appropriately sized DB cluster in the DR Region.
B. Add a cross-Region read replica in the DR Region with the same instance type as the current primary instance. If the read replica in the DR Region needs to be used for production, promote the read replica to become a standalone DB cluster.
C. Create a smaller DB cluster in the DR Region. Configure an AWS Database Migration Service (AWS DMS) task with change data capture (CDC) enabled to replicate data from the current production DB cluster to the DB cluster in the DR Region.
D. Create an Aurora global database that spans two Regions. Use AWS Database Migration Service (AWS DMS) to migrate the existing database to the new global database.
Suggested answer: B

Explanation:


The RTO is less than two hours. With a 3 TB database, a cross-Region read replica that can be promoted on failover is the better option; copying and restoring snapshots every two hours would add operational overhead and risk missing the RTO.
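A rough boto3 sketch of answer B, assuming a cluster named prod-cluster in us-east-1 and us-west-2 as the DR Region; identifiers, instance class, and ARNs are placeholders:

import boto3

SOURCE_CLUSTER_ARN = "arn:aws:rds:us-east-1:123456789012:cluster:prod-cluster"

# Work in the DR Region.
rds_dr = boto3.client("rds", region_name="us-west-2")

# 1) Create a cross-Region read replica cluster of the production cluster.
#    Add KmsKeyId here if the source cluster is encrypted.
rds_dr.create_db_cluster(
    DBClusterIdentifier="prod-cluster-dr",
    Engine="aurora-mysql",
    ReplicationSourceIdentifier=SOURCE_CLUSTER_ARN,
    SourceRegion="us-east-1",  # lets boto3 presign the cross-Region request
)

# 2) Add a reader instance sized like the current primary (placeholder class).
rds_dr.create_db_instance(
    DBInstanceIdentifier="prod-cluster-dr-instance-1",
    DBClusterIdentifier="prod-cluster-dr",
    DBInstanceClass="db.r6g.2xlarge",
    Engine="aurora-mysql",
)

# 3) During a DR event, promote the replica cluster to a standalone
#    read/write cluster (this breaks replication from us-east-1).
# rds_dr.promote_read_replica_db_cluster(DBClusterIdentifier="prod-cluster-dr")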

A business's production database is hosted on a single-node Amazon RDS for MySQL DB instance. The database instance is hosted in a United States AWS Region.

A week before a significant sales event, a new database maintenance update is released. The maintenance update has been designated as required. The firm wants to minimize the database instance's downtime and asks a database specialist to make the database instance highly available until the sales event concludes.

Which solution will satisfy these criteria?

A. Defer the maintenance update until the sales event is over.
B. Create a read replica with the latest update. Initiate a failover before the sales event.
C. Create a read replica with the latest update. Transfer all read-only traffic to the read replica during the sales event.
D. Convert the DB instance into a Multi-AZ deployment. Apply the maintenance update.
Suggested answer: D

Explanation:


https://aws.amazon.com/premiumsupport/knowledge-center/rds-required-maintenance/
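A sketch of answer D with boto3; the instance identifier and maintenance action below are placeholders, and the actual pending action name should be taken from describe_pending_maintenance_actions first:

import boto3

rds = boto3.client("rds", region_name="us-east-1")
DB_INSTANCE_ID = "prod-mysql"  # placeholder identifier
DB_INSTANCE_ARN = "arn:aws:rds:us-east-1:123456789012:db:prod-mysql"

# 1) Convert the single-node instance to a Multi-AZ deployment so required
#    maintenance is applied to the standby first and finishes with a
#    failover rather than a full outage.
rds.modify_db_instance(
    DBInstanceIdentifier=DB_INSTANCE_ID,
    MultiAZ=True,
    ApplyImmediately=True,
)

# 2) Check what maintenance is pending, then opt in before the sales event.
pending = rds.describe_pending_maintenance_actions(
    ResourceIdentifier=DB_INSTANCE_ARN
)
print(pending["PendingMaintenanceActions"])

rds.apply_pending_maintenance_action(
    ResourceIdentifier=DB_INSTANCE_ARN,
    ApplyAction="system-update",  # use the action name reported above
    OptInType="immediate",
)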

A corporation intends to migrate a 500-GB Oracle database to Amazon Aurora PostgreSQL using the AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS). The database does not have any stored procedures but does contain several large or partitioned tables.

Because the application is vital to the company, the migration should be performed with minimal downtime.

Which actions should a database specialist take in combination to speed up the migration? (Select three.)

A. Use the AWS SCT data extraction agent to migrate the schema from Oracle to Aurora PostgreSQL.
B. For the large tables, change the setting for the maximum number of tables to load in parallel and perform a full load using AWS DMS.
C. For the large tables, create a table settings rule with a parallel load option in AWS DMS, then perform a full load using DMS.
D. Use AWS DMS to set up change data capture (CDC) for continuous replication until the cutover date.
E. Use AWS SCT to convert the schema from Oracle to Aurora PostgreSQL.
F. Use AWS DMS to convert the schema from Oracle to Aurora PostgreSQL and for continuous replication.
Suggested answer: C, D, E

Explanation:
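AWS SCT converts the Oracle schema to Aurora PostgreSQL (option E), a DMS table-settings rule with a parallel-load option speeds up the full load of the large partitioned tables (option C), and CDC keeps the target in sync until cutover so downtime stays minimal (option D). A rough boto3 sketch of such a DMS task follows; the endpoint ARNs, schema, and table names are placeholders.

import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Table mappings: include the schema and load the large partitioned table
# in parallel, one segment per source partition.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-sales-schema",
            "object-locator": {"schema-name": "SALES", "table-name": "%"},
            "rule-action": "include",
        },
        {
            "rule-type": "table-settings",
            "rule-id": "2",
            "rule-name": "parallel-load-orders",
            "object-locator": {"schema-name": "SALES", "table-name": "ORDERS"},
            "parallel-load": {"type": "partitions-auto"},
        },
    ]
}

# Full load plus ongoing replication (CDC) until the cutover date.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-pg",
    SourceEndpointArn="arn:aws:dms:...:endpoint:SOURCE",    # placeholder
    TargetEndpointArn="arn:aws:dms:...:endpoint:TARGET",    # placeholder
    ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE",  # placeholder
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)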


A business needs a data warehouse solution that stores data consistently and in a highly structured fashion. The organization requires fast response times for end-user queries involving current-year data, and users must be able to access the full 15-year dataset when necessary. The solution must also handle a variable volume of incoming queries. Costs associated with storing the 100 TB of data must be kept to a minimum.

Which solution satisfies these criteria?

A. Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the data on local Amazon Redshift storage. Provision enough instances to support high demand.
B. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough instances to support high demand.
C. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling.
D. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon Redshift elastic resize.
Suggested answer: C

Explanation:


https://docs.aws.amazon.com/redshift/latest/dg/concurrency-scaling.html

"With the Concurrency Scaling feature, you can support virtually unlimited concurrent users and concurrent queries, with consistently fast query performance. When concurrency scaling is enabled, Amazon Redshift automatically adds additional cluster capacity when you need it to process an increase in concurrent read queries. Write operations continue as normal on your main cluster. Users always see the most current data, whether the queries run on the main cluster or on a concurrency scaling cluster. You're charged for concurrency scaling clusters only for the time they're in use. For more information about pricing, see Amazon Redshift pricing. You manage which queries are sent to the concurrency scaling cluster by configuring WLM queues. When you enable concurrency scaling for a queue, eligible queries are sent to the concurrency scaling cluster instead of waiting in line."

A business is transferring its on-premises database workloads to the Amazon Web Services (AWS) Cloud. A database specialist migrating an Oracle database with a very large table to Amazon RDS has chosen AWS DMS. The database specialist observes that AWS DMS is taking a considerable amount of time to migrate the data.

Which actions would speed up the data migration? (Select three.)

A. Create multiple AWS DMS tasks to migrate the large table.
B. Configure the AWS DMS replication instance with Multi-AZ.
C. Increase the capacity of the AWS DMS replication server.
D. Establish an AWS Direct Connect connection between the on-premises data center and AWS.
E. Enable an Amazon RDS Multi-AZ configuration.
F. Enable full large binary object (LOB) mode to migrate all LOB data for all large tables.
Suggested answer: A, C, D

Explanation:


https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.LOBSupport.html
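One way to act on options A and C is sketched below with boto3: the large table is split across parallel DMS tasks using source filters on a numeric key, and the replication instance is scaled up. Task names, ARNs, key ranges, and the instance class are hypothetical.

import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

def mappings_for_range(start, end):
    """Selection rule that loads only one ID range of the large table."""
    return json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": f"slice-{start}-{end}",
            "object-locator": {"schema-name": "SALES", "table-name": "BIG_TABLE"},
            "rule-action": "include",
            "filters": [{
                "filter-type": "source",
                "column-name": "ID",
                "filter-conditions": [{
                    "filter-operator": "between",
                    "start-value": str(start),
                    "end-value": str(end),
                }],
            }],
        }]
    })

# Option A: several tasks, each migrating a slice of the table in parallel.
for i, (start, end) in enumerate([(1, 5_000_000), (5_000_001, 10_000_000)]):
    dms.create_replication_task(
        ReplicationTaskIdentifier=f"big-table-slice-{i}",
        SourceEndpointArn="arn:aws:dms:...:endpoint:SOURCE",    # placeholder
        TargetEndpointArn="arn:aws:dms:...:endpoint:TARGET",    # placeholder
        ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE",  # placeholder
        MigrationType="full-load",
        TableMappings=mappings_for_range(start, end),
    )

# Option C: give the replication instance more CPU and memory.
dms.modify_replication_instance(
    ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE",      # placeholder
    ReplicationInstanceClass="dms.c5.4xlarge",
    ApplyImmediately=True,
)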

An internet advertising firm stores its data in an Amazon DynamoDB table. Amazon DynamoDB Streams is enabled on the table, and one of the keys has a global secondary index. The table is encrypted with a customer managed AWS Key Management Service (AWS KMS) key.

The firm has decided to expand worldwide and wants to replicate the table to a new AWS Region using DynamoDB global tables.

An administrator observes the following upon review:

No role with the dynamodb:CreateGlobalTable permission exists in the account.

An empty table with the same name exists in the new Region where replication is desired.

A global secondary index with the same partition key but a different sort key exists in the new Region where replication is desired.

Which settings will prevent you from creating a global table or replica in the new Region? (Select two.)

A. A global secondary index with the same partition key but a different sort key exists in the new Region where replication is desired.
B. An empty table with the same name exists in the Region where replication is desired.
C. No role with the dynamodb:CreateGlobalTable permission exists in the account.
D. DynamoDB Streams is enabled for the table.
E. The table is encrypted using a KMS customer managed key.
Suggested answer: A, B

Explanation:


In North America, a business launched a mobile game that swiftly expanded to 10 million daily active players. The game's backend is hosted on AWS and makes considerable use of a TTL-configured Amazon DynamoDB table.

When an item is added or changed, its TTL is set to the current epoch time plus 600 seconds. The game logic relies on expired data being purged so that reward points are calculated correctly. At times, items that are many hours past their TTL expiration are still returned by reads.

How should a database administrator resolve this issue?

A. Use a client library that supports the TTL functionality for DynamoDB.
B. Include a query filter expression to ignore items with an expired TTL.
C. Set the ConsistentRead parameter to true when querying the table.
D. Create a local secondary index on the TTL attribute.
Suggested answer: B

Explanation:


https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html
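TTL deletion is asynchronous, so expired items can remain in the table for some time after their expiration; the suggested fix is to filter them out at read time. A small boto3 sketch, assuming a table with partition key user_id and a numeric expires_at attribute configured as the TTL attribute (both names are hypothetical):

import time
import boto3
from boto3.dynamodb.conditions import Key, Attr

table = boto3.resource("dynamodb", region_name="us-east-1").Table("GameSessions")

now = int(time.time())

# The filter expression drops items whose TTL has already passed, even if
# the TTL background process has not deleted them yet.
response = table.query(
    KeyConditionExpression=Key("user_id").eq("player-123"),
    FilterExpression=Attr("expires_at").gt(now),
)
live_items = response["Items"]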

A business hosts a MySQL database for its ecommerce application on a single Amazon RDS DB instance. Application purchases are saved to the database automatically, resulting in a high volume of writes. Employees routinely generate purchase reports for the company. The organization wants to improve database performance and minimize the downtime associated with patch upgrades.

Which technique will satisfy these criteria with the LEAST amount of operational overhead?

A. Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and enable Memcached in the MySQL option group.
B. Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and set up replication to a MySQL DB instance running on Amazon EC2.
C. Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and add a read replica.
D. Add a read replica and promote it to an Amazon Aurora MySQL DB cluster master. Then enable Amazon Aurora Serverless.
Suggested answer: C

Explanation:
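A Multi-AZ deployment reduces downtime during patching because maintenance is applied to the standby first and completes with a failover, while a read replica offloads the reporting queries from the write-heavy primary. A rough boto3 sketch with placeholder identifiers:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Enable Multi-AZ so patching ends in a failover instead of an outage.
rds.modify_db_instance(
    DBInstanceIdentifier="ecommerce-mysql",  # placeholder
    MultiAZ=True,
    ApplyImmediately=True,
)

# Add a read replica and point the purchase-report queries at its endpoint.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="ecommerce-mysql-reports",
    SourceDBInstanceIdentifier="ecommerce-mysql",
)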


A worldwide digital advertising corporation collects browser information in order to serve visitors contextually relevant images, pages, and links. A single page load may generate many events, each of which must be stored separately. A single event may be at most 200 KB and averages 10 KB. Each page load requires a query of the user's browsing history in order to deliver suggestions for targeted advertising. The advertising corporation anticipates more than 1 billion daily page views from users in the United States, Europe, Hong Kong, and India. The data structure differs from event to event. Additionally, browsing information must be written and read with very low latency to ensure that users have a positive viewing experience.

Which database solution satisfies these criteria?

A. Amazon DocumentDB
B. Amazon RDS Multi-AZ deployment
C. Amazon DynamoDB global table
D. Amazon Aurora Global Database
Suggested answer: C

Explanation:
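A DynamoDB global table fits this workload: key-value access at very low latency, items up to 400 KB with a flexible per-item attribute set, and multi-Region, multi-active replication close to users in each geography. A sketch of how an event might be stored and a user's history read back per page load; the table and attribute names are hypothetical.

import time
import uuid
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb", region_name="eu-west-1").Table("BrowserEvents")

# Each event is its own item; attributes can differ per event type.
table.put_item(
    Item={
        "user_id": "u-42",                                       # partition key
        "event_ts": f"{int(time.time() * 1000)}#{uuid.uuid4()}",  # sort key
        "event_type": "page_view",
        "url": "https://example.com/article",
        "viewport": {"w": 1280, "h": 720},  # only present for some event types
    }
)

# Per page load, read the user's recent history to pick targeted content.
history = table.query(
    KeyConditionExpression=Key("user_id").eq("u-42"),
    ScanIndexForward=False,  # newest first
    Limit=50,
)["Items"]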

