Amazon DBS-C01 Practice Test - Questions Answers, Page 20
An online advertising website uses an Amazon DynamoDB table with on-demand capacity mode as its data store. The website also has a DynamoDB Accelerator (DAX) cluster in the same VPC as its web application server. The application needs to perform infrequent writes and many strongly consistent reads from the data store by querying the DAX cluster.

During a performance audit, a systems administrator notices that the application can look up items by using the DAX cluster. However, the QueryCacheHits metric for the DAX cluster consistently shows 0 while the QueryCacheMisses metric continuously keeps growing in Amazon CloudWatch.

What is the MOST likely reason for this occurrence?

A. A VPC endpoint was not added to access DynamoDB.
B. Strongly consistent reads are always passed through DAX to DynamoDB.
C. DynamoDB is scaling due to a burst in traffic, resulting in degraded performance.
D. A VPC endpoint was not added to access CloudWatch.
Suggested answer: B

Explanation:


https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.concepts.html

"If the request specifies strongly consistent reads, DAX passes the request through to DynamoDB.

The results from DynamoDB are not cached in DAX. Instead, they are simply returned to the application."
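For illustration, a minimal Python sketch (assuming the amazon-dax-client package; the cluster endpoint, table, and key names are placeholders, not values from the question). A GetItem with ConsistentRead=True goes straight to DynamoDB, so the DAX caches never record a hit:

```python
# Minimal sketch (placeholder endpoint/table/key): a strongly consistent read sent
# through DAX is passed on to DynamoDB and its result is not cached by DAX.
import botocore.session
from amazondax import AmazonDaxClient  # pip install amazon-dax-client

session = botocore.session.get_session()
dax = AmazonDaxClient(
    session,
    region_name="us-east-1",
    endpoints=["my-dax-cluster.abc123.dax-clusters.us-east-1.amazonaws.com:8111"],
)

# ConsistentRead=True forces a pass-through to DynamoDB; the cache is never populated.
resp = dax.get_item(
    TableName="AdsTable",
    Key={"AdId": {"S": "ad-123"}},
    ConsistentRead=True,
)
print(resp.get("Item"))
```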

A business that specializes in internet advertising is developing an application that will show advertisements to its customers. The application stores data in an Amazon DynamoDB table and caches its reads using a DynamoDB Accelerator (DAX) cluster. The majority of reads come through GetItem and BatchGetItem operations. The application does not require strongly consistent reads.

After deployment, the application cache does not behave as intended: some strongly consistent queries to the DAX cluster respond in several milliseconds rather than microseconds.

How can the business optimize cache behavior in order to boost application performance?

A. Increase the size of the DAX cluster.
B. Configure DAX to be an item cache with no query cache.
C. Use eventually consistent reads instead of strongly consistent reads.
D. Create a new DAX cluster with a higher TTL for the item cache.
Suggested answer: C

Explanation:
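Because strongly consistent reads are passed through DAX without being cached (see the previous question), the fix is to issue eventually consistent reads so DAX can serve repeat GetItem and BatchGetItem calls from its item cache. A minimal Python sketch, with placeholder endpoint, table, and key names:

```python
# Minimal sketch (placeholder endpoint/table/key): leaving ConsistentRead at its
# default of False makes the read eventually consistent, so DAX can cache it and
# answer repeat reads from the item cache in microseconds.
import botocore.session
from amazondax import AmazonDaxClient

session = botocore.session.get_session()
dax = AmazonDaxClient(
    session,
    region_name="us-east-1",
    endpoints=["my-dax-cluster.abc123.dax-clusters.us-east-1.amazonaws.com:8111"],
)

resp = dax.get_item(TableName="AdsTable", Key={"AdId": {"S": "ad-123"}})
```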


A database professional is tasked with migrating 25 GB of data files from an on-premises storage system to an Amazon Neptune database.

Which method of data loading is the FASTEST?

A. Upload the data to Amazon S3 and use the Loader command to load the data from Amazon S3 into the Neptune database.
B. Write a utility to read the data from the on-premises storage and run INSERT statements in a loop to load the data into the Neptune database.
C. Use the AWS CLI to load the data directly from the on-premises storage into the Neptune database.
D. Use AWS DataSync to load the data directly from the on-premises storage into the Neptune database.
Suggested answer: A

Explanation:


1. Copy the data files to an Amazon Simple Storage Service (Amazon S3) bucket.

2. Create an IAM role with Read and List access to the bucket.

3. Create an Amazon S3 VPC endpoint.

4. Start the Neptune loader by sending a request via HTTP to the Neptune DB instance.

5. The Neptune DB instance assumes the IAM role to load the data from the bucket.
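As a rough sketch of step 4 (the endpoint, bucket, and IAM role ARN below are placeholders), the loader is started by POSTing a load job to the cluster's /loader endpoint from inside the VPC:

```python
# Minimal sketch of starting the Neptune bulk loader (placeholder endpoint, bucket,
# and IAM role ARN). The request returns a loadId that can be polled for status.
import json
import requests

loader_url = "https://my-neptune-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/loader"
payload = {
    "source": "s3://mybucket/graphdata/",
    "format": "csv",  # or ntriples, nquads, rdfxml, turtle, opencypher
    "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
    "region": "us-east-1",
    "failOnError": "FALSE",
    "parallelism": "MEDIUM",
}
resp = requests.post(
    loader_url,
    data=json.dumps(payload),
    headers={"Content-Type": "application/json"},
)
print(resp.json())
```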

A business moved a single MySQL database to Amazon Aurora. The production data is stored in a DB cluster in VPC_PROD, and 12 testing environments are hosted in VPC_TEST within the same AWS account. Testing has a negligible effect on the test data. The development team requires that each environment be refreshed nightly so that each test database contains up-to-date production data.

Which migration strategy will be the quickest and least expensive to implement?

A. Run the master in Amazon Aurora MySQL. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.
B. Run the master in Amazon Aurora MySQL. Take a nightly snapshot, and restore it into 12 databases in VPC_TEST using Aurora Serverless.
C. Run the master in Amazon Aurora MySQL. Create 12 Aurora Replicas in VPC_TEST, and script the replicas to be deleted and re-created nightly.
D. Run the master in Amazon Aurora MySQL using Aurora Serverless. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.
Suggested answer: A

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Clone.html
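For illustration, a minimal boto3 sketch of the nightly clone re-creation (cluster names, subnet group, security group, and instance class are placeholders): the copy-on-write restore type shares storage with the source cluster, which is what makes cloning fast and inexpensive.

```python
# Minimal sketch (placeholder identifiers): re-create 12 Aurora clones of the
# production cluster in VPC_TEST using the copy-on-write restore type.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

for i in range(1, 13):
    clone_id = f"test-clone-{i:02d}"
    rds.restore_db_cluster_to_point_in_time(
        SourceDBClusterIdentifier="prod-aurora-cluster",
        DBClusterIdentifier=clone_id,
        RestoreType="copy-on-write",                 # clone shares storage with the source
        UseLatestRestorableTime=True,
        DBSubnetGroupName="vpc-test-subnet-group",   # places the clone in VPC_TEST
        VpcSecurityGroupIds=["sg-0123456789abcdef0"],
    )
    # Each cloned cluster still needs at least one instance before it can be queried.
    rds.create_db_instance(
        DBInstanceIdentifier=f"{clone_id}-instance-1",
        DBClusterIdentifier=clone_id,
        DBInstanceClass="db.t3.medium",
        Engine="aurora-mysql",
    )
```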

A business just transitioned from an on-premises Oracle database to Amazon Aurora PostgreSQL.

Following the migration, the organization observed that every day around 3:00 PM, the application's response time was substantially slower. The firm has determined that the problem is with the database, not the application.

Which steps should the Database Specialist take to identify the problematic PostgreSQL query most efficiently?

A. Create an Amazon CloudWatch dashboard to show the number of connections, CPU usage, and disk space consumption. Watch these dashboards during the next slow period.
B. Launch an Amazon EC2 instance, and install and configure an open-source PostgreSQL monitoring tool that will run reports based on the output error logs.
C. Modify the logging database parameter to log all the queries related to locking in the database and then check the logs after the next slow period for this information.
D. Enable Amazon RDS Performance Insights on the PostgreSQL database. Use the metrics to identify any queries that are related to spikes in the graph during the next slow period.
Suggested answer: D

Explanation:


https://aws.amazon.com/blogs/database/optimizing-and-tuning-queries-in-amazon-rds-postgresql-based-on-native-and-external-tools/

"AWS recently released a feature called Amazon RDS Performance Insights, which provides an easy-to-understand dashboard for detecting performance problems in terms of load." "AWS recently released a feature called Amazon RDS Performance Insights, which provides an easy-to-understand dashboard for detecting performance problems in terms of load."

A Database Specialist is constructing a new Amazon Neptune DB cluster and attempts to load data from Amazon S3 using the Neptune bulk loader API. The Database Specialist is confronted with the following error message:

"Unable to establish a connection to the S3 endpoint. The source URL is s3://mybucket/graphdata/ and the region code is us-east-1. Kindly confirm your S3 configuration."

Which of the following actions should the Database Specialist take to resolve the issue? (Select two.)

A. Check that Amazon S3 has an IAM role granting read access to Neptune.
B. Check that an Amazon S3 VPC endpoint exists.
C. Check that a Neptune VPC endpoint exists.
D. Check that Amazon EC2 has an IAM role granting read access to Amazon S3.
E. Check that Neptune has an IAM role granting read access to Amazon S3.
Suggested answer: B, E

Explanation:


https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load-tutorial-IAM.html

https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load-data.html

“An IAM role for the Neptune DB instance to assume that has an IAM policy that allows access to the data files in the S3 bucket. The policy must grant Read and List permissions.” “An Amazon S3 VPC endpoint. For more information, see the Creating an Amazon S3 VPC Endpoint section.”
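The two fixes can be applied with boto3; a minimal sketch with placeholder identifiers (cluster name, role ARN, VPC, and route table):

```python
# Minimal sketch (placeholder identifiers): attach an S3-read IAM role to the Neptune
# cluster and create a gateway VPC endpoint for Amazon S3 in the cluster's VPC.
import boto3

neptune = boto3.client("neptune", region_name="us-east-1")
neptune.add_role_to_db_cluster(
    DBClusterIdentifier="my-neptune-cluster",
    RoleArn="arn:aws:iam::123456789012:role/NeptuneLoadFromS3",  # needs Read/List on the bucket
)

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```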

A database specialist has been entrusted by an ecommerce firm with designing a reporting dashboard that visualizes crucial business KPIs derived from the company's primary production database running on Amazon Aurora. The dashboard should be able to read data within 100 milliseconds after an update.

The Database Specialist must conduct an audit of the Aurora DB cluster's present setup and provide a cost-effective alternative. The solution must support the unexpected read demand generated by the reporting dashboard without impairing the DB cluster's write availability and performance.

Which solution satisfies these criteria?

A. Turn on the serverless option in the DB cluster so it can automatically scale based on demand.
B. Provision a clone of the existing DB cluster for the new Application team.
C. Create a separate DB cluster for the new workload, refresh from the source DB cluster, and set up ongoing replication using AWS DMS change data capture (CDC).
D. Add an automatic scaling policy to the DB cluster to add Aurora Replicas to the cluster based on CPU consumption.
Suggested answer: D

Explanation:
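Aurora Replicas serve reads with typically very small replication lag and can be added automatically when the dashboard's read demand spikes, without touching the writer. A minimal boto3 sketch of such a scaling policy (the cluster name and capacity limits are placeholders):

```python
# Minimal sketch (placeholder cluster name): Application Auto Scaling adds or removes
# Aurora Replicas based on the average CPU utilization of the reader instances.
import boto3

aas = boto3.client("application-autoscaling", region_name="us-east-1")

aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:prod-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=5,
)

aas.put_scaling_policy(
    PolicyName="reporting-read-scaling",
    ServiceNamespace="rds",
    ResourceId="cluster:prod-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 300,
    },
)
```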


A corporation wishes to move a 1 TB Oracle database from its current location to an Amazon Aurora PostgreSQL DB cluster. The firm's database specialist noticed that the Oracle database stores 100 GB of large binary objects (LOBs) across many tables. The LOBs are up to 500 MB in size, with an average size of 350 MB. The Database Specialist has chosen AWS DMS with the largest replication instance to transfer the data.

How should the Database Specialist optimize the migration of the database using AWS DMS?

A. Create a single task using full LOB mode with a LOB chunk size of 500 MB to migrate the data and LOBs together.
B. Create two tasks: task1 with LOB tables using full LOB mode with a LOB chunk size of 500 MB and task2 without LOBs.
C. Create two tasks: task1 with LOB tables using limited LOB mode with a maximum LOB size of 500 MB and task2 without LOBs.
D. Create a single task using limited LOB mode with a maximum LOB size of 500 MB to migrate data and LOBs together.
Suggested answer: C

Explanation:


https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html#CHAP_BestPractices.LOBS

"AWS DMS migrates LOB data in two phases:

1. AWS DMS creates a new row in the target table and populates the row with all data except the associated LOB value.

2. AWS DMS updates the row in the target table with the LOB data." Limited LOB mode is significantly faster than full LOB mode, and a maximum LOB size of 500 MB ensures even the largest LOBs are migrated without truncation. Keeping the LOB tables in their own task prevents LOB handling from slowing down the migration of the remaining tables.
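For illustration, a minimal boto3 sketch of the LOB-table task (all ARNs, names, and the schema/table selection are placeholders): in DMS task settings, LobMaxSize is given in KB, so covering 500 MB LOBs means roughly 512000 KB.

```python
# Minimal sketch (placeholder ARNs, names, and schema): a dedicated task for the LOB
# tables using limited LOB mode with a maximum LOB size large enough for 500 MB LOBs.
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

task_settings = {
    "TargetMetadata": {
        "SupportLobs": True,
        "FullLobMode": False,
        "LimitedSizeLobMode": True,
        "LobMaxSize": 512000,  # KB (~500 MB), so the largest LOBs are not truncated
    }
}

table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "lob-tables",
        "object-locator": {"schema-name": "APP", "table-name": "%LOB%"},
        "rule-action": "include",
    }]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-lob-tables",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load",
    TableMappings=json.dumps(table_mappings),
    ReplicationTaskSettings=json.dumps(task_settings),
)
```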

The website of a manufacturing firm makes use of an Amazon Aurora PostgreSQL database cluster.

Which settings will result in the LEAST amount of downtime for the application during failover?

(Select three.)

A. Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.
B. Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary Aurora DB cluster is unreachable.
C. Edit and enable Aurora DB cluster cache management in parameter groups.
D. Set TCP keepalive parameters to a high value.
E. Set JDBC connection string timeout variables to a low value.
F. Set Java DNS caching timeouts to a high value.
Suggested answer: A, C, E

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.BestPractices.html

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.cluster-cache-mgmt.html

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.BestPractices.html#AuroraPostgreSQL.BestPractices.FastFailover.TCPKeepalives
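The JDBC guidance in answer E amounts to failing over connections quickly. An analogous Python sketch with psycopg2 (the endpoint, credentials, and parameter values are placeholders and only illustrative) connects through the cluster writer endpoint with a short connection timeout and aggressive TCP keepalives, in line with the fast-failover best practices linked above:

```python
# Minimal sketch (placeholder endpoint/credentials): short timeouts and aggressive TCP
# keepalives let the client notice a failover quickly and reconnect via the cluster endpoint.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # Aurora writer endpoint
    port=5432,
    dbname="app",
    user="app_user",
    password="example-password",
    connect_timeout=3,       # seconds; fail fast and retry against the cluster endpoint
    keepalives=1,
    keepalives_idle=5,
    keepalives_interval=2,
    keepalives_count=3,
)
```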

A company recently migrated its line-of-business (LOB) application to AWS. The application uses an Amazon RDS for SQL Server DB instance as its database engine.

The company must set up cross-Region disaster recovery for the application. The company needs a solution with the lowest possible RPO and RTO.

Which solution will meet these requirements?

A. Create a cross-Region read replica of the DB instance. Promote the read replica at the time of failover.
B. Set up SQL replication from the DB instance to an Amazon EC2 instance in the disaster recovery Region. Promote the EC2 instance as the primary server.
C. Use AWS Database Migration Service (AWS DMS) for ongoing replication of the DB instance in the disaster recovery Region.
D. Take manual snapshots of the DB instance in the primary Region. Copy the snapshots to the disaster recovery Region.
Suggested answer: C

Explanation:


https://aws.amazon.com/blogs/database/cross-region-disaster-recovery-of-amazon-rds-for-sql-server/
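As an illustration of option C, a minimal boto3 sketch of the ongoing-replication task created in the disaster recovery Region (all ARNs and names are placeholders):

```python
# Minimal sketch (placeholder ARNs/names): a DMS task with full load plus CDC keeps a
# standby copy of the SQL Server database continuously updated in the DR Region.
import json
import boto3

dms = boto3.client("dms", region_name="us-west-2")  # disaster recovery Region

table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "all-tables",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="sqlserver-dr-replication",
    SourceEndpointArn="arn:aws:dms:us-west-2:123456789012:endpoint:PRIMARY-SQLSERVER",
    TargetEndpointArn="arn:aws:dms:us-west-2:123456789012:endpoint:DR-SQLSERVER",
    ReplicationInstanceArn="arn:aws:dms:us-west-2:123456789012:rep:DR-INSTANCE",
    MigrationType="full-load-and-cdc",  # initial copy followed by ongoing change capture
    TableMappings=json.dumps(table_mappings),
)
```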
