Amazon DBS-C01 Practice Test - Questions and Answers, Page 20
List of questions
Question 191
An online advertising website uses an Amazon DynamoDB table with on-demand capacity mode as its data store. The website also has a DynamoDB Accelerator (DAX) cluster in the same VPC as its web application server. The application needs to perform infrequent writes and many strongly consistent reads from the data store by querying the DAX cluster.
During a performance audit, a systems administrator notices that the application can look up items by using the DAX cluster. However, the QueryCacheHits metric for the DAX cluster consistently shows 0 while the QueryCacheMisses metric continuously keeps growing in Amazon CloudWatch.
What is the MOST likely reason for this occurrence?
Explanation:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.concepts.html
"If the request specifies strongly consistent reads, DAX passes the request through to DynamoDB.
The results from DynamoDB are not cached in DAX. Instead, they are simply returned to the application."
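The pass-through behavior in the quote explains the metrics exactly: strongly consistent reads skip the query cache, so QueryCacheHits stays at 0 while QueryCacheMisses keeps climbing. A minimal simulation (hypothetical class, not the real `amazondax` client API) makes this concrete:

```python
# Illustrative simulation of DAX query-cache behavior; the class and
# counters are hypothetical stand-ins for the real DAX cluster metrics.
class SimulatedDaxCluster:
    def __init__(self, table):
        self.table = table          # backing "DynamoDB" store
        self.cache = {}             # query/item cache
        self.query_cache_hits = 0   # maps to the QueryCacheHits metric
        self.query_cache_misses = 0 # maps to the QueryCacheMisses metric

    def get_item(self, key, consistent_read=False):
        if consistent_read:
            # Strongly consistent reads are passed through to DynamoDB
            # and the result is NOT cached, so hits never accumulate.
            self.query_cache_misses += 1
            return self.table[key]
        if key in self.cache:
            self.query_cache_hits += 1
            return self.cache[key]
        self.query_cache_misses += 1
        self.cache[key] = self.table[key]
        return self.cache[key]

dax = SimulatedDaxCluster({"ad-123": {"impressions": 42}})
for _ in range(5):
    dax.get_item("ad-123", consistent_read=True)
print(dax.query_cache_hits, dax.query_cache_misses)  # 0 5
```

Five strongly consistent reads produce five misses and zero hits, matching the CloudWatch pattern in the question.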
Question 192
A business that specializes in internet advertising is developing an application that will show adverts to its customers. The application stores data in an Amazon DynamoDB table and caches its reads with a DynamoDB Accelerator (DAX) cluster. Most reads come through the GetItem and BatchGetItem operations, and the application does not require strongly consistent reads.
After deployment, the application cache does not behave as intended: some eventually consistent reads to the DAX cluster are taking several milliseconds rather than microseconds.
How can the business optimize cache behavior in order to boost application performance?
Explanation:
Question 193
A database professional is tasked with migrating 25 GB of data files from an on-premises storage system to an Amazon Neptune database.
Which method of data loading is the FASTEST?
Explanation:
1. Copy the data files to an Amazon Simple Storage Service (Amazon S3) bucket.
2. Create an IAM role with Read and List access to the bucket.
3. Create an Amazon S3 VPC endpoint.
4. Start the Neptune loader by sending an HTTP request to the Neptune DB instance.
5. The Neptune DB instance assumes the IAM role to load the data from the bucket.
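The loader request in step 4 is an HTTP POST to the cluster's `/loader` endpoint. A sketch of the request body (bucket name, role ARN, and format are placeholders for this scenario):

```python
import json

# Sketch of a Neptune bulk-loader request body. The bucket, account ID,
# and role name are hypothetical; the request is sent as an HTTP POST to
# https://<neptune-endpoint>:8182/loader from inside the VPC.
loader_request = {
    "source": "s3://my-bucket/graphdata/",   # hypothetical source bucket
    "format": "csv",                         # Gremlin CSV; RDF uses e.g. "ntriples"
    "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
    "region": "us-east-1",
    "failOnError": "TRUE",
}
body = json.dumps(loader_request)
```

Because the load happens in parallel directly from S3, this is the fastest way to ingest 25 GB, compared with row-by-row inserts over Gremlin or SPARQL.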
Question 194
A business migrated a single MySQL database to Amazon Aurora. The production data is stored in a database cluster in VPC PROD, while 12 testing environments are hosted in VPC TEST within the same AWS account. Testing makes only negligible changes to the test data. The development team requires that each environment be refreshed nightly so that each test database contains daily production data.
Which migration strategy will be the quickest and least expensive to implement?
Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Clone.html
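Aurora cloning uses a copy-on-write protocol, so each nightly test clone is created quickly and only pays for storage as its data diverges from production. A sketch of the boto3 parameters such a nightly job would build (cluster identifiers are hypothetical; no API call is made here):

```python
# Parameters for cloning an Aurora cluster via
# RDS restore_db_cluster_to_point_in_time with copy-on-write.
# Identifiers are placeholders for this scenario.
def clone_params(source_cluster, clone_id):
    return {
        "DBClusterIdentifier": clone_id,
        "SourceDBClusterIdentifier": source_cluster,
        "RestoreType": "copy-on-write",      # clone, not a full restore
        "UseLatestRestorableTime": True,     # clone from the latest data
    }

params = clone_params("prod-aurora-cluster", "test-env-01")
# In a real nightly job, repeated for each of the 12 test environments:
# boto3.client("rds").restore_db_cluster_to_point_in_time(**params)
```

Deleting the previous night's clones before recreating them keeps the cost limited to the changed pages plus the clone instances themselves.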
Question 195
A business recently migrated an on-premises Oracle database to Amazon Aurora PostgreSQL.
After the migration, the organization observed that the application's response time slows substantially every day around 3:00 PM. The firm has determined that the problem lies with the database, not the application.
Which set of steps should the Database Specialist take to identify the problematic PostgreSQL query most efficiently?
Explanation:
https://aws.amazon.com/blogs/database/optimizing-and-tuning-queries-in-amazon-rds-postgresql-based-on-native-and-external-tools/
"AWS recently released a feature called Amazon RDS Performance Insights, which provides an easy-to-understand dashboard for detecting performance problems in terms of load."
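Beyond the dashboard, Performance Insights data can be queried programmatically to isolate the top SQL by database load in the daily 3:00 PM window. A sketch of the parameters for such a query (the resource identifier is a placeholder; the real call is `boto3.client("pi").get_resource_metrics(**params)`):

```python
from datetime import datetime, timezone

# Sketch of a Performance Insights query that ranks SQL statements by
# their contribution to database load (db.load.avg grouped by db.sql)
# around the daily slowdown. The Identifier is a hypothetical
# DbiResourceId, and the dates are illustrative.
params = {
    "ServiceType": "RDS",
    "Identifier": "db-ABC123EXAMPLE",
    "StartTime": datetime(2023, 5, 1, 14, 30, tzinfo=timezone.utc),
    "EndTime": datetime(2023, 5, 1, 15, 30, tzinfo=timezone.utc),
    "PeriodInSeconds": 60,
    "MetricQueries": [
        {"Metric": "db.load.avg", "GroupBy": {"Group": "db.sql"}}
    ],
}
```

Grouping load by `db.sql` surfaces the exact statement responsible for the spike, which is far more efficient than scanning raw PostgreSQL logs.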
Question 196
A Database Specialist is constructing a new Amazon Neptune DB cluster and tries to load data from Amazon S3 using the Neptune bulk loader API. The Database Specialist is confronted with the following error message:
"Unable to establish a connection to the S3 endpoint. The source URL is s3://mybucket/graphdata/ and the region code is us-east-1. Kindly verify your S3 configuration."
Which of the following activities should the Database Specialist take to resolve the issue? (Select two.)
Explanation:
https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load-tutorial-IAM.html
https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load-data.html
“An IAM role for the Neptune DB instance to assume that has an IAM policy that allows access to the data files in the S3 bucket. The policy must grant Read and List permissions.” “An Amazon S3 VPC endpoint. For more information, see the Creating an Amazon S3 VPC Endpoint section.”
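The Read and List permissions the quote requires map to `s3:GetObject` and `s3:ListBucket` on the source bucket. A sketch of that IAM policy, built as a Python dict ("mybucket" is the placeholder name from the error message):

```python
import json

# Sketch of the IAM policy the Neptune DB cluster's loader role needs:
# Read (GetObject) on the objects and List (ListBucket) on the bucket.
# The bucket name is the placeholder from this question's error message.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::mybucket",      # ListBucket applies to the bucket
            "arn:aws:s3:::mybucket/*",    # GetObject applies to the objects
        ],
    }],
}
policy_json = json.dumps(policy, indent=2)
```

Together with an S3 VPC gateway endpoint (Neptune has no public route to S3 from inside the VPC), this resolves the connection error.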
Question 197
A database specialist at an ecommerce firm has been tasked with designing a reporting dashboard that visualizes crucial business KPIs derived from the company's primary production database running on Amazon Aurora. The dashboard should be able to read data within 100 milliseconds after an update.
The Database Specialist must conduct an audit of the Aurora DB cluster's present setup and provide a cost-effective alternative. The solution must support the unexpected read demand generated by the reporting dashboard without impairing the DB cluster's write availability and performance.
Which solution satisfies these criteria?
Explanation:
Question 198
A corporation wants to migrate a 1 TB Oracle database to an Amazon Aurora PostgreSQL DB cluster. The company's database specialist noticed that the Oracle database stores 100 GB of large binary objects (LOBs) across many tables. The LOBs are up to 500 MB in size, with an average of 350 MB. The Database Specialist chose AWS DMS to transfer the data using the largest replication instance.
How should the Database Specialist optimize the database migration with AWS DMS?
Explanation:
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html#CHAP_BestPractices.LOBS
"AWS DMS migrates LOB data in two phases:
1. AWS DMS creates a new row in the target table and populates the row with all data except the associated LOB value.
2. AWS DMS updates the row in the target table with the LOB data."
This means two tasks would be needed, one per phase, and limited LOB mode should be used for best performance.
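Limited LOB mode is configured in the task's TargetMetadata settings. A sketch of that fragment, built as a Python dict (the values are illustrative; `LobMaxSize` is in KB and must cover the largest LOB so nothing is truncated):

```python
# Sketch of AWS DMS task settings enabling limited LOB mode.
# LobMaxSize is expressed in KB; 512000 KB (500 MB) covers the largest
# LOB in this scenario so no LOB data is truncated during migration.
task_settings = {
    "TargetMetadata": {
        "SupportLobs": True,
        "FullLobMode": False,         # full LOB mode is slower (chunked)
        "LimitedSizeLobMode": True,   # preallocates and bulk-loads LOBs
        "LobMaxSize": 512000,         # KB; LOBs above this are truncated
    }
}
```

Limited LOB mode is faster than full LOB mode because DMS preallocates memory and loads each LOB in one pass instead of chunking it.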
Question 199
A manufacturing firm's website uses an Amazon Aurora PostgreSQL database cluster.
Which settings will result in the LEAST amount of downtime for the application during failover?
(Select three.)
Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.BestPractices.html
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.cluster-cache-mgmt.html
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.BestPractices.html#AuroraPostgreSQL.BestPractices.FastFailover.TCPKeepalives
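One of the linked fast-failover practices is setting aggressive TCP keepalives so the application drops dead connections quickly instead of waiting minutes for the OS default timeout. A runnable sketch of those socket options (the specific timing values are illustrative, not prescriptive):

```python
import socket

# Sketch of aggressive TCP keepalive settings, in the spirit of the
# Aurora PostgreSQL fast-failover guidance. Values are illustrative.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
if hasattr(socket, "TCP_KEEPIDLE"):  # Linux-specific constants
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 1)   # probe after 1 s idle
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 1)  # 1 s between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 2)    # drop after 2 failed probes
keepalive_on = sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
sock.close()
```

Combined with cluster cache management (so the promoted replica's buffer cache is already warm) and low client-side DNS TTLs, this minimizes the downtime the application sees during a failover.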
Question 200
A company recently migrated its line-of-business (LOB) application to AWS. The application uses an Amazon RDS for SQL Server DB instance as its database engine.
The company must set up cross-Region disaster recovery for the application. The company needs a solution with the lowest possible RPO and RTO.
Which solution will meet these requirements?
Explanation:
https://aws.amazon.com/blogs/database/cross-region-disaster-recovery-of-amazon-rds-for-sql-server/
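The linked blog describes using a cross-Region read replica for RDS for SQL Server, which gives the lowest RPO (continuous asynchronous replication) and RTO (promote the replica) of the available options. A sketch of the boto3 parameters such a setup would use (identifiers, account ID, and Regions are placeholders; no API call is made here):

```python
# Sketch of the parameters for creating a cross-Region read replica of
# an RDS for SQL Server instance. All identifiers are hypothetical; in
# a real setup the call is made from the DR Region:
# boto3.client("rds", region_name="us-west-2") \
#      .create_db_instance_read_replica(**replica_params)
replica_params = {
    "DBInstanceIdentifier": "lob-sqlserver-dr",
    "SourceDBInstanceIdentifier":
        "arn:aws:rds:us-east-1:123456789012:db:lob-sqlserver",
    "SourceRegion": "us-east-1",  # lets boto3 build the presigned URL
}
```

During a disaster, promoting the replica in the DR Region makes it a standalone writable instance, keeping both RPO and RTO in the minutes range rather than the hours a snapshot-copy strategy would need.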