
Amazon DBS-C01 Practice Test - Questions Answers, Page 31

A media company hosts a highly available news website on AWS but needs to improve its page load time, especially during very popular news releases. Once a news page is published, it is very unlikely to change unless an error is identified. The company has decided to use Amazon ElastiCache.

What is the recommended strategy for this use case?

A. Use ElastiCache for Memcached with write-through and long time to live (TTL)
B. Use ElastiCache for Redis with lazy loading and short time to live (TTL)
C. Use ElastiCache for Memcached with lazy loading and short time to live (TTL)
D. Use ElastiCache for Redis with write-through and long time to live (TTL)
Suggested answer: A

Explanation:

The recommended strategy for this use case is option A: use ElastiCache for Memcached with write-through and long time to live (TTL).

Amazon ElastiCache is a fully managed in-memory data store service that supports two open source engines: Memcached and Redis. Amazon ElastiCache can be used to improve the performance and scalability of web applications by caching frequently accessed data in memory, reducing the load and latency of database queries.

Memcached and Redis have different features and use cases. Memcached is a simple, high-performance, distributed caching system that supports a large number of concurrent connections and large object sizes. Redis is an advanced, feature-rich, in-memory data structure store that supports data persistence, replication, transactions, pub/sub, Lua scripting, and various data types.

For this use case, Memcached is more suitable than Redis because the news website does not need the advanced features of Redis, such as data persistence or replication. The news website only needs a fast and simple caching solution that can handle high traffic and large objects.

Write-through and lazy loading are two common caching strategies that determine when and how data is written to the cache. Write-through is a strategy that writes data to the cache whenever it is written to the database. Lazy loading is a strategy that writes data to the cache only when it is requested for the first time.

For this use case, write-through is more suitable than lazy loading because the news website needs to improve its page load time, especially during very popular news releases. Write-through ensures that the cache always has the most up-to-date data and avoids cache misses or stale data. Lazy loading may cause cache misses or stale data if the data is not cached or updated in time.

Time to live (TTL) is a parameter that specifies how long an item can remain in the cache before it expires and is deleted. TTL can be used to control the cache size and freshness.

For this use case, long TTL is more suitable than short TTL because the news website has a low probability of changing its data once a news page is published. Long TTL allows the data to stay in the cache longer and reduces the frequency of cache updates or evictions. Short TTL may cause unnecessary cache updates or evictions if the data does not change frequently.

Therefore, option A is the recommended strategy for this use case because it uses ElastiCache for Memcached with write-through and long TTL, which provides a fast and simple caching solution that can handle high traffic and large objects, and ensures that the cache always has the most up-to-date and relevant data.
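
For illustration only, here is a minimal write-through sketch against a Memcached cluster using the pymemcache client. The endpoint, key format, and the in-memory dictionary standing in for the article database are assumptions made for the example, not part of the question.

from pymemcache.client.base import Client

# Hypothetical ElastiCache for Memcached configuration endpoint.
cache = Client(("news-cache.example.cfg.use1.cache.amazonaws.com", 11211))

ONE_DAY = 86400                       # long TTL: a published page rarely changes
article_store: dict[str, str] = {}    # stand-in for the real article database


def publish_article(article_id: str, html: str) -> None:
    # Write-through: write the database first, then the cache, so the cache
    # always holds the latest published version of the page.
    article_store[article_id] = html
    cache.set(f"article:{article_id}", html.encode("utf-8"), expire=ONE_DAY)


def get_article(article_id: str) -> str:
    # Reads are served from the cache; the database is touched only on a miss.
    cached = cache.get(f"article:{article_id}")
    if cached is not None:
        return cached.decode("utf-8")
    html = article_store[article_id]
    cache.set(f"article:{article_id}", html.encode("utf-8"), expire=ONE_DAY)
    return html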

A development team at an international gaming company is experimenting with Amazon DynamoDB to store in-game events for three mobile games. The most popular game hosts a maximum of 500,000 concurrent users, and the least popular game hosts a maximum of 10,000 concurrent users. The average size of an event is 20 KB, and the average user session produces one event each second. Each event is tagged with a time in milliseconds and a globally unique identifier.

The lead developer created a single DynamoDB table for the events with the following schema:

Partition key: game name

Sort key: event identifier

Local secondary index: player identifier

Event time

The tests were successful in a small-scale development environment. However, when deployed to production, new events stopped being added to the table and the logs show DynamoDB failures with the ItemCollectionSizeLimitExceededException error code.

Which design change should a database specialist recommend to the development team?

A. Use the player identifier as the partition key. Use the event time as the sort key. Add a global secondary index with the game name as the partition key and the event time as the sort key.
B. Create two tables. Use the game name as the partition key in both tables. Use the event time as the sort key for the first table. Use the player identifier as the sort key for the second table.
C. Replace the sort key with a compound value consisting of the player identifier collated with the event time, separated by a dash. Add a local secondary index with the player identifier as the sort key.
D. Create one table for each game. Use the player identifier as the partition key. Use the event time as the sort key.
Suggested answer: D

Explanation:

The correct answer is D. Create one table for each game. Use the player identifier as the partition key. Use the event time as the sort key.

The explanation is as follows:

The ItemCollectionSizeLimitExceededException error occurs when an item collection exceeds the 10 GB limit. An item collection is a group of items that have the same partition key value but different sort key values. In this case, the item collection is based on the game name, which has only three possible values. This means that all events for each game are stored in the same item collection, which can easily exceed the 10 GB limit given the high volume and size of events.

To avoid this error, a database specialist should recommend a design change that distributes the events across more partitions and reduces the size of each item collection. Option D achieves this by creating one table for each game, and using the player identifier as the partition key. This way, each event is stored in a separate partition based on the player identifier, and sorted by the event time. This design also supports efficient queries by game, player, and time range.

Option A is incorrect because it still uses a single table for all events, which can cause hot partitions and throttling due to uneven access patterns across games. Also, using the player identifier as the partition key can result in many small partitions that are underutilized and waste provisioned capacity. Adding a global secondary index with the game name as the partition key and the event time as the sort key does not solve the problem of the item collection size limit, because global secondary indexes have their own item collections that are subject to the same limit.

Option B is incorrect because it creates two tables with redundant data and increases storage costs. Also, using the game name as the partition key in both tables does not solve the problem of item collection size limit, as explained above.

Option C is incorrect because it still uses a single table for all events, which can cause hot partitions and throttling due to uneven access patterns across games. Also, replacing the sort key with a compound value consisting of the player identifier collated with the event time does not reduce the size of each item collection, because each event still has a unique sort key value. Adding a local secondary index with the player identifier as the sort key does not solve the problem of the item collection size limit, because local secondary indexes share the same item collections as their base table.
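
As a rough illustration of option D, the boto3 sketch below creates one table per game, keyed on the player identifier and sorted by event time. The game names, attribute names, and on-demand billing mode are assumptions made only to keep the example short.

import boto3

dynamodb = boto3.client("dynamodb")

# One table per game (hypothetical names), so no single item collection has to
# hold every event for a game.
for game in ("game-alpha", "game-beta", "game-gamma"):
    dynamodb.create_table(
        TableName=f"{game}-events",
        AttributeDefinitions=[
            {"AttributeName": "player_id", "AttributeType": "S"},
            {"AttributeName": "event_time", "AttributeType": "N"},
        ],
        KeySchema=[
            {"AttributeName": "player_id", "KeyType": "HASH"},    # partition key
            {"AttributeName": "event_time", "KeyType": "RANGE"},  # sort key
        ],
        BillingMode="PAY_PER_REQUEST",
    )

With this layout, a query for one player's events in a time range needs only the partition key plus a range condition on the sort key.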

A company has a reporting application that runs on an Amazon EC2 instance in an isolated developer account on AWS. The application needs to retrieve data during non-peak company hours from an Amazon Aurora PostgreSQL database that runs in the company's production account. The company's security team requires that access to production resources complies with AWS security best practices.

A database administrator needs to provide the reporting application with access to the production database. The company has already configured VPC peering between the production account and the developer account. The company has also updated the route tables in both accounts with the necessary entries to correctly set up VPC peering.

What must the database administrator do to finish providing connectivity to the reporting application?

A. Add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432. Add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.
B. Add an outbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432. Add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.
C. Add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on all TCP ports. Add an inbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.
D. Add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432. Add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on all TCP ports.
Suggested answer: A

Explanation:

The correct answer is A: add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432, and add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.

To allow the reporting application to access the production database, the database administrator needs to configure the security group rules for both the database and the EC2 instance. The rules must allow traffic between the peered VPCs on the port that the database uses, which is 5432 for PostgreSQL.

Option A is correct because it adds an inbound rule to the database security group that allows access from the developer account VPC CIDR on port 5432, so the database can accept connections from the EC2 instance in the peered VPC. It also adds an outbound rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432, so the EC2 instance can initiate connections to the database in the peered VPC.

Option B is incorrect because it adds an outbound rule to the database security group, which is not necessary: the database does not need to initiate connections to the EC2 instance, only accept them. More importantly, option B never adds an inbound rule to the database security group, so the database cannot accept connections from the EC2 instance.

Option C is incorrect because it adds an inbound rule to the database security group that allows access from the developer account VPC CIDR on all TCP ports, which is too permissive and violates the principle of least privilege. It also adds an inbound rule to the EC2 security group for the production account VPC CIDR on port 5432, which is unnecessary and does not help with connectivity.

Option D is incorrect because it adds an outbound rule to the EC2 security group that allows access to the production account VPC CIDR on all TCP ports, which is too permissive and violates the principle of least privilege.
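
A hedged boto3 sketch of the two rules option A describes follows; the security group IDs and VPC CIDR blocks are placeholders, and each call is assumed to run with the credentials of the account that owns the security group.

import boto3

ec2 = boto3.client("ec2")

DB_SG = "sg-0123456789abcdef0"    # production: Aurora PostgreSQL security group (placeholder)
APP_SG = "sg-0fedcba9876543210"   # developer: reporting EC2 instance security group (placeholder)
DEV_VPC_CIDR = "10.1.0.0/16"      # developer account VPC CIDR (placeholder)
PROD_VPC_CIDR = "10.0.0.0/16"     # production account VPC CIDR (placeholder)

# Production account: allow inbound PostgreSQL traffic from the peered developer VPC.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": DEV_VPC_CIDR, "Description": "reporting app"}],
    }],
)

# Developer account: allow outbound PostgreSQL traffic to the production VPC.
ec2.authorize_security_group_egress(
    GroupId=APP_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": PROD_VPC_CIDR, "Description": "Aurora PostgreSQL"}],
    }],
)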


A company has an application environment that deploys Amazon Aurora PostgreSQL databases as part of its CI/CD process that uses AWS CloudFormation. The company's database administrator has received reports of performance issues from the resulting database but has no way to investigate the issues.

Which combination of changes must the database administrator make to the database deployment to automate the collection of performance data? (Select TWO.)

A. Turn on Amazon DevOps Guru for the Aurora database resources in the CloudFormation template.
B. Turn on AWS CloudTrail in each AWS account.
C. Turn on and configure AWS Config for all Aurora PostgreSQL databases.
D. Update the CloudFormation template to enable Amazon CloudWatch monitoring on the Aurora PostgreSQL DB instances.
E. Update the CloudFormation template to turn on Performance Insights for Aurora PostgreSQL.
Suggested answer: D, E
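
Options D and E amount to turning on Enhanced Monitoring/CloudWatch metrics and Performance Insights for the DB instances, which in CloudFormation means setting the matching AWS::RDS::DBInstance properties (for example EnablePerformanceInsights, MonitoringInterval, and MonitoringRoleArn). Purely as an illustration, the boto3 sketch below enables the same settings on an existing instance; the instance identifier and monitoring role ARN are placeholders.

import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="aurora-pg-instance-1",   # placeholder
    EnablePerformanceInsights=True,                # option E: Performance Insights
    PerformanceInsightsRetentionPeriod=7,          # days
    MonitoringInterval=60,                         # option D: Enhanced Monitoring every 60 s
    MonitoringRoleArn="arn:aws:iam::111122223333:role/rds-monitoring-role",  # placeholder
    ApplyImmediately=True,
)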

A database specialist needs to move an Amazon RDS DB instance from one AWS account to another AWS account.

Which solution will meet this requirement with the LEAST operational effort?

A. Use AWS Database Migration Service (AWS DMS) to migrate the DB instance from the source AWS account to the destination AWS account.
B. Create a DB snapshot of the DB instance. Share the snapshot with the destination AWS account. Create a new DB instance by restoring the snapshot in the destination AWS account.
C. Create a Multi-AZ deployment for the DB instance. Create a read replica for the DB instance in the source AWS account. Use the read replica to replicate the data into the DB instance in the destination AWS account.
D. Use AWS DataSync to back up the DB instance in the source AWS account. Use AWS Resource Access Manager (AWS RAM) to restore the backup in the destination AWS account.
Suggested answer: B

Explanation:

Option B is correct because it is the simplest and fastest way to migrate an Amazon RDS DB instance to another AWS account. Creating a DB snapshot of the DB instance captures the data and configuration of the DB instance at a point in time. Sharing the snapshot with the destination AWS account allows the other account to access and restore the snapshot. Creating a new DB instance by restoring the snapshot in the destination AWS account creates a copy of the original DB instance with the same data and configuration. This solution requires minimal operational effort and downtime.
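
A rough boto3 sketch of option B follows; the instance, snapshot, and account identifiers are placeholders, and an encrypted snapshot would additionally require sharing the KMS key with the destination account.

import boto3

# --- Source account ---
src_rds = boto3.client("rds")
src_rds.create_db_snapshot(
    DBInstanceIdentifier="app-db",             # placeholder
    DBSnapshotIdentifier="app-db-transfer",
)
src_rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="app-db-transfer")

# Share the snapshot with the destination account (placeholder account ID).
src_rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="app-db-transfer",
    AttributeName="restore",
    ValuesToAdd=["999999999999"],
)

# --- Destination account (run with that account's credentials) ---
dst_rds = boto3.client("rds")
dst_rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="app-db-copy",
    DBSnapshotIdentifier="arn:aws:rds:us-east-1:111111111111:snapshot:app-db-transfer",  # shared snapshot ARN (placeholder)
    DBInstanceClass="db.r6g.large",            # placeholder
)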

An advertising company is developing a backend for a bidding platform. The company needs a cost-effective datastore solution that will accommodate a sudden increase in the volume of write transactions. The database also needs to make data changes available in a near real-time data stream.

Which solution will meet these requirements?

A. Amazon Aurora MySQL Multi-AZ DB cluster
B. Amazon Keyspaces (for Apache Cassandra)
C. Amazon DynamoDB table with DynamoDB auto scaling
D. Amazon DocumentDB (with MongoDB compatibility) cluster with a replica instance in a second Availability Zone
Suggested answer: C
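
Option C fits because DynamoDB with auto scaling absorbs sudden increases in write volume cost-effectively, and DynamoDB Streams can expose data changes as a near-real-time stream. A rough boto3 sketch follows; the table name, key, and capacity numbers are placeholder assumptions.

import boto3

dynamodb = boto3.client("dynamodb")
autoscaling = boto3.client("application-autoscaling")

# Bids table with DynamoDB Streams enabled for near-real-time change capture.
dynamodb.create_table(
    TableName="bids",
    AttributeDefinitions=[{"AttributeName": "bid_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "bid_id", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)

# DynamoDB auto scaling: target tracking on write capacity utilization.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/bids",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5,
    MaxCapacity=40000,
)
autoscaling.put_scaling_policy(
    PolicyName="bids-write-scaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/bids",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)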

A company has a hybrid environment in which a VPC connects to an on-premises network through an AWS Site-to-Site VPN connection. The VPC contains an application that is hosted on Amazon EC2 instances. The EC2 instances run in private subnets behind an Application Load Balancer (ALB) that is associated with multiple public subnets. The EC2 instances need to securely access an Amazon DynamoDB table.

Which solution will meet these requirements?

A. Use the internet gateway of the VPC to access the DynamoDB table. Use the ALB to route the traffic to the EC2 instances.
B. Add a NAT gateway in one of the public subnets of the VPC. Configure the security groups of the EC2 instances to access the DynamoDB table through the NAT gateway.
C. Use the Site-to-Site VPN connection to route all DynamoDB network traffic through the on-premises network infrastructure to access the EC2 instances.
D. Create a VPC endpoint for DynamoDB. Assign the endpoint to the route table of the private subnets that contain the EC2 instances.
Suggested answer: D

Explanation:

Option D is correct because it meets the requirements of securely accessing a DynamoDB table from EC2 instances in a hybrid environment. A VPC endpoint for DynamoDB enables EC2 instances in a VPC to use their private IP addresses to access DynamoDB with no exposure to the public internet. The EC2 instances do not require public IP addresses, and do not need an internet gateway, a NAT device, or a virtual private gateway in the VPC. The endpoint policy and the security groups of the EC2 instances can control access to DynamoDB. Traffic between the VPC and DynamoDB does not leave the Amazon network. Assigning the endpoint to the route table of the private subnets that contain the EC2 instances ensures that any requests to DynamoDB from those subnets are routed to the private endpoint within the Amazon network.
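
A minimal boto3 sketch of option D follows; the VPC ID, route table ID, and Region in the service name are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for DynamoDB, attached to the private subnets' route table.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                    # placeholder
    ServiceName="com.amazonaws.us-east-1.dynamodb",   # must match the VPC's Region
    RouteTableIds=["rtb-0123456789abcdef0"],          # private subnets' route table (placeholder)
)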

A web-based application uses Amazon DocumentDB (with MongoDB compatibility) as its underlying data store. Sufficient access control is in place, but a database specialist wants to be able to review logs if the primary DocumentDB database is deleted.

Which combination of steps should the database specialist take to meet this requirement? (Select TWO.)

A. Set the audit_logs cluster parameter to enabled.
B. Enable DocumentDB log export to Amazon CloudWatch Logs.
C. Enable Enhanced Monitoring for DocumentDB.
D. Enable AWS CloudTrail for DocumentDB.
E. Use AWS Config to monitor the state of DocumentDB.
Suggested answer: A, B

Explanation:

Option A is correct because it sets the audit_logs cluster parameter to enabled. This enables auditing on the DocumentDB cluster, which records events that were performed in the cluster, such as successful and failed authentication attempts, dropping a collection in a database, or creating an index. By enabling auditing, the database specialist can review the logs to see who deleted the primary DocumentDB database and when, and what other actions were taken on the cluster.

Option B is correct because it enables DocumentDB log export to Amazon CloudWatch Logs. This allows the DocumentDB cluster to export its auditing records (JSON documents) to Amazon CloudWatch Logs, where they can be analyzed, monitored, and archived. By enabling log export, the database specialist can access the logs even if the primary DocumentDB database is deleted, because they are stored in a separate service.
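
A hedged boto3 sketch of both steps follows; the cluster and parameter group names are placeholders, and depending on the parameter's apply type the cluster parameter group change may only take effect after it is associated with the cluster and the instances are rebooted.

import boto3

docdb = boto3.client("docdb")

# Step 1 (option A): enable auditing via the cluster parameter group.
docdb.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="news-docdb-params",   # placeholder
    Parameters=[{
        "ParameterName": "audit_logs",
        "ParameterValue": "enabled",
        "ApplyMethod": "immediate",
    }],
)

# Step 2 (option B): export the audit log to Amazon CloudWatch Logs.
docdb.modify_db_cluster(
    DBClusterIdentifier="news-docdb-cluster",           # placeholder
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["audit"]},
    ApplyImmediately=True,
)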

A company's application team needs to select an AWS managed database service to store application and user data. The application team is familiar with MySQL but is open to new solutions. The application and user data is stored in 10 tables and is de-normalized. The application will access this data through an API layer using a unique ID in each table. The company expects the traffic to be light at first, but the traffic will increase to thousands of transactions each second within the first year. The database service must support active reads and writes in multiple AWS Regions at the same time. Query response times need to be less than 100 ms.

Which AWS database solution will meet these requirements?

A. Deploy an Amazon RDS for MySQL environment in each Region and leverage AWS Database Migration Service (AWS DMS) to set up multi-Region bidirectional replication.
B. Deploy an Amazon Aurora MySQL global database with write forwarding turned on.
C. Deploy an Amazon DynamoDB database with global tables.
D. Deploy an Amazon DocumentDB global cluster across multiple Regions.
Suggested answer: C
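
Option C fits the requirements: DynamoDB global tables provide active-active reads and writes across multiple Regions, and key-based lookups on a de-normalized schema typically complete in single-digit milliseconds. A rough sketch follows, assuming the current (2019.11.21) global tables version; the table name, key, and Regions are placeholders.

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Base table; global tables (2019.11.21) require DynamoDB Streams with
# new-and-old images enabled.
dynamodb.create_table(
    TableName="user-data",
    AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)
dynamodb.get_waiter("table_exists").wait(TableName="user-data")

# Add a replica in a second Region to make the table global (active-active).
dynamodb.update_table(
    TableName="user-data",
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)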

A company is running a mobile app that has a backend database in Amazon DynamoDB. The app experiences sudden increases and decreases in activity throughout the day. The company's operations team notices that DynamoDB read and write requests are being throttled at different times, resulting in a negative customer experience.

Which solution will solve the throttling issue without requiring changes to the app?

A. Add a DynamoDB table in a secondary AWS Region. Populate the additional table by using DynamoDB Streams.
B. Deploy an Amazon ElastiCache cluster in front of the DynamoDB table.
C. Use on-demand capacity mode for the DynamoDB table.
D. Use DynamoDB Accelerator (DAX).
Suggested answer: C

Explanation:

Option C is correct because it solves the throttling issue without requiring changes to the app. On-demand capacity mode is a flexible billing option for DynamoDB that automatically accommodates your workload as it ramps up or down. With on-demand mode, you pay per request for the data reads and writes your application performs on your tables, and you don't need to specify how much read and write throughput you expect. On-demand mode can handle sudden increases and decreases in activity without throttling, as long as the request rate does not exceed the default table quotas. To use on-demand mode, you only need to update the table settings in the AWS Management Console or by using the AWS SDK. You don't need to modify any application logic, because on-demand mode is compatible with existing DynamoDB API calls.
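
Switching an existing table to on-demand is a single table-level change, shown below as a boto3 sketch (the table name is a placeholder); the app's read and write calls are unchanged.

import boto3

dynamodb = boto3.client("dynamodb")

# Switch the existing table from provisioned to on-demand capacity mode.
dynamodb.update_table(
    TableName="mobile-app-events",     # placeholder
    BillingMode="PAY_PER_REQUEST",
)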
