Amazon DBS-C01 Practice Test - Questions Answers, Page 28

A company is planning to migrate a 40 TB Oracle database to an Amazon Aurora PostgreSQL DB cluster by using a single AWS Database Migration Service (AWS DMS) task within a single replication instance. During early testing, AWS DMS is not scaling to the company's needs. Full load and change data capture (CDC) are taking days to complete.

The source database server and the target DB cluster have enough network bandwidth and CPU bandwidth for the additional workload. The replication instance has enough resources to support the replication. A database specialist needs to improve database performance, reduce data migration time, and create multiple DMS tasks.

Which combination of changes will meet these requirements? (Choose two.)

A. Increase the value of the ParallelLoadThreads parameter in the DMS task settings for the tables.
B. Use a smaller set of tables with each DMS task. Set the MaxFullLoadSubTasks parameter to a higher value.
C. Use a smaller set of tables with each DMS task. Set the MaxFullLoadSubTasks parameter to a lower value.
D. Use parallel load with different data boundaries for larger tables.
E. Run the DMS tasks on a larger instance class. Increase local storage on the instance.
Suggested answer: B, D

Explanation:

Explanation from Amazon documents: AWS Database Migration Service (AWS DMS) is a service that helps you migrate data from one data source to another. AWS DMS supports full load and change data capture (CDC) modes, which enable you to migrate data with minimal downtime. AWS DMS also supports parallel load, which allows you to load data from multiple tables or partitions concurrently.

To improve database performance, reduce data migration time, and create multiple DMS tasks, the database specialist should make the following combination of changes:

Use a smaller set of tables with each DMS task, and set the MaxFullLoadSubTasks parameter to a higher value. This splits the migration workload into smaller, more manageable units and increases the parallelism of the full load process. The MaxFullLoadSubTasks parameter specifies the maximum number of tables that are loaded in parallel within each DMS task, so raising it increases full load throughput and performance.

Use parallel load with different data boundaries for larger tables. This divides the larger tables into smaller segments based on a partition key or a range of values and loads the segments in parallel. Parallel load can significantly reduce the migration time for large tables.

Therefore, options B and D are the correct combination of changes. Option A is incorrect because increasing the ParallelLoadThreads parameter alone will not significantly improve performance or reduce migration time: it specifies the number of threads used to load data from a single table or partition, so raising it may increase CPU utilization and network bandwidth consumption on the source and target databases without increasing the parallelism of the full load across tables. Option C is incorrect because lowering MaxFullLoadSubTasks decreases the parallelism and performance of the full load. Option E is incorrect because a larger instance class and more local storage do not address the root cause of the performance issue, which is the lack of parallelism and partitioning of the large tables.
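As an illustration of options B and D, full-load parallelism is raised in the task settings, and per-table parallel load is declared in the table mappings. The sketch below assumes a hypothetical SALES.ORDERS table segmented on its ORDER_ID column; the names, boundary values, and the MaxFullLoadSubTasks value of 49 (the documented maximum) are illustrative only.

Task settings excerpt:

{
  "FullLoadSettings": {
    "MaxFullLoadSubTasks": 49
  }
}

Table mapping rule that loads segments of a large table in parallel:

{
  "rules": [
    {
      "rule-type": "table-settings",
      "rule-id": "1",
      "rule-name": "parallel-load-orders",
      "object-locator": {
        "schema-name": "SALES",
        "table-name": "ORDERS"
      },
      "parallel-load": {
        "type": "ranges",
        "columns": ["ORDER_ID"],
        "boundaries": [["10000000"], ["20000000"], ["30000000"]]
      }
    }
  ]
}

With the ranges type, AWS DMS creates one segment per boundary plus a final segment for the remaining rows and loads the segments concurrently.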

A healthcare company is running an application on Amazon EC2 in a public subnet and using Amazon DocumentDB (with MongoDB compatibility) as the storage layer. An audit reveals that the traffic between the application and Amazon DocumentDB is not encrypted and that the DocumentDB cluster is not encrypted at rest. A database specialist must correct these issues and ensure that the data in transit and the data at rest are encrypted.

Which actions should the database specialist take to meet these requirements? (Select TWO.)

A. Download the SSH RSA public key for Amazon DocumentDB. Update the application configuration to use the instance endpoint instead of the cluster endpoint and run queries over SSH.
B. Download the SSL .pem public key for Amazon DocumentDB. Add the key to the application package and make sure the application is using the key while connecting to the cluster.
C. Create a snapshot of the unencrypted cluster. Restore the unencrypted snapshot as a new cluster with the --storage-encrypted parameter set to true. Update the application to point to the new cluster.
D. Create an Amazon DocumentDB VPC endpoint to prevent the traffic from going to the Amazon DocumentDB public endpoint. Set a VPC endpoint policy to allow only the application instance's security group to connect.
E. Activate encryption at rest using the modify-db-cluster command with the --storage-encrypted parameter set to true. Set the security group of the cluster to allow only the application instance's security group to connect.
Suggested answer: B, C
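For reference, a minimal sketch of the two actions using the AWS CLI and the legacy mongo shell. The cluster name appdb, the endpoint, and the KMS key ARN are hypothetical; the CA bundle URL is the one the Amazon DocumentDB documentation has historically referenced.

# Encrypt data in transit: download the public CA bundle and connect with TLS
wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
mongo --ssl --sslCAFile rds-combined-ca-bundle.pem \
  --host appdb.cluster-xxxxxxxxxxxx.us-east-1.docdb.amazonaws.com:27017 \
  --username appuser --password

# Encrypt data at rest: snapshot the unencrypted cluster, then restore it as a
# new cluster with encryption enabled (encryption can only be set at creation)
aws docdb create-db-cluster-snapshot \
  --db-cluster-identifier appdb \
  --db-cluster-snapshot-identifier appdb-snap

aws docdb restore-db-cluster-from-snapshot \
  --db-cluster-identifier appdb-encrypted \
  --snapshot-identifier appdb-snap \
  --engine docdb \
  --kms-key-id arn:aws:kms:us-east-1:111122223333:key/EXAMPLE

Specifying a KMS key on restore is what enables storage encryption for the new cluster; the application is then repointed at the appdb-encrypted cluster endpoint.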

A database specialist is planning to migrate a 4 TB Microsoft SQL Server DB instance from on premises to Amazon RDS for SQL Server. The database is primarily used for nightly batch processing.

Which RDS storage option meets these requirements MOST cost-effectively?

A. General Purpose SSD storage
B. Provisioned IOPS storage
C. Magnetic storage
D. Throughput Optimized hard disk drives (HDD)
Suggested answer: A

Explanation:

General Purpose SSD storage is a cost-effective storage option that is ideal for a broad range of workloads running on medium-sized DB instances, including development, testing, and batch-oriented environments. Because the database is primarily used for nightly batch processing, it does not require the high I/O performance and low latency that Provisioned IOPS storage offers. Magnetic storage is not recommended for new storage needs and has lower storage limits than General Purpose SSD and Provisioned IOPS SSD, and Throughput Optimized HDD is not an available RDS storage type. Therefore, General Purpose SSD storage meets the requirements most cost-effectively.
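For illustration, the storage type is selected when the DB instance is created. A sketch with hypothetical identifiers, sized for the 4 TB database:

aws rds create-db-instance \
  --db-instance-identifier batch-sqlserver \
  --db-instance-class db.m5.2xlarge \
  --engine sqlserver-se \
  --master-username admin \
  --master-user-password '<password>' \
  --allocated-storage 4500 \
  --storage-type gp2 \
  --license-model license-included

The --storage-type value gp2 selects General Purpose SSD; io1 would select Provisioned IOPS at a higher cost.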

An online retailer uses Amazon DynamoDB for its product catalog and order data. Some popular items have led to frequently accessed keys in the data, and the company is using DynamoDB Accelerator (DAX) as the caching solution to serve the frequently accessed keys. As the number of popular products grows, the company realizes that more items need to be cached. The company observes a high cache miss rate and needs a solution to address this issue.

What should a database specialist do to accommodate the changing requirements for DAX?

A. Increase the number of nodes in the existing DAX cluster.
B. Create a new DAX cluster with more nodes. Change the DAX endpoint in the application to point to the new cluster.
C. Create a new DAX cluster using a larger node type. Change the DAX endpoint in the application to point to the new cluster.
D. Modify the node type in the existing DAX cluster.
Suggested answer: C

Explanation:

Explanation from Amazon documents: The cache miss rate is the percentage of read requests that are not satisfied by the DAX cache and have to be forwarded to DynamoDB. A high cache miss rate indicates that the DAX cluster does not have enough memory to hold all the frequently accessed items. Increasing the number of nodes in the existing cluster (option A) or creating a new cluster with more nodes (option B) increases read throughput but not the amount of data that can be cached, because each DAX node maintains a cache of the same working set rather than a distinct partition of it. Modifying the node type in the existing DAX cluster (option D) is not possible, because DAX does not support changing the node type of an existing cluster in place. Therefore, the best option is to create a new DAX cluster using a larger node type (option C), which provides more memory per node and allows more items to be cached. The application then needs to be updated to point to the new cluster's endpoint, which can be done with minimal disruption.
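For illustration, the replacement cluster is created with the larger node type, and the application's DAX endpoint is switched over. A sketch with hypothetical names and a hypothetical choice of node type:

# Create the replacement DAX cluster with a larger node type
aws dax create-cluster \
  --cluster-name product-cache-xl \
  --node-type dax.r5.2xlarge \
  --replication-factor 3 \
  --iam-role-arn arn:aws:iam::111122223333:role/DAXServiceRole \
  --subnet-group-name product-cache-subnets

# Retrieve the new cluster's discovery endpoint for the application configuration
aws dax describe-clusters \
  --cluster-names product-cache-xl \
  --query "Clusters[0].ClusterDiscoveryEndpoint"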

A database specialist is designing the database for a software-as-a-service (SaaS) version of an employee information application. In the current architecture, the change history of employee records is stored in a single table in an Amazon RDS for Oracle database. Triggers on the employee table populate the history table with historical records.

This architecture has two major challenges. First, there is no way to guarantee that the records have not been changed in the history table. Second, queries on the history table are slow because of the large size of the table and the need to run the queries against a large subset of data in the table.

The database specialist must design a solution that prevents modification of the historical records. The solution also must maximize the speed of the queries.

Which solution will meet these requirements?

A. Migrate the current solution to an Amazon DynamoDB table. Use DynamoDB Streams to keep track of changes. Use DynamoDB Accelerator (DAX) to improve query performance.
B. Write employee record history to Amazon Quantum Ledger Database (Amazon QLDB) for historical records and to an Amazon OpenSearch Service domain for queries.
C. Use Amazon Aurora PostgreSQL to store employee record history in a single table. Use Aurora Auto Scaling to provision more capacity.
D. Build a solution that uses an Amazon Redshift cluster for historical records. Query the Redshift cluster directly as needed.
Suggested answer: B

Explanation:

Explanation from Amazon documents: Amazon Quantum Ledger Database (Amazon QLDB) is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log of all your application changes. Amazon QLDB tracks each and every application data change and maintains a complete and verifiable history of changes over time, which makes it ideal for storing historical records that must be tamper-proof and auditable. Amazon QLDB also supports PartiQL, a SQL-compatible query language that lets you query data using familiar SQL operators.

Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) is a fully managed service that makes it easy to deploy, operate, and scale OpenSearch, an open-source search and analytics engine. It lets you store, search, and analyze large volumes of data quickly and at low cost. You can index the employee record history stored in Amazon QLDB into OpenSearch Service by using QLDB streams, and then use the powerful search and analytics capabilities of OpenSearch Service to run fast, flexible queries on the historical data.

Therefore, option B is the best solution: QLDB prevents modification of the historical records, and OpenSearch Service maximizes the speed of the queries. Option A is not suitable because DynamoDB is a key-value and document database that does not provide a ledger-like, verifiable transaction log. Option C is not suitable because Aurora PostgreSQL is a relational database that does not guarantee immutability of the historical records. Option D is not suitable because Redshift is a data warehouse optimized for analytical queries on large datasets, not for tamper-proof storage and retrieval of individual records.
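As context for why QLDB satisfies the immutability requirement, every committed revision of a document remains queryable through the built-in history() function and is cryptographically verifiable against the ledger digest. A sketch of a PartiQL history query, assuming a hypothetical employee table and a placeholder document ID:

-- All recorded revisions of one employee document, with version and commit time
SELECT h.data, h.metadata.version, h.metadata.txTime
FROM history(employee) AS h
WHERE h.metadata.id = '3Qv67yjXEwB9SjmvkuG6Cp'

The revisions returned by history() are part of the ledger's journal and cannot be updated or deleted by the application.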

An ecommerce company is running Amazon RDS for Microsoft SQL Server. The company is planning to perform testing in a development environment with production data. The development environment and the production environment are in separate AWS accounts. Both environments use AWS Key Management Service (AWS KMS) encrypted databases with both manual and automated snapshots. A database specialist needs to share a KMS encrypted production RDS snapshot with the development account.

Which combination of steps should the database specialist take to meet these requirements? (Select THREE.)

A. Create an automated snapshot. Share the snapshot from the production account to the development account.
B. Create a manual snapshot. Share the snapshot from the production account to the development account.
C. Share the snapshot that is encrypted by using the development account default KMS encryption key.
D. Share the snapshot that is encrypted by using the production account custom KMS encryption key.
E. Allow the development account to access the production account KMS encryption key.
F. Allow the production account to access the development account KMS encryption key.
Suggested answer: B, D, E

Explanation:

Explanation from Amazon documents: To share an encrypted Amazon RDS snapshot with another account, you need to do the following:

1. Create a manual snapshot of the production database. You can't share an automated snapshot directly, but you can copy it to a manual snapshot and then share the copy.
2. Use a custom KMS encryption key for the manual snapshot. You can't share a snapshot that is encrypted using the default KMS key of the source account.
3. Share the snapshot with the development account by specifying the account ID of the target account.
4. Allow the development account to access the custom KMS key of the source account by adding the target account ID to the key policy of the source account's key.
5. In the development account, copy the shared snapshot by using a KMS key of the target account, and restore from the copy.

Therefore, options B, D, and E are the correct steps to meet the requirements. Option A is incorrect because you can't share an automated snapshot. Option C is incorrect because you can't share a snapshot that is encrypted using the default KMS key. Option F is unnecessary because the production account does not need to access the development account's KMS key.
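A minimal sketch of steps B, D, and E with the AWS CLI and a key policy fragment; the snapshot identifier and the development account ID 222233334444 are hypothetical:

# In the production account: share the manual snapshot with the development account
aws rds modify-db-snapshot-attribute \
  --db-snapshot-identifier prod-sqlserver-manual-snap \
  --attribute-name restore \
  --values-to-add 222233334444

Key policy statement on the production account's custom KMS key that allows the development account to use the key:

{
  "Sid": "AllowDevAccountUseOfProductionKey",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::222233334444:root" },
  "Action": [
    "kms:Decrypt",
    "kms:ReEncrypt*",
    "kms:GenerateDataKey*",
    "kms:DescribeKey",
    "kms:CreateGrant"
  ],
  "Resource": "*"
}

The development account can then copy the shared snapshot with its own KMS key and restore a DB instance from the copy.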

A news portal is looking for a data store to store 120 GB of metadata about its posts and comments. The posts and comments are not frequently looked up or updated. However, occasional lookups are expected to be served with single-digit millisecond latency on average.

What is the MOST cost-effective solution?

A. Use Amazon DynamoDB with on-demand capacity mode. Purchase reserved capacity.
B. Use Amazon ElastiCache for Redis for data storage. Turn off cluster mode.
C. Use Amazon S3 Standard-Infrequent Access (S3 Standard-IA) for data storage and use Amazon Athena to query the data.
D. Use Amazon DynamoDB with on-demand capacity mode. Switch the table class to DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA).
Suggested answer: C

Explanation:

Explanation from Amazon documents: Amazon S3 Standard-Infrequent Access (S3 Standard-IA) is a storage class for data that is accessed less frequently but requires rapid access when needed. S3 Standard-IA offers the high durability, throughput, and low latency of S3 Standard, with a low per-GB storage price and a per-GB retrieval fee. It is designed for long-lived and infrequently accessed data, such as backups and long-term data retention.

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. Athena scales automatically, executing queries in parallel, so results are fast even with large datasets and complex queries.

The news portal can store its 120 GB of post and comment metadata, which is not frequently looked up or updated, in S3 Standard-IA and benefit from the low storage cost and high durability of S3. The portal can use Athena to query the data with SQL without setting up any servers or databases, paying only for the amount of data each query scans. Query cost can be optimized further by partitioning, compressing, and converting the data into columnar formats.

Therefore, option C is the most cost-effective solution for this use case. Option A is not cost-effective because reserved capacity applies to provisioned capacity mode, not on-demand mode, and on-demand mode charges for every read and write request regardless of how infrequently the data is accessed. Option B is not suitable because ElastiCache for Redis is an in-memory data store: it provides sub-millisecond latency but costs far more per GB than S3 Standard-IA and is designed for caching frequently accessed data, not long-term storage. Option D lowers storage cost by using the DynamoDB Standard-IA table class but still incurs on-demand request charges for reads and writes, which makes it less cost-effective than S3 Standard-IA with Athena for this workload.
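For illustration, a sketch of querying the metadata in place with Athena, assuming the metadata is stored as JSON objects under a hypothetical s3://news-portal-metadata/posts/ prefix with hypothetical fields:

-- Define an external table over the JSON metadata in S3
CREATE EXTERNAL TABLE posts_metadata (
  post_id string,
  title string,
  comment_count int,
  created_at string
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://news-portal-metadata/posts/';

-- Occasional lookup, billed by the amount of data scanned
SELECT title, comment_count
FROM posts_metadata
WHERE post_id = 'abc123';

Partitioning the data (for example, by month) and converting it to a columnar format such as Parquet reduces the data scanned per query and therefore the cost.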

A company is using AWS CloudFormation to provision and manage infrastructure resources, including a production database. During a recent CloudFormation stack update, a database specialist observed that changes were made to a database resource that is named ProductionDatabase. The company wants to prevent changes to only ProductionDatabase during future stack updates.

Which stack policy will meet this requirement?

A.
B.
C.
D.
Suggested answer: A
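The four answer choices appeared as images in the original and are not reproduced here. For reference, a stack policy that meets the requirement allows all update actions by default and explicitly denies them for the resource whose logical ID is ProductionDatabase; a sketch:

{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "Update:*",
      "Principal": "*",
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": "Update:*",
      "Principal": "*",
      "Resource": "LogicalResourceId/ProductionDatabase"
    }
  ]
}

Because an explicit Deny overrides the Allow, stack updates that would modify, replace, or delete ProductionDatabase are blocked while all other resources remain updatable.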

A company uses an Amazon RDS for PostgreSQL database in the us-east-2 Region. The company wants to have a copy of the database available in the us-west-2 Region as part of a new disaster recovery strategy.

A database architect needs to create the new database. There can be little to no downtime to the source database. The database architect has decided to use AWS Database Migration Service (AWS DMS) to replicate the database across Regions. The database architect will use full load mode and then will switch to change data capture (CDC) mode.

Which parameters must the database architect configure to support CDC mode for the RDS for PostgreSQL database? (Choose three.)

A. Set wal_level = logical.
B. Set wal_level = replica.
C. Set max_replication_slots to 1 or more, depending on the number of DMS tasks.
D. Set max_replication_slots to 0 to support dynamic allocation of slots.
E. Set wal_sender_timeout to 20,000 milliseconds.
F. Set wal_sender_timeout to 5,000 milliseconds.
Suggested answer: A, C, E

Explanation:

Explanation from Amazon documents: To enable CDC mode for an RDS for PostgreSQL source database, the database architect needs to configure the following parameters:

Set wal_level = logical. This parameter determines how much information is written to the write-ahead log (WAL). For CDC, wal_level must be set to logical, which enables logical decoding of the WAL and allows AWS DMS to read changes from the source database. (On Amazon RDS, this is done by setting the rds.logical_replication parameter to 1.)

Set max_replication_slots to 1 or more, depending on the number of DMS tasks. This parameter specifies the maximum number of replication slots that the source database can support. A replication slot is a data structure that records the state of a replication stream; AWS DMS uses replication slots to set up logical replication and track changes in the source database. The value must be equal to or greater than the number of DMS tasks that use CDC against the source database.

Set wal_sender_timeout to 20,000 milliseconds. This parameter specifies how long a WAL sender process (the background process that streams WAL data to AWS DMS) waits for feedback from the receiver before terminating the connection. The value must be high enough to prevent the replication connection from being dropped during CDC.

Therefore, options A, C, and E are the correct parameters. Option B is incorrect because wal_level = replica is not sufficient for logical decoding and CDC. Option D is incorrect because max_replication_slots must be a positive integer, not zero. Option F is incorrect because 5,000 milliseconds is too low and may cause connection timeouts during CDC.
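On RDS for PostgreSQL these settings live in a custom DB parameter group rather than in postgresql.conf, and wal_level is switched to logical through the rds.logical_replication parameter. A sketch with a hypothetical parameter group name; the static parameters take effect after a reboot:

aws rds modify-db-parameter-group \
  --db-parameter-group-name source-pg-params \
  --parameters \
    "ParameterName=rds.logical_replication,ParameterValue=1,ApplyMethod=pending-reboot" \
    "ParameterName=max_replication_slots,ParameterValue=5,ApplyMethod=pending-reboot" \
    "ParameterName=wal_sender_timeout,ParameterValue=20000,ApplyMethod=pending-reboot"

The max_replication_slots value of 5 is illustrative; it only needs to be at least the number of DMS tasks performing CDC against the source.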

A company needs to deploy an Amazon Aurora PostgreSQL DB instance into multiple accounts. The company will initiate each DB instance from an existing Aurora PostgreSQL DB instance that runs in a shared account. The company wants the process to be repeatable in case the company adds additional accounts in the future. The company also wants to be able to verify if manual changes have been made to the DB instance configurations after the company deploys the DB instances.

A database specialist has determined that the company needs to create an AWS CloudFormation template with the necessary configuration to create a DB instance in an account by using a snapshot of the existing DB instance to initialize the DB instance. The company will also use the CloudFormation template's parameters to provide key values for the DB instance creation (account ID, etc.).

Which final step will meet these requirements in the MOST operationally efficient way?

A. Create a bash script to compare the configuration to the current DB instance configuration and to report any changes.
B. Use the CloudFormation drift detection feature to check if the DB instance configurations have changed.
C. Set up CloudFormation to use drift detection to send notifications if the DB instance configurations have been changed.
D. Create an AWS Lambda function to compare the configuration to the current DB instance configuration and to report any changes.
Suggested answer: B
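For reference, drift detection can be run on demand from the AWS CLI after each deployment; a sketch with a hypothetical stack name:

# Start drift detection for the stack (returns a StackDriftDetectionId)
aws cloudformation detect-stack-drift --stack-name employee-db-stack

# After detection completes, list resources whose live configuration
# no longer matches the template
aws cloudformation describe-stack-resource-drifts \
  --stack-name employee-db-stack \
  --stack-resource-drift-status-filters MODIFIED DELETED

Any MODIFIED result for the DB instance resource indicates a manual change to its configuration since deployment.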