Amazon DBS-C01 Practice Test - Questions Answers, Page 9

Question 81

A company is developing a multi-tier web application hosted on AWS using Amazon Aurora as the database. The application needs to be deployed to production and other non-production environments. A Database Specialist needs to specify different MasterUsername and MasterUserPassword properties in the AWS CloudFormation templates used for automated deployment. The CloudFormation templates are version controlled in the company’s code repository.

The company also needs to meet a compliance requirement by routinely rotating its database master password for production.

What is the most secure solution for storing the master password?

A. Store the master password in a parameter file in each environment. Reference the environment-specific parameter file in the CloudFormation template.
B. Encrypt the master password using an AWS KMS key. Store the encrypted master password in the CloudFormation template.
C. Use the secretsmanager dynamic reference to retrieve the master password stored in AWS Secrets Manager and enable automatic rotation.
D. Use the ssm dynamic reference to retrieve the master password stored in the AWS Systems Manager Parameter Store and enable automatic rotation.
Suggested answer: C

Explanation:

"By using the secure string support in CloudFormation with dynamic references you can better maintain your infrastructure as code. You’ll be able to avoid hard coding passwords into your templates and you can keep these runtime configuration parameters separated from your code.

Moreover, when properly used, secure strings will help keep your development and production code as similar as possible, while continuing to make your infrastructure code suitable for continuous deployment pipelines."

https://aws.amazon.com/blogs/mt/using-aws-systems-manager-parameter-store-secure-string-parameters-in-aws-cloudformation-templates/

https://aws.amazon.com/blogs/security/how-to-use-aws-secrets-manager-rotate-credentials-amazon-rds-database-types-oracle/
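As a minimal sketch only (the secret name, stack name, and template snippet are hypothetical, and other required cluster properties are omitted), a version-controlled template can pull the master credentials through a secretsmanager dynamic reference so the password never appears in the repository:

```python
import boto3

# The template stores only the dynamic reference, never the password itself.
# Rotation is then enabled on the secret in AWS Secrets Manager.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AuroraCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      Engine: aurora-mysql
      MasterUsername: '{{resolve:secretsmanager:prod/aurora/master:SecretString:username}}'
      MasterUserPassword: '{{resolve:secretsmanager:prod/aurora/master:SecretString:password}}'
"""

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="aurora-prod",  # hypothetical stack name
    TemplateBody=TEMPLATE,
)
```

Per environment, only the referenced secret name changes (for example prod/aurora/master versus test/aurora/master), so production and non-production templates stay nearly identical.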

Question 82

A company is writing a new survey application to be used with a weekly televised game show. The application will be available for 2 hours each week. The company expects to receive over 500,000 entries every week, with each survey asking 2-3 multiple-choice questions of each user. A Database Specialist needs to select a platform that is highly scalable for a large number of concurrent writes to handle the anticipated volume.

Which AWS services should the Database Specialist consider? (Choose two.)

A. Amazon DynamoDB
B. Amazon Redshift
C. Amazon Neptune
D. Amazon Elasticsearch Service
E. Amazon ElastiCache
Suggested answer: A, E

Explanation:


https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html#Strategies.WriteThrough

https://aws.amazon.com/products/databases/real-time-apps-elasticache-for-redis/
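For the write burst itself, DynamoDB scales concurrent writes across partitions; a minimal boto3 sketch (table name, key schema, and entry fields are hypothetical) for loading survey entries:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SurveyEntries")  # hypothetical table name

def save_entries(entries):
    # batch_writer buffers puts and retries unprocessed items automatically,
    # which suits a short, very high-volume write window.
    with table.batch_writer() as batch:
        for entry in entries:
            batch.put_item(Item={
                "user_id": entry["user_id"],     # partition key (assumed)
                "entry_ts": entry["timestamp"],  # sort key (assumed)
                "answers": entry["answers"],
            })
```

ElastiCache can complement this with a write-through caching strategy, as described in the first link above.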

Question 83

A company has migrated a single MySQL database to Amazon Aurora. The production data is hosted in a DB cluster in VPC_PROD, and 12 testing environments are hosted in VPC_TEST using the same AWS account. Testing results in minimal changes to the test data.

The Development team wants each environment refreshed nightly so each test database contains fresh production data every day.

Which migration approach will be the fastest and most cost-effective to implement?

A. Run the master in Amazon Aurora MySQL. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.
B. Run the master in Amazon Aurora MySQL. Take a nightly snapshot, and restore it into 12 databases in VPC_TEST using Aurora Serverless.
C. Run the master in Amazon Aurora MySQL. Create 12 Aurora Replicas in VPC_TEST, and script the replicas to be deleted and re-created nightly.
D. Run the master in Amazon Aurora MySQL using Aurora Serverless. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.
Suggested answer: A

Explanation:
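Aurora cloning uses a copy-on-write protocol: a clone is created quickly and initially shares storage with the source cluster, so deleting and re-creating 12 clones nightly is both fast and inexpensive. A minimal boto3 sketch for one clone (all identifiers are hypothetical):

```python
import boto3

rds = boto3.client("rds")

# Clone the production cluster into the test VPC using copy-on-write storage.
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="test-clone-01",      # hypothetical clone identifier
    SourceDBClusterIdentifier="prod-aurora",  # hypothetical production cluster
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
    DBSubnetGroupName="vpc-test-subnets",     # DB subnet group in VPC_TEST
)
# A DB instance still needs to be added to the restored cluster before it can
# accept connections; the nightly script would loop over 12 clone identifiers.
```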


Question 84

A large ecommerce company uses Amazon DynamoDB to handle the transactions on its web portal.

Traffic patterns throughout the year are usually stable; however, a large event is planned. The company knows that traffic will increase by up to 10 times the normal load over the 3-day event.

When sale prices are published during the event, traffic will spike rapidly.

How should a Database Specialist ensure DynamoDB can handle the increased traffic?

A. Ensure the table is always provisioned to meet peak needs
B. Allow burst capacity to handle the additional load
C. Set an AWS Application Auto Scaling policy for the table to handle the increase in traffic
D. Preprovision additional capacity for the known peaks and then reduce the capacity after the event
Suggested answer: D

Explanation:


https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-design.html#bp-partition-key-throughput-bursting

"DynamoDB provides some flexibility in your per-partition throughput provisioning by providing burst capacity. Whenever you're not fully using a partition's throughput, DynamoDB reserves a portion of that unused capacity for later bursts of throughput to handle usage spikes. DynamoDB currently retains up to 5 minutes (300 seconds) of unused read and write capacity. During an occasional burst of read or write activity, these extra capacity units can be consumed quickly— evenfaster than the per-second provisioned throughput capacity that you've defined for your table.

DynamoDB can also consume burst capacity for background maintenance and other tasks without prior notice. Note that these burst capacity details might change in the future."
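Because the event window and the roughly 10x multiplier are known in advance, capacity can be raised before the event and lowered afterwards; auto scaling reacts only after consumption rises, so it can lag the rapid spike when sale prices are published. A minimal boto3 sketch (table name and capacity numbers are illustrative only):

```python
import boto3

dynamodb = boto3.client("dynamodb")

def set_capacity(read_units, write_units):
    # Adjust the table's provisioned throughput.
    dynamodb.update_table(
        TableName="Transactions",  # hypothetical table name
        ProvisionedThroughput={
            "ReadCapacityUnits": read_units,
            "WriteCapacityUnits": write_units,
        },
    )

# Run before the 3-day event (illustrative numbers, ~10x normal load):
set_capacity(read_units=20000, write_units=10000)

# Run after the event to return to normal provisioning:
set_capacity(read_units=2000, write_units=1000)
```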

Question 85

A Database Specialist is migrating an on-premises Microsoft SQL Server application database to Amazon RDS for PostgreSQL using AWS DMS. The application requires minimal downtime when the RDS DB instance goes live.

What change should the Database Specialist make to enable the migration?

A. Configure the on-premises application database to act as a source for an AWS DMS full load with ongoing change data capture (CDC)
B. Configure the AWS DMS replication instance to allow both full load and ongoing change data capture (CDC)
C. Configure the AWS DMS task to generate full logs to allow for ongoing change data capture (CDC)
D. Configure the AWS DMS connections to allow two-way communication to allow for ongoing change data capture (CDC)
Suggested answer: A

Explanation:


"requires minimal downtime when the RDS DB instance goes live" in order to do CDC: "you must first ensure that ARCHIVELOG MODE is on to provide information to LogMiner. AWS DMS uses LogMiner to read information from the archive logs so that AWS DMS can capture changes"

https://docs.aws.amazon.com/dms/latest/sbs/chap-oracle2postgresql.steps.configureoracle.html

"If you want to capture and apply changes (CDC), then you also need the following privileges."

Question 86

A financial company has allocated an Amazon RDS for MariaDB DB instance with large storage capacity to accommodate migration efforts. Post-migration, the company purged unwanted data from the instance. The company now wants to downsize storage to save money. The solution must have the least impact on production and near-zero downtime.

Which solution would meet these requirements?

A. Create a snapshot of the old databases and restore the snapshot with the required storage
B. Create a new RDS DB instance with the required storage and move the databases from the old instances to the new instance using AWS DMS
C. Create a new database using native backup and restore
D. Create a new read replica and make it the primary by terminating the existing primary
Suggested answer: B

Explanation:


https://aws.amazon.com/premiumsupport/knowledge-center/rds-db-storage-size/

Use AWS Database Migration Service (AWS DMS) for minimal downtime.
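Allocated storage on an RDS instance cannot be shrunk in place, so the pattern is to create a right-sized instance, replicate into it with DMS (full load plus CDC, as sketched above), and then cut the application over. A minimal boto3 sketch of the new, smaller instance (identifiers and sizes are illustrative):

```python
import os
import boto3

rds = boto3.client("rds")

# Create the right-sized target; AWS DMS then copies the purged data set into it
# with minimal downtime before the application is repointed.
rds.create_db_instance(
    DBInstanceIdentifier="finance-mariadb-small",  # hypothetical identifier
    Engine="mariadb",
    DBInstanceClass="db.r5.large",                 # illustrative instance class
    AllocatedStorage=200,                          # right-sized storage, in GiB
    MasterUsername="admin",
    MasterUserPassword=os.environ["MASTER_PASSWORD"],
)
```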

Question 87

A large financial services company requires that all data be encrypted in transit. A Developer is attempting to connect to an Amazon RDS DB instance using the company VPC for the first time with credentials provided by a Database Specialist. Other members of the Development team can connect, but this user is consistently receiving an error indicating a communications link failure. The Developer asked the Database Specialist to reset the password a number of times, but the error persists.

Which step should be taken to troubleshoot this issue?

A. Ensure that the database option group for the RDS DB instance allows ingress from the Developer machine’s IP address
B. Ensure that the RDS DB instance’s subnet group includes a public subnet to allow the Developer to connect
C. Ensure that the RDS DB instance has not reached its maximum connections limit
D. Ensure that the connection is using SSL and is addressing the port where the RDS DB instance is listening for encrypted connections
Suggested answer: D

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Concepts.General.SSL.Using.html
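Purely for illustration, assuming a MySQL-compatible engine and the PyMySQL driver (the endpoint, credentials, and certificate path are placeholders), an encrypted connection that targets the instance's listener port looks like this; a client that does not negotiate TLS against an instance that requires encryption in transit typically surfaces only a generic communications link failure:

```python
import os
import pymysql

# Connect over TLS using the RDS CA bundle downloaded from AWS.
connection = pymysql.connect(
    host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    port=3306,                                         # the port the instance listens on
    user="developer",
    password=os.environ["DB_PASSWORD"],
    ssl={"ca": "/opt/certs/global-bundle.pem"},        # RDS certificate bundle
)
```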

Question 88

A database specialist is responsible for an Amazon RDS for MySQL DB instance with one read replica. The DB instance and the read replica are assigned to the default parameter group. The database team currently runs test queries against the read replica. The database team wants to create additional tables in the read replica that will be accessible only from the read replica, to support the tests.

What should the database specialist do to allow the database team to create the test tables?

A. Contact AWS Support to disable read-only mode on the read replica. Reboot the read replica. Connect to the read replica and create the tables.
B. Change the read_only parameter to false (read_only=0) in the default parameter group of the read replica. Perform a reboot without failover. Connect to the read replica and create the tables using the local_only MySQL option.
C. Change the read_only parameter to false (read_only=0) in the default parameter group. Reboot the read replica. Connect to the read replica and create the tables.
D. Create a new DB parameter group. Change the read_only parameter to false (read_only=0). Associate the read replica with the new group. Reboot the read replica. Connect to the read replica and create the tables.
Suggested answer: D

Explanation:


https://aws.amazon.com/premiumsupport/knowledge-center/rds-read-replica/
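Default parameter groups cannot be modified, which is why a custom group is required. A minimal boto3 sketch (the group name, parameter group family, and replica identifier are hypothetical):

```python
import boto3

rds = boto3.client("rds")

# Default parameter groups cannot be edited, so create a custom group first.
rds.create_db_parameter_group(
    DBParameterGroupName="replica-writable",
    DBParameterGroupFamily="mysql8.0",  # must match the engine version in use
    Description="Read replica with read_only disabled for test tables",
)

rds.modify_db_parameter_group(
    DBParameterGroupName="replica-writable",
    Parameters=[{
        "ParameterName": "read_only",
        "ParameterValue": "0",
        "ApplyMethod": "pending-reboot",
    }],
)

# Attach the group to the read replica, then reboot so the change takes effect.
rds.modify_db_instance(
    DBInstanceIdentifier="mysql-replica-1",  # hypothetical replica identifier
    DBParameterGroupName="replica-writable",
    ApplyImmediately=True,
)
rds.reboot_db_instance(DBInstanceIdentifier="mysql-replica-1")
```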

Question 89

A company has a heterogeneous six-node production Amazon Aurora DB cluster that handles online transaction processing (OLTP) for the core business and OLAP reports for the human resources department. To match compute resources to the use case, the company has decided to direct the reporting workload for the human resources department to two small nodes in the Aurora DB cluster, while every other workload goes to four large nodes in the same DB cluster.

Which option would ensure that the correct nodes are always available for the appropriate workload while meeting these requirements?

A. Use the writer endpoint for OLTP and the reader endpoint for the OLAP reporting workload.
B. Use automatic scaling for the Aurora Replica to have the appropriate number of replicas for the desired workload.
C. Create additional readers to cater to the different scenarios.
D. Use custom endpoints to satisfy the different workloads.
Suggested answer: D

Explanation:


https://aws.amazon.com/about-aws/whats-new/2018/11/amazon-aurora-simplifies-workload-management-with-custom-endpoints/

You can now create custom endpoints for Amazon Aurora databases. This allows you to distribute and load balance workloads across different sets of database instances in your Aurora cluster. For example, you may provision a set of Aurora Replicas to use an instance type with higher memory capacity in order to run an analytics workload. A custom endpoint can then help you route the analytics workload to these appropriately-configured instances, while keeping other instances in your cluster isolated from this workload. As you add or remove instances from the custom endpoint to match your workload, the endpoint helps spread the load around.
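A minimal boto3 sketch of such a custom endpoint (cluster, endpoint, and instance identifiers are hypothetical); the HR reporting tools connect to this endpoint, while OLTP continues to use the cluster's writer endpoint:

```python
import boto3

rds = boto3.client("rds")

# A custom reader endpoint pinned to the two small reporting instances keeps
# the OLAP workload off the four large OLTP nodes.
rds.create_db_cluster_endpoint(
    DBClusterIdentifier="core-aurora-cluster",            # hypothetical cluster
    DBClusterEndpointIdentifier="hr-reporting",
    EndpointType="READER",
    StaticMembers=["aurora-small-1", "aurora-small-2"],   # the two small instances
)
```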

Question 90

Developers have requested a new Amazon Redshift cluster so they can load new third-party marketing data. The new cluster is ready, and the user credentials have been given to the developers. The developers indicate that their COPY jobs fail with the following error message:

“Amazon Invalid operation: S3ServiceException:Access Denied,Status 403,Error AccessDenied.” The developers need to load this data soon, so a database specialist must act quickly to solve this issue.

What is the MOST secure solution?

A. Create a new IAM role with the same user name as the Amazon Redshift developer user ID. Provide the IAM role with read-only access to Amazon S3 with the assume role action.
B. Create a new IAM role with read-only access to the Amazon S3 bucket and include the assume role action. Modify the Amazon Redshift cluster to add the IAM role.
C. Create a new IAM role with read-only access to the Amazon S3 bucket with the assume role action. Add this role to the developer IAM user ID used for the copy job that ended with an error message.
D. Create a new IAM user with access keys and a new role with read-only access to the Amazon S3 bucket. Add this role to the Amazon Redshift cluster. Change the copy job to use the access keys created.
Suggested answer: B

Explanation:


https://docs.aws.amazon.com/redshift/latest/gsg/rs-gsg-create-an-iam-role.html

"Now that you have created the new role, your next step is to attach it to your cluster. You can attach the role when you launch a new cluster or you can attach it to an existing cluster. In the next step, you attach the role to a new cluster."

https://docs.aws.amazon.com/redshift/latest/dg/copy-usage_notes-access-permissions.html
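A minimal boto3 sketch of attaching the role to the existing cluster (the cluster identifier, role ARN, bucket, and table are hypothetical); the COPY command then names the cluster-attached role instead of any access keys:

```python
import boto3

redshift = boto3.client("redshift")

# Attach the S3 read-only role to the running cluster; no access keys are
# created or handed to the developers.
redshift.modify_cluster_iam_roles(
    ClusterIdentifier="marketing-cluster",                               # hypothetical
    AddIamRoles=["arn:aws:iam::123456789012:role/RedshiftS3ReadOnly"],   # hypothetical
)

# The developers' COPY job then references the attached role.
COPY_SQL = """
COPY marketing.events
FROM 's3://third-party-marketing-data/exports/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftS3ReadOnly'
FORMAT AS CSV;
"""
```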
