Amazon DBS-C01 Practice Test - Questions Answers, Page 9

A company is developing a multi-tier web application hosted on AWS using Amazon Aurora as the database. The application needs to be deployed to production and other non-production environments. A Database Specialist needs to specify different MasterUsername and MasterUserPassword properties in the AWS CloudFormation templates used for automated deployment. The CloudFormation templates are version controlled in the company’s code repository.

The company also needs to meet a compliance requirement by routinely rotating its database master password for production.

What is the most secure solution for storing the master password?

A. Store the master password in a parameter file in each environment. Reference the environment-specific parameter file in the CloudFormation template.

B. Encrypt the master password using an AWS KMS key. Store the encrypted master password in the CloudFormation template.

C. Use the secretsmanager dynamic reference to retrieve the master password stored in AWS Secrets Manager and enable automatic rotation.

D. Use the ssm dynamic reference to retrieve the master password stored in the AWS Systems Manager Parameter Store and enable automatic rotation.

Suggested answer: C

Explanation:

"By using the secure string support in CloudFormation with dynamic references you can better maintain your infrastructure as code. You’ll be able to avoid hard coding passwords into your templates and you can keep these runtime configuration parameters separated from your code.

Moreover, when properly used, secure strings will help keep your development and production code as similar as possible, while continuing to make your infrastructure code suitable for continuous deployment pipelines." https://aws.amazon.com/blogs/mt/using-aws-systems-manager-parameterstore-secure-string-parameters-in-aws-cloudformation-templates/

https://aws.amazon.com/blogs/security/how-to-use-aws-secrets-manager-rotate-credentials-amazon-rds-database-types-oracle/
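
A minimal sketch of how option C could be wired up with boto3 (the secret name, rotation Lambda ARN, stack name, and the pared-down Aurora template are all hypothetical); the point is that the version-controlled template carries only `{{resolve:secretsmanager:...}}` dynamic references, never the password itself, and rotation is handled by Secrets Manager:

```python
import json
import boto3

secrets = boto3.client("secretsmanager")
cfn = boto3.client("cloudformation")

SECRET_NAME = "prod/aurora/master"   # hypothetical secret name

# Generate and store the master credentials outside of version control.
password = secrets.get_random_password(
    PasswordLength=32, ExcludeCharacters='/@"\\'
)["RandomPassword"]
secrets.create_secret(
    Name=SECRET_NAME,
    SecretString=json.dumps({"username": "admin", "password": password}),
)

# Enable automatic rotation (the rotation Lambda ARN is a placeholder).
secrets.rotate_secret(
    SecretId=SECRET_NAME,
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-aurora",
    RotationRules={"AutomaticallyAfterDays": 30},
)

# The template carries only dynamic references, never the real values.
template = {
    "Resources": {
        "AuroraCluster": {
            "Type": "AWS::RDS::DBCluster",
            "Properties": {
                "Engine": "aurora-mysql",
                "MasterUsername": f"{{{{resolve:secretsmanager:{SECRET_NAME}:SecretString:username}}}}",
                "MasterUserPassword": f"{{{{resolve:secretsmanager:{SECRET_NAME}:SecretString:password}}}}",
            },
        }
    }
}
cfn.create_stack(StackName="prod-aurora", TemplateBody=json.dumps(template))
```

A real Aurora cluster resource needs additional properties (subnet group, security groups, and so on); this sketch only illustrates where the dynamic references go.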

A company is writing a new survey application to be used with a weekly televised game show. The application will be available for 2 hours each week. The company expects to receive over 500,000 entries every week, with each survey asking 2-3 multiple-choice questions of each user. A Database Specialist needs to select a platform that is highly scalable for a large number of concurrent writes to handle the anticipated volume.

Which AWS services should the Database Specialist consider? (Choose two.)

A. Amazon DynamoDB

B. Amazon Redshift

C. Amazon Neptune

D. Amazon Elasticsearch Service

E. Amazon ElastiCache

Suggested answer: A, E

Explanation:


https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html#Strategies.WriteThrough

https://aws.amazon.com/products/databases/real-time-apps-elasticache-for-redis/
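
As a rough illustration of why DynamoDB fits this burst of concurrent writes, here is a hedged sketch (the table name, key schema, and attribute names are all made up): entries are keyed on the user, so writes spread across partitions, and the batch writer handles buffering and retries client-side.

```python
import boto3

dynamodb = boto3.resource("dynamodb")

# Hypothetical table keyed on user_id (partition key) and question_id (sort key).
table = dynamodb.Table("SurveyEntries")

entries = [
    {"user_id": "user-001", "question_id": "q1", "answer": "B"},
    {"user_id": "user-001", "question_id": "q2", "answer": "A"},
    {"user_id": "user-002", "question_id": "q1", "answer": "C"},
]

# batch_writer buffers puts and retries unprocessed items automatically,
# which keeps high-volume concurrent writes simple on the client side.
with table.batch_writer() as batch:
    for entry in entries:
        batch.put_item(Item=entry)
```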

A company has migrated a single MySQL database to Amazon Aurora. The production data is hosted in a DB cluster in VPC_PROD, and 12 testing environments are hosted in VPC_TEST using the same AWS account. Testing results in minimal changes to the test data.

The Development team wants each environment refreshed nightly so each test database contains fresh production data every day.

Which migration approach will be the fastest and most cost-effective to implement?

A. Run the master in Amazon Aurora MySQL. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.

B. Run the master in Amazon Aurora MySQL. Take a nightly snapshot, and restore it into 12 databases in VPC_TEST using Aurora Serverless.

C. Run the master in Amazon Aurora MySQL. Create 12 Aurora Replicas in VPC_TEST, and script the replicas to be deleted and re-created nightly.

D. Run the master in Amazon Aurora MySQL using Aurora Serverless. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.

Suggested answer: A

Explanation:
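
Aurora cloning uses a copy-on-write protocol, so a clone comes up quickly and only consumes extra storage as the test data diverges, which is what makes option A both fast and cheap to refresh nightly. A hedged sketch of the nightly recreate step (cluster identifiers, subnet group, and security group are hypothetical); `RestoreType="copy-on-write"` is what makes this a clone rather than a full restore:

```python
import boto3

rds = boto3.client("rds")

PROD_CLUSTER = "aurora-prod"   # hypothetical production cluster identifier

def refresh_clone(clone_id: str) -> None:
    """Delete yesterday's clone and create a fresh copy-on-write clone."""
    try:
        rds.delete_db_cluster(DBClusterIdentifier=clone_id, SkipFinalSnapshot=True)
        rds.get_waiter("db_cluster_deleted").wait(DBClusterIdentifier=clone_id)
    except rds.exceptions.DBClusterNotFoundFault:
        pass  # first run: nothing to delete

    rds.restore_db_cluster_to_point_in_time(
        DBClusterIdentifier=clone_id,
        SourceDBClusterIdentifier=PROD_CLUSTER,
        RestoreType="copy-on-write",            # clone, not a full volume copy
        UseLatestRestorableTime=True,
        DBSubnetGroupName="vpc-test-subnets",   # hypothetical VPC_TEST subnet group
        VpcSecurityGroupIds=["sg-0123456789abcdef0"],
    )

for i in range(1, 13):
    refresh_clone(f"aurora-test-{i:02d}")
```

A production script would also create a DB instance inside each restored cluster (create_db_instance against the clone's cluster identifier) before handing it to the test teams.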


A large ecommerce company uses Amazon DynamoDB to handle the transactions on its web portal.

Traffic patterns throughout the year are usually stable; however, a large event is planned. The company knows that traffic will increase by up to 10 times the normal load over the 3-day event.

When sale prices are published during the event, traffic will spike rapidly.

How should a Database Specialist ensure DynamoDB can handle the increased traffic?

A. Ensure the table is always provisioned to meet peak needs

B. Allow burst capacity to handle the additional load

C. Set an AWS Application Auto Scaling policy for the table to handle the increase in traffic

D. Preprovision additional capacity for the known peaks and then reduce the capacity after the event

Suggested answer: D

Explanation:


https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-design.html#bp-partition-key-throughput-bursting

"DynamoDB provides some flexibility in your per-partition throughput provisioning by providing burst capacity. Whenever you're not fully using a partition's throughput, DynamoDB reserves a portion of that unused capacity for later bursts of throughput to handle usage spikes. DynamoDB currently retains up to 5 minutes (300 seconds) of unused read and write capacity. During an occasional burst of read or write activity, these extra capacity units can be consumed quickly— evenfaster than the per-second provisioned throughput capacity that you've defined for your table.

DynamoDB can also consume burst capacity for background maintenance and other tasks without prior notice. Note that these burst capacity details might change in the future."
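
A minimal sketch of how the pre-provisioning in option D might be scripted (the table name and capacity figures are made up); because burst capacity only covers about 300 seconds of unused throughput, a known 10x peak is better handled by raising provisioned capacity ahead of time and lowering it again afterwards with the same call:

```python
import boto3

dynamodb = boto3.client("dynamodb")

def set_capacity(read_units: int, write_units: int) -> None:
    # For a provisioned-mode table, raising RCU/WCU ahead of a known peak
    # avoids relying on burst capacity or on auto scaling reaction time.
    dynamodb.update_table(
        TableName="WebPortalTransactions",   # hypothetical table name
        ProvisionedThroughput={
            "ReadCapacityUnits": read_units,
            "WriteCapacityUnits": write_units,
        },
    )

set_capacity(read_units=10_000, write_units=10_000)   # before the 3-day event
# ...after the event:
# set_capacity(read_units=1_000, write_units=1_000)
```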

A Database Specialist is migrating an on-premises Microsoft SQL Server application database to Amazon RDS for PostgreSQL using AWS DMS. The application requires minimal downtime when the RDS DB instance goes live.

What change should the Database Specialist make to enable the migration?

A. Configure the on-premises application database to act as a source for an AWS DMS full load with ongoing change data capture (CDC)

B. Configure the AWS DMS replication instance to allow both full load and ongoing change data capture (CDC)

C. Configure the AWS DMS task to generate full logs to allow for ongoing change data capture (CDC)

D. Configure the AWS DMS connections to allow two-way communication to allow for ongoing change data capture (CDC)

Suggested answer: A

Explanation:


"requires minimal downtime when the RDS DB instance goes live" in order to do CDC: "you must first ensure that ARCHIVELOG MODE is on to provide information to LogMiner. AWS DMS uses LogMiner to read information from the archive logs so that AWS DMS can capture changes"

https://docs.aws.amazon.com/dms/latest/sbs/chap-oracle2postgresql.steps.configureoracle.html

"If you want to capture and apply changes (CDC), then you also need the following privileges."

A financial company has allocated an Amazon RDS MariaDB DB instance with large storage capacity to accommodate migration efforts. Post-migration, the company purged unwanted data from the instance. The company now wants to downsize storage to save money. The solution must have the least impact on production and near-zero downtime.

Which solution would meet these requirements?

A. Create a snapshot of the old databases and restore the snapshot with the required storage

B. Create a new RDS DB instance with the required storage and move the databases from the old instance to the new instance using AWS DMS

C. Create a new database using native backup and restore

D. Create a new read replica and make it the primary by terminating the existing primary

Suggested answer: B

Explanation:


https://aws.amazon.com/premiumsupport/knowledge-center/rds-db-storage-size/

"Use AWS Database Migration Service (AWS DMS) for minimal downtime."
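
Allocated storage on an RDS instance can be increased but not decreased in place, which is why option B stands up a right-sized instance and moves the data with DMS. A hedged sketch of the first step (identifier, instance class, and sizes are made up); the DMS task itself would look like the full-load-and-cdc example shown earlier:

```python
import boto3

rds = boto3.client("rds")

# Create the right-sized target; storage can grow later but cannot shrink.
rds.create_db_instance(
    DBInstanceIdentifier="mariadb-prod-small",   # hypothetical identifier
    Engine="mariadb",
    DBInstanceClass="db.r5.large",
    AllocatedStorage=200,                        # down from the oversized original
    MasterUsername="admin",
    MasterUserPassword="CHANGE_ME",              # better: a Secrets Manager secret
    MultiAZ=True,
)
```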

A large financial services company requires that all data be encrypted in transit. A Developer is attempting to connect to an Amazon RDS DB instance using the company VPC for the first time with credentials provided by a Database Specialist. Other members of the Development team can connect, but this user is consistently receiving an error indicating a communications link failure. The Developer asked the Database Specialist to reset the password a number of times, but the error persists.

Which step should be taken to troubleshoot this issue?

A. Ensure that the database option group for the RDS DB instance allows ingress from the Developer machine’s IP address

B. Ensure that the RDS DB instance’s subnet group includes a public subnet to allow the Developer to connect

C. Ensure that the RDS DB instance has not reached its maximum connections limit

D. Ensure that the connection is using SSL and is addressing the port where the RDS DB instance is listening for encrypted connections

Suggested answer: D

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Concepts.General.SSL.Using.html
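
Since the other developers can connect, the likely culprit is a client that is not negotiating TLS or is pointed at the wrong port. A hedged illustration with PyMySQL, assuming a MySQL-compatible engine purely for the sake of the example (endpoint, port, user, and CA bundle path are placeholders); the idea is simply to force an encrypted connection to the port the instance actually listens on:

```python
import pymysql

# Force an encrypted connection and use the instance's actual listener port.
conn = pymysql.connect(
    host="mydb.abc123xyz.us-east-1.rds.amazonaws.com",   # placeholder endpoint
    port=3306,                                           # port from the DB instance settings
    user="developer",
    password="********",
    ssl={"ca": "/opt/certs/rds-combined-ca-bundle.pem"}, # RDS CA bundle enables TLS
)
```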

A database specialist is responsible for an Amazon RDS for MySQL DB instance with one read replica. The DB instance and the read replica are assigned to the default parameter group. The database team currently runs test queries against the read replica. The database team wants to create additional tables in the read replica that will only be accessible from the read replica to benefit the tests.

What should the database specialist do to allow the database team to create the test tables?

A. Contact AWS Support to disable read-only mode on the read replica. Reboot the read replica. Connect to the read replica and create the tables.

B. Change the read_only parameter to false (read_only=0) in the default parameter group of the read replica. Perform a reboot without failover. Connect to the read replica and create the tables using the local_only MySQL option.

C. Change the read_only parameter to false (read_only=0) in the default parameter group. Reboot the read replica. Connect to the read replica and create the tables.

D. Create a new DB parameter group. Change the read_only parameter to false (read_only=0). Associate the read replica with the new group. Reboot the read replica. Connect to the read replica and create the tables.

Suggested answer: D

Explanation:


https://aws.amazon.com/premiumsupport/knowledge-center/rds-read-replica/
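
A sketch of option D with boto3 (the group name, replica identifier, and parameter group family are assumptions; the family has to match the replica's engine version). Default parameter groups cannot be modified, which is why a custom group is created and attached first:

```python
import boto3

rds = boto3.client("rds")

GROUP = "mysql-replica-writable"   # hypothetical custom parameter group name
REPLICA = "mydb-read-replica"      # hypothetical read replica identifier

# 1. Create a custom parameter group (the default group cannot be edited).
rds.create_db_parameter_group(
    DBParameterGroupName=GROUP,
    DBParameterGroupFamily="mysql8.0",   # must match the replica's engine version
    Description="Read replica with read_only disabled for test tables",
)

# 2. Set read_only=0 so DDL is allowed on the replica.
rds.modify_db_parameter_group(
    DBParameterGroupName=GROUP,
    Parameters=[{
        "ParameterName": "read_only",
        "ParameterValue": "0",
        "ApplyMethod": "pending-reboot",
    }],
)

# 3. Attach the group to the replica and reboot so the change takes effect.
rds.modify_db_instance(DBInstanceIdentifier=REPLICA, DBParameterGroupName=GROUP)
rds.reboot_db_instance(DBInstanceIdentifier=REPLICA)
```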

A company has a heterogeneous six-node production Amazon Aurora DB cluster that handles online transaction processing (OLTP) for the core business and OLAP reports for the human resources department. To match compute resources to the use case, the company has decided to have the reporting workload for the human resources department be directed to two small nodes in the Aurora DB cluster, while every other workload goes to four large nodes in the same DB cluster.

Which option would ensure that the correct nodes are always available for the appropriate workload while meeting these requirements?

A. Use the writer endpoint for OLTP and the reader endpoint for the OLAP reporting workload.

B. Use automatic scaling for the Aurora Replica to have the appropriate number of replicas for the desired workload.

C. Create additional readers to cater to the different scenarios.

D. Use custom endpoints to satisfy the different workloads.

Suggested answer: D

Explanation:


https://aws.amazon.com/about-aws/whats-new/2018/11/amazon-aurora-simplifies-workload-management-with-custom-endpoints/

You can now create custom endpoints for Amazon Aurora databases. This allows you to distribute and load balance workloads across different sets of database instances in your Aurora cluster. For example, you may provision a set of Aurora Replicas to use an instance type with higher memory capacity in order to run an analytics workload. A custom endpoint can then help you route the analytics workload to these appropriately-configured instances, while keeping other instances in your cluster isolated from this workload. As you add or remove instances from the custom endpoint to match your workload, the endpoint helps spread the load around.
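
A minimal sketch of option D (the cluster, endpoint, and instance identifiers are hypothetical): a READER custom endpoint containing only the two small reporting instances, while OLTP continues to use the cluster's regular writer endpoint.

```python
import boto3

rds = boto3.client("rds")

# Route the HR reporting workload to the two small instances only.
rds.create_db_cluster_endpoint(
    DBClusterIdentifier="core-aurora-cluster",           # hypothetical cluster ID
    DBClusterEndpointIdentifier="hr-reporting",
    EndpointType="READER",
    StaticMembers=["aurora-small-1", "aurora-small-2"],  # the two small nodes
)

# The reporting application then connects to a DNS name of the form:
#   hr-reporting.cluster-custom-<cluster-suffix>.us-east-1.rds.amazonaws.com
```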

Developers have requested a new Amazon Redshift cluster so they can load new third-party marketing data. The new cluster is ready, and the user credentials have been given to the developers. The developers indicate that their copy jobs fail with the following error message:

“Amazon Invalid operation: S3ServiceException:Access Denied,Status 403,Error AccessDenied.”

The developers need to load this data soon, so a database specialist must act quickly to solve this issue.

What is the MOST secure solution?

A. Create a new IAM role with the same user name as the Amazon Redshift developer user ID. Provide the IAM role with read-only access to Amazon S3 with the assume role action.

B. Create a new IAM role with read-only access to the Amazon S3 bucket and include the assume role action. Modify the Amazon Redshift cluster to add the IAM role.

C. Create a new IAM role with read-only access to the Amazon S3 bucket with the assume role action. Add this role to the developer IAM user ID used for the copy job that ended with an error message.

D. Create a new IAM user with access keys and a new role with read-only access to the Amazon S3 bucket. Add this role to the Amazon Redshift cluster. Change the copy job to use the access keys created.

Suggested answer: B

Explanation:


https://docs.aws.amazon.com/redshift/latest/gsg/rs-gsg-create-an-iam-role.html

"Now that you have created the new role, your next step is to attach it to your cluster. You can attach the role when you launch a new cluster or you can attach it to an existing cluster. In the next step, you attach the role to a new cluster."

https://docs.aws.amazon.com/redshift/latest/dg/copy-usage_notes-access-permissions.html
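
A hedged sketch of option B (the role ARN, cluster name, bucket, and table are placeholders): attach an S3 read-only role to the existing cluster, then have the COPY command name that role instead of distributing long-lived access keys to developers.

```python
import boto3

redshift = boto3.client("redshift")

ROLE_ARN = "arn:aws:iam::123456789012:role/RedshiftCopyFromS3"   # placeholder

# Attach the role to the existing cluster so COPY can assume it.
redshift.modify_cluster_iam_roles(
    ClusterIdentifier="marketing-cluster",   # placeholder cluster name
    AddIamRoles=[ROLE_ARN],
)

# The developers' COPY job then references the role rather than any credentials:
copy_sql = f"""
COPY marketing.events
FROM 's3://third-party-marketing-data/2023/'
IAM_ROLE '{ROLE_ARN}'
FORMAT AS CSV;
"""
```

Because the role is scoped to read-only access on the one bucket and is assumed by the cluster itself, no user-held keys are created, which is what makes this the most secure of the four options.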
