
Amazon DBS-C01 Practice Test - Questions Answers, Page 5


Question 41


A company maintains several databases using Amazon RDS for MySQL and PostgreSQL. Each RDS database generates log files with retention periods set to their default values. The company has now mandated that database logs be maintained for up to 90 days in a centralized repository to facilitate real-time and after-the-fact analyses.

What should a Database Specialist do to meet these requirements with minimal effort?

A. Create an AWS Lambda function to pull logs from the RDS databases and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
B. Modify the RDS databases to publish logs to Amazon CloudWatch Logs. Change the log retention policy for each log group to expire the events after 90 days.
C. Write a stored procedure in each RDS database to download the logs and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
D. Create an AWS Lambda function to download the logs from the RDS databases and publish the logs to Amazon CloudWatch Logs. Change the log retention policy for the log group to expire the events after 90 days.
Suggested answer: B

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.html

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Procedural.UploadtoCloudWatch.html

https://aws.amazon.com/premiumsupport/knowledge-center/rds-aurora-mysql-logs-cloudwatch/

https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutRetentionPolicy.html
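
As a reference, a minimal boto3 sketch of option B follows. The instance identifier and log group name are illustrative assumptions; RDS publishes each exported log type to a log group named /aws/rds/instance/<instance-id>/<log-type>.

```python
import boto3

rds = boto3.client("rds")
logs = boto3.client("logs")

# Publish the MySQL error, general, and slow query logs to CloudWatch Logs
# (hypothetical instance identifier; a PostgreSQL instance would export the
# "postgresql" and "upgrade" log types instead).
rds.modify_db_instance(
    DBInstanceIdentifier="example-mysql-db",
    CloudwatchLogsExportConfiguration={
        "EnableLogTypes": ["error", "general", "slowquery"]
    },
    ApplyImmediately=True,
)

# Expire events after 90 days in the log group RDS creates for the error log;
# repeat for each exported log type.
logs.put_retention_policy(
    logGroupName="/aws/rds/instance/example-mysql-db/error",
    retentionInDays=90,
)
```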


Question 42


A Database Specialist is setting up a new Amazon Aurora DB cluster with one primary instance and three Aurora Replicas for a highly intensive, business-critical application. The Aurora DB cluster has one medium-sized primary instance, one large-sized replica, and two medium-sized replicas. The Database Specialist did not assign a promotion tier to the replicas.

In the event of a primary failure, what will occur?

A. Aurora will promote an Aurora Replica that is of the same size as the primary instance
B. Aurora will promote an arbitrary Aurora Replica
C. Aurora will promote the largest-sized Aurora Replica
D. Aurora will not promote an Aurora Replica
Suggested answer: C

Explanation:


Priority: If you don't select a value, the default is tier-1. This priority determines the order in which Aurora Replicas are promoted when recovering from a primary instance failure.

https://docs.amazonaws.cn/en_us/AmazonRDS/latest/AuroraUserGuide/aurora-replicas-adding.html

More than one Aurora Replica can share the same priority, resulting in promotion tiers. If two or more Aurora Replicas share the same priority, then Amazon RDS promotes the replica that is largest in size. If two or more Aurora Replicas share the same priority and size, then Amazon RDS promotes an arbitrary replica in the same promotion tier.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html#Aurora.Managing.FaultTolerance

Since no promotion tier was assigned, all three replicas default to tier-1, so Aurora promotes the largest replica.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.AuroraHighAvailability.html
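
If deterministic failover ordering is wanted instead, an explicit promotion tier can be assigned. A minimal boto3 sketch, with a hypothetical replica identifier (lower tier numbers are promoted first):

```python
import boto3

rds = boto3.client("rds")

# Give one replica the highest failover priority (tier 0); replicas left at
# the default remain in tier 1. The identifier is hypothetical.
rds.modify_db_instance(
    DBInstanceIdentifier="aurora-replica-large",
    PromotionTier=0,
    ApplyImmediately=True,
)
```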


Question 43


A company is running its line-of-business application on AWS, which uses Amazon RDS for MySQL as the persistent data store. The company wants to minimize downtime when it migrates the database to Amazon Aurora.

Which migration method should a Database Specialist use?

A. Take a snapshot of the RDS for MySQL DB instance and create a new Aurora DB cluster with the option to migrate snapshots.
B. Make a backup of the RDS for MySQL DB instance using the mysqldump utility, create a new Aurora DB cluster, and restore the backup.
C. Create an Aurora Replica from the RDS for MySQL DB instance and promote the Aurora DB cluster.
D. Create a clone of the RDS for MySQL DB instance and promote the Aurora DB cluster.
Suggested answer: C

Explanation:


https://aws.amazon.com/blogs/database/best-practices-for-migrating-rds-for-mysql-databases-to-amazon-aurora/

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Migrating.html#AuroraPostgreSQL.Migrating.RDSPostgreSQL.Replica
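
A boto3 sketch of option C, with hypothetical identifiers and ARN: the Aurora cluster is created as a read replica of the RDS for MySQL instance and is promoted once replica lag reaches zero.

```python
import boto3

rds = boto3.client("rds")

# Create an Aurora MySQL cluster that replicates from the RDS for MySQL
# source instance (hypothetical identifiers and ARN).
rds.create_db_cluster(
    DBClusterIdentifier="aurora-migration-cluster",
    Engine="aurora-mysql",
    ReplicationSourceIdentifier="arn:aws:rds:us-west-2:123456789012:db:prod-mysql",
)

# The replica cluster needs at least one instance to serve traffic.
rds.create_db_instance(
    DBInstanceIdentifier="aurora-migration-instance-1",
    DBClusterIdentifier="aurora-migration-cluster",
    Engine="aurora-mysql",
    DBInstanceClass="db.r5.large",
)

# After replica lag reaches zero, detach the cluster from the source and
# repoint the application. This is the cutover step.
rds.promote_read_replica_db_cluster(DBClusterIdentifier="aurora-migration-cluster")
```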


Question 44


The Security team for a finance company was notified of an internal security breach that happened 3 weeks ago. A Database Specialist must start producing audit logs out of the production Amazon Aurora PostgreSQL cluster for the Security team to use for monitoring and alerting. The Security team is required to perform real-time alerting and monitoring outside the Aurora DB cluster and wants to have the cluster push encrypted files to the chosen solution.

Which approach will meet these requirements?

A. Use pg_audit to generate audit logs and send the logs to the Security team.
B. Use AWS CloudTrail to audit the DB cluster and the Security team will get data from Amazon S3.
C. Set up database activity streams and connect the data stream from Amazon Kinesis to consumer applications.
D. Turn on verbose logging and set up a schedule for the logs to be dumped out for the Security team.
Suggested answer: C

Explanation:


https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-aurora-with-postgresql-compatibility-supports-database-activity-streams/

"Database Activity Streams for Amazon Aurora with PostgreSQL compatibility provides a near real-time data stream of the database activity in your relational database to help you monitor activity. When integrated with third party database activity monitoring tools, Database Activity Streams can monitor and audit database activity to provide safeguards for your database and help meet compliance and regulatory requirements."

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Overview.LoggingAndMonitoring.html
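
A boto3 sketch of starting an activity stream on the cluster (option C); the cluster ARN and KMS key alias are hypothetical. RDS pushes KMS-encrypted activity events to a Kinesis data stream that consumer applications then read.

```python
import boto3

rds = boto3.client("rds")

# Start a database activity stream on the Aurora PostgreSQL cluster.
# Events are encrypted with the given KMS key and written to a Kinesis
# stream whose name is returned in the response.
response = rds.start_activity_stream(
    ResourceArn="arn:aws:rds:us-east-1:123456789012:cluster:prod-aurora-pg",
    Mode="async",  # "sync" favors guaranteed capture over performance
    KmsKeyId="alias/das-key",
    ApplyImmediately=True,
)
print(response["KinesisStreamName"])
```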


Question 45


A company is using Amazon RDS for MySQL to redesign its business application. A Database Specialist has noticed that the Development team is restoring their MySQL database multiple times a day when Developers make mistakes in their schema updates. The Developers sometimes need to wait hours for the restores to complete.

Multiple team members are working on the project, making it difficult to find the correct restore point for each mistake.

Which approach should the Database Specialist take to reduce downtime?

A. Deploy multiple read replicas and have the team members make changes to separate replica instances
B. Migrate to Amazon RDS for SQL Server, take a snapshot, and restore from the snapshot
C. Migrate to Amazon Aurora MySQL and enable the Aurora Backtrack feature
D. Enable the Amazon RDS for MySQL Backtrack feature
Suggested answer: C

Explanation:


"Amazon Aurora, a fully-managed relational database service in AWS, is now offering a backtrack feature. With Amazon Aurora with MySQL compatibility, users can backtrack, or "rewind", a database cluster to a specific point in time, without restoring data from a backup. The backtrack process allows a point in time to be specified with one second resolution, and the rewind process typically takes minutes. This new feature facilitates developers in undoing mistakes like deleting data inappropriately or dropping the wrong table."


Question 46


A media company is using Amazon RDS for PostgreSQL to store user data. The RDS DB instance currently has a publicly accessible setting enabled and is hosted in a public subnet. Following a recent AWS Well-Architected Framework review, a Database Specialist was given new security requirements.

Only certain on-premises corporate network IPs should connect to the DB instance. Connectivity is allowed from the corporate network only.

Which combination of steps does the Database Specialist need to take to meet these new requirements? (Choose three.)

A. Modify the pg_hba.conf file. Add the required corporate network IPs and remove the unwanted IPs.
B. Modify the associated security group. Add the required corporate network IPs and remove the unwanted IPs.
C. Move the DB instance to a private subnet using AWS DMS.
D. Enable VPC peering between the application host running on the corporate network and the VPC associated with the DB instance.
E. Disable the publicly accessible setting.
F. Connect to the DB instance using private IPs and a VPN.
Suggested answer: B, E, F

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html#USER_VPC.Hiding
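
A boto3 sketch of steps B and E; the security group ID, corporate CIDR, and instance identifier are assumptions. Step F (private IPs over a VPN) is network configuration rather than an API call.

```python
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# Step B: allow PostgreSQL traffic only from the corporate network range
# (hypothetical group ID and CIDR). Unwanted rules would be removed with
# revoke_security_group_ingress.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": "203.0.113.0/24",
                      "Description": "corporate network"}],
    }],
)

# Step E: remove the public endpoint so the instance resolves only to a
# private IP inside the VPC.
rds.modify_db_instance(
    DBInstanceIdentifier="media-postgres-db",
    PubliclyAccessible=False,
    ApplyImmediately=True,
)
```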


Question 47


A company is about to launch a new product, and test databases must be re-created from production data. The company runs its production databases on an Amazon Aurora MySQL DB cluster. A Database Specialist needs to deploy a solution to create these test databases as quickly as possible with the least amount of administrative effort.

What should the Database Specialist do to meet these requirements?

A. Restore a snapshot from the production cluster into test clusters
B. Create logical dumps of the production cluster and restore them into new test clusters
C. Use database cloning to create clones of the production cluster
D. Add an additional read replica to the production cluster and use that node for testing
Suggested answer: C

Explanation:


https://aws.amazon.com/getting-started/hands-on/aurora-cloning-backtracking/

"Cloning an Aurora cluster is extremely useful if you want to assess the impact of changes to your database, or if you need to perform workload-intensive operations—such as exporting data or running analytical queries, or simply if you want to use a copy of your production database in a development or testing environment. You can make multiple clones of your Aurora DB cluster. You can even create additional clones from other clones, with the constraint that the clone databases must be created in the same region as the source databases.


Question 48


A company with branch offices in Portland, New York, and Singapore has a three-tier web application that leverages a shared database. The database runs on Amazon RDS for MySQL and is hosted in the us-west-2 Region. The application has a distributed front end deployed in the us-west-2, ap-southeast-1, and us-east-2 Regions. This front end is used as a dashboard for Sales Managers in each branch office to see current sales statistics. There are complaints that the dashboard performs more slowly in the Singapore location than it does in Portland or New York. A solution is needed to provide consistent performance for all users in each location.

Which set of actions will meet these requirements?

A. Take a snapshot of the instance in the us-west-2 Region. Create a new instance from the snapshot in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
B. Create an RDS read replica in the ap-southeast-1 Region from the primary RDS DB instance in the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
C. Create a new RDS instance in the ap-southeast-1 Region. Use AWS DMS and change data capture (CDC) to update the new instance in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
D. Create an RDS read replica in the us-west-2 Region where the primary instance resides. Create a read replica in the ap-southeast-1 Region from the read replica located in the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
Suggested answer: B

Explanation:


https://aws.amazon.com/rds/features/read-replicas/

"Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. "

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.XRgn.html
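
A boto3 sketch of option B; the replica is created by calling RDS in the destination Region, with the source referenced by its full ARN (account ID and identifiers are hypothetical).

```python
import boto3

# Cross-Region replicas are created through the destination Region's API.
rds = boto3.client("rds", region_name="ap-southeast-1")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="sales-mysql-replica-sg",
    SourceDBInstanceIdentifier="arn:aws:rds:us-west-2:123456789012:db:sales-mysql-primary",
    SourceRegion="us-west-2",  # lets boto3 generate the presigned URL
    DBInstanceClass="db.r5.large",
)
```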


Question 49


A company wants to migrate its existing on-premises Oracle database to Amazon Aurora PostgreSQL.

The migration must be completed with minimal downtime using AWS DMS. A Database Specialist must validate that the data was migrated accurately from the source to the target before the cutover.

The migration must have minimal impact on the performance of the source database.

Which approach will MOST effectively meet these requirements?

A. Use the AWS Schema Conversion Tool (AWS SCT) to convert source Oracle database schemas to the target Aurora DB cluster. Verify the datatype of the columns.
B. Use the table metrics of the AWS DMS task created for migrating the data to verify the statistics for the tables being migrated and to verify that the data definition language (DDL) statements are completed.
C. Enable the AWS Schema Conversion Tool (AWS SCT) premigration validation and review the premigration checklist to make sure there are no issues with the conversion.
D. Enable AWS DMS data validation on the task so the AWS DMS task compares the source and target records, and reports any mismatches.
Suggested answer: D

Explanation:


"To ensure that your data was migrated accurately from the source to the target, we highly recommend that you use data validation."

https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html

Reference: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Validating.html
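
Validation is enabled through the replication task settings JSON; a boto3 sketch with a hypothetical task ARN, which also polls the per-table validation state afterward (the task must be stopped while its settings are modified):

```python
import json
import boto3

dms = boto3.client("dms")

TASK_ARN = "arn:aws:dms:us-east-1:123456789012:task:EXAMPLE"  # hypothetical

# Enable row-by-row comparison of source and target records.
dms.modify_replication_task(
    ReplicationTaskArn=TASK_ARN,
    ReplicationTaskSettings=json.dumps(
        {"ValidationSettings": {"EnableValidation": True}}
    ),
)

# Each table reports a ValidationState such as "Validated" or
# "Mismatched records".
stats = dms.describe_table_statistics(ReplicationTaskArn=TASK_ARN)
for table in stats["TableStatistics"]:
    print(table["TableName"], table.get("ValidationState"))
```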


Question 50


A company is planning to close for several days. A Database Specialist needs to stop all applications along with the DB instances to ensure employees do not have access to the systems during this time.

All databases are running on Amazon RDS for MySQL.

The Database Specialist wrote and executed a script to stop all the DB instances. When reviewing the logs, the Database Specialist found that Amazon RDS DB instances with read replicas did not stop.

How should the Database Specialist edit the script to fix this issue?

A. Stop the source instances before stopping their read replicas
B. Delete each read replica before stopping its corresponding source instance
C. Stop the read replicas before stopping their source instances
D. Use the AWS CLI to stop each read replica and source instance at the same time
Suggested answer: B

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_StopInstance.html

"The following are some limitations to stopping and starting a DB instance: You can't stop a DB instance that has a read replica, or that is a read replica." So if you cant stop a db with a read replica,you have to delete the read replica first to then stop it???

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_MySQL.Replication.ReadReplicas.html#USER_MySQL.Replication.ReadReplicas.StartStop
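
A boto3 sketch of the corrected script logic (option B), assuming every instance in the account should be shut down: read replicas are deleted first, then each source instance is stopped.

```python
import boto3

rds = boto3.client("rds")

instances = rds.describe_db_instances()["DBInstances"]

# A read replica reports its source in ReadReplicaSourceDBInstanceIdentifier.
replicas = [i for i in instances
            if i.get("ReadReplicaSourceDBInstanceIdentifier")]
sources = [i for i in instances
           if not i.get("ReadReplicaSourceDBInstanceIdentifier")]

# Delete the read replicas first; an instance that has replicas (or is one)
# cannot be stopped.
for replica in replicas:
    rds.delete_db_instance(
        DBInstanceIdentifier=replica["DBInstanceIdentifier"],
        SkipFinalSnapshot=True,
    )

# Wait for the deletions to finish, then stop the remaining instances.
waiter = rds.get_waiter("db_instance_deleted")
for replica in replicas:
    waiter.wait(DBInstanceIdentifier=replica["DBInstanceIdentifier"])

for source in sources:
    rds.stop_db_instance(DBInstanceIdentifier=source["DBInstanceIdentifier"])
```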
