
Amazon DBS-C01 Practice Test - Questions Answers, Page 5


A company maintains several databases using Amazon RDS for MySQL and PostgreSQL. Each RDS database generates log files with retention periods set to their default values. The company has now mandated that database logs be maintained for up to 90 days in a centralized repository to facilitate real-time and after-the-fact analyses.

What should a Database Specialist do to meet these requirements with minimal effort?

A. Create an AWS Lambda function to pull logs from the RDS databases and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
B. Modify the RDS databases to publish logs to Amazon CloudWatch Logs. Change the log retention policy for each log group to expire the events after 90 days.
C. Write a stored procedure in each RDS database to download the logs and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
D. Create an AWS Lambda function to download the logs from the RDS databases and publish the logs to Amazon CloudWatch Logs. Change the log retention policy for the log group to expire the events after 90 days.
Suggested answer: B

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.html

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Procedural.UploadtoCloudWatch.htm

https://aws.amazon.com/premiumsupport/knowledge-center/rds-aurora-mysql-logs-cloudwatch/

https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutRetentionPolicy.html
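As a rough boto3 sketch of the suggested answer (the instance identifier and log group name are hypothetical; a PostgreSQL instance would export its postgresql log instead of the MySQL log types shown):

```python
import boto3

rds = boto3.client("rds")
logs = boto3.client("logs")

# Publish the MySQL error, general, and slow query logs to CloudWatch Logs.
rds.modify_db_instance(
    DBInstanceIdentifier="mydb-mysql",
    CloudwatchLogsExportConfiguration={
        "EnableLogTypes": ["error", "general", "slowquery"]
    },
    ApplyImmediately=True,
)

# Once RDS creates the log group, cap its retention at 90 days.
logs.put_retention_policy(
    logGroupName="/aws/rds/instance/mydb-mysql/error",
    retentionInDays=90,
)
```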

A Database Specialist is setting up a new Amazon Aurora DB cluster with one primary instance and three Aurora Replicas for a highly intensive, business-critical application. The Aurora DB cluster has one medium-sized primary instance, one large-sized replica, and two medium-sized replicas. The Database Specialist did not assign a promotion tier to the replicas.

In the event of a primary failure, what will occur?

A. Aurora will promote an Aurora Replica that is of the same size as the primary instance
B. Aurora will promote an arbitrary Aurora Replica
C. Aurora will promote the largest-sized Aurora Replica
D. Aurora will not promote an Aurora Replica
Suggested answer: C

Explanation:


Priority: If you don't select a value, the default is tier-1. This priority determines the order in which Aurora Replicas are promoted when recovering from a failure of the primary instance.

https://docs.amazonaws.cn/en_us/AmazonRDS/latest/AuroraUserGuide/aurora-replicas-adding.html

More than one Aurora Replica can share the same priority, resulting in promotion tiers. If two or more Aurora Replicas share the same priority, then Amazon RDS promotes the replica that is largest in size. If two or more Aurora Replicas share the same priority and size, then Amazon RDS promotes an arbitrary replica in the same promotion tier.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html#Aurora.Managing.FaultTolerance

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.AuroraHighAvailability.html
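The failover order can also be controlled explicitly. A minimal boto3 sketch that pins the large replica to the highest promotion priority (the identifier is hypothetical):

```python
import boto3

rds = boto3.client("rds")

# Tier 0 is the highest promotion priority; without an explicit tier,
# Aurora falls back to size, then to an arbitrary choice, as tiebreakers.
rds.modify_db_instance(
    DBInstanceIdentifier="aurora-replica-large",
    PromotionTier=0,
    ApplyImmediately=True,
)
```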

A company is running its line-of-business application on AWS, which uses Amazon RDS for MySQL as the persistent data store. The company wants to minimize downtime when it migrates the database to Amazon Aurora.

Which migration method should a Database Specialist use?

A. Take a snapshot of the RDS for MySQL DB instance and create a new Aurora DB cluster with the option to migrate snapshots.
B. Make a backup of the RDS for MySQL DB instance using the mysqldump utility, create a new Aurora DB cluster, and restore the backup.
C. Create an Aurora Replica from the RDS for MySQL DB instance and promote the Aurora DB cluster.
D. Create a clone of the RDS for MySQL DB instance and promote the Aurora DB cluster.
Suggested answer: C

Explanation:


https://aws.amazon.com/blogs/database/best-practices-for-migrating-rds-for-mysql-databases-to-amazon-aurora/

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Migrating.html#AuroraPostgreSQL.Migrating.RDSPostgreSQL.Replica
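A rough boto3 sketch of the suggested answer, assuming an Aurora read replica cluster can be created this way from the source instance (all identifiers and the ARN are hypothetical):

```python
import boto3

rds = boto3.client("rds")

# Create an Aurora MySQL cluster that replicates from the RDS for MySQL instance.
rds.create_db_cluster(
    DBClusterIdentifier="aurora-migration-cluster",
    Engine="aurora-mysql",
    ReplicationSourceIdentifier="arn:aws:rds:us-west-2:123456789012:db:prod-mysql",
)
rds.create_db_instance(
    DBInstanceIdentifier="aurora-migration-instance",
    DBInstanceClass="db.r5.large",
    Engine="aurora-mysql",
    DBClusterIdentifier="aurora-migration-cluster",
)

# Once replica lag reaches zero, promote the cluster to standalone and
# repoint the application; this is the only brief downtime window.
rds.promote_read_replica_db_cluster(DBClusterIdentifier="aurora-migration-cluster")
```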

The Security team for a finance company was notified of an internal security breach that happened 3 weeks ago. A Database Specialist must start producing audit logs out of the production Amazon Aurora PostgreSQL cluster for the Security team to use for monitoring and alerting. The Security team is required to perform real-time alerting and monitoring outside the Aurora DB cluster and wants to have the cluster push encrypted files to the chosen solution.

Which approach will meet these requirements?

A. Use pg_audit to generate audit logs and send the logs to the Security team.
B. Use AWS CloudTrail to audit the DB cluster and the Security team will get data from Amazon S3.
C. Set up database activity streams and connect the data stream from Amazon Kinesis to consumer applications.
D. Turn on verbose logging and set up a schedule for the logs to be dumped out for the Security team.
Suggested answer: C

Explanation:


https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-aurora-with-postgresql-compatibility-supports-database-activity-streams/

"Database Activity Streams for Amazon Aurora with PostgreSQL compatibility provides a near real-time data stream of the database activity in your relational database to help you monitor activity. When integrated with third party database activity monitoring tools, Database Activity Streams can monitor and audit database activity to provide safeguards for your database and help meet compliance and regulatory requirements."

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Overview.LoggingAndMonitoring.html
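A minimal boto3 sketch of the suggested answer (the cluster ARN and KMS key alias are hypothetical):

```python
import boto3

rds = boto3.client("rds")

# Start an encrypted, near real-time activity stream on the Aurora cluster.
response = rds.start_activity_stream(
    ResourceArn="arn:aws:rds:us-east-1:123456789012:cluster:prod-aurora-pg",
    Mode="async",
    KmsKeyId="alias/aurora-activity-stream-key",
    ApplyImmediately=True,
)

# RDS pushes KMS-encrypted activity records to this Kinesis data stream,
# which the Security team's monitoring tools can consume.
print(response["KinesisStreamName"])
```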

A company is using Amazon RDS for MySQL to redesign its business application. A Database Specialist has noticed that the Development team is restoring their MySQL database multiple times a day when Developers make mistakes in their schema updates. The Developers sometimes need to wait hours for the restores to complete.

Multiple team members are working on the project, making it difficult to find the correct restore point for each mistake.

Which approach should the Database Specialist take to reduce downtime?

A. Deploy multiple read replicas and have the team members make changes to separate replica instances
B. Migrate to Amazon RDS for SQL Server, take a snapshot, and restore from the snapshot
C. Migrate to Amazon Aurora MySQL and enable the Aurora Backtrack feature
D. Enable the Amazon RDS for MySQL Backtrack feature
Suggested answer: C

Explanation:


"Amazon Aurora, a fully-managed relational database service in AWS, is now offering a backtrack feature. With Amazon Aurora with MySQL compatibility, users can backtrack, or "rewind", a database cluster to a specific point in time, without restoring data from a backup. The backtrack process allows a point in time to be specified with one second resolution, and the rewind process typically takes minutes. This new feature facilitates developers in undoing mistakes like deleting data inappropriately or dropping the wrong table."

A media company is using Amazon RDS for PostgreSQL to store user data. The RDS DB instance currently has a publicly accessible setting enabled and is hosted in a public subnet. Following a recent AWS Well-Architected Framework review, a Database Specialist was given new security requirements.

Only certain on-premises corporate network IPs should connect to the DB instance. Connectivity is allowed from the corporate network only.

Which combination of steps does the Database Specialist need to take to meet these new requirements? (Choose three.)

A. Modify the pg_hba.conf file. Add the required corporate network IPs and remove the unwanted IPs.
B. Modify the associated security group. Add the required corporate network IPs and remove the unwanted IPs.
C. Move the DB instance to a private subnet using AWS DMS.
D. Enable VPC peering between the application host running on the corporate network and the VPC associated with the DB instance.
E. Disable the publicly accessible setting.
F. Connect to the DB instance using private IPs and a VPN.
Suggested answer: B, E, F

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html#USER_VPC.Hiding
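A boto3 sketch of answers B and E (the security group ID, corporate CIDR, and instance identifier are hypothetical); answer F is network plumbing rather than an API call:

```python
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# B: allow PostgreSQL traffic only from the corporate network range.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": "198.51.100.0/24",
                      "Description": "Corporate network"}],
    }],
)

# E: disable public accessibility so the endpoint resolves only to private IPs.
rds.modify_db_instance(
    DBInstanceIdentifier="media-postgres",
    PubliclyAccessible=False,
    ApplyImmediately=True,
)
```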

A company is about to launch a new product, and test databases must be re-created from production data. The company runs its production databases on an Amazon Aurora MySQL DB cluster. A Database Specialist needs to deploy a solution to create these test databases as quickly as possible with the least amount of administrative effort.

What should the Database Specialist do to meet these requirements?

A. Restore a snapshot from the production cluster into test clusters
B. Create logical dumps of the production cluster and restore them into new test clusters
C. Use database cloning to create clones of the production cluster
D. Add an additional read replica to the production cluster and use that node for testing
Suggested answer: C

Explanation:


https://aws.amazon.com/getting-started/hands-on/aurora-cloning-backtracking/

"Cloning an Aurora cluster is extremely useful if you want to assess the impact of changes to your database, or if you need to perform workload-intensive operations—such as exporting data or running analytical queries, or simply if you want to use a copy of your production database in a development or testing environment. You can make multiple clones of your Aurora DB cluster. You can even create additional clones from other clones, with the constraint that the clone databases must be created in the same region as the source databases.

A company with branch offices in Portland, New York, and Singapore has a three-tier web application that leverages a shared database. The database runs on Amazon RDS for MySQL and is hosted in the us-west-2 Region. The application has a distributed front end deployed in the us-west-2, ap-southeast-1, and us-east-2 Regions. This front end is used as a dashboard for Sales Managers in each branch office to see current sales statistics. There are complaints that the dashboard performs more slowly in the Singapore location than it does in Portland or New York. A solution is needed to provide consistent performance for all users in each location.

Which set of actions will meet these requirements?

A. Take a snapshot of the instance in the us-west-2 Region. Create a new instance from the snapshot in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
B. Create an RDS read replica in the ap-southeast-1 Region from the primary RDS DB instance in the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
C. Create a new RDS instance in the ap-southeast-1 Region. Use AWS DMS and change data capture (CDC) to update the new instance in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
D. Create an RDS read replica in the us-west-2 Region where the primary instance resides. Create a read replica in the ap-southeast-1 Region from the read replica located in the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
Suggested answer: B

Explanation:


https://aws.amazon.com/rds/features/read-replicas/

"Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. "

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.XRgn.html
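A boto3 sketch of the suggested answer (the account ID and identifiers are hypothetical). The client is created in the destination Region, and boto3 handles the cross-region presigned URL when SourceRegion is supplied:

```python
import boto3

rds = boto3.client("rds", region_name="ap-southeast-1")

# Create a cross-region read replica of the us-west-2 primary so the
# Singapore dashboard can query a local copy of the data.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="sales-mysql-replica-sin",
    SourceDBInstanceIdentifier="arn:aws:rds:us-west-2:123456789012:db:sales-mysql",
    SourceRegion="us-west-2",
    DBInstanceClass="db.r5.large",
)
```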

A company wants to migrate its existing on-premises Oracle database to Amazon Aurora PostgreSQL.

The migration must be completed with minimal downtime using AWS DMS. A Database Specialist must validate that the data was migrated accurately from the source to the target before the cutover.

The migration must have minimal impact on the performance of the source database.

Which approach will MOST effectively meet these requirements?

A. Use the AWS Schema Conversion Tool (AWS SCT) to convert source Oracle database schemas to the target Aurora DB cluster. Verify the datatype of the columns.
B. Use the table metrics of the AWS DMS task created for migrating the data to verify the statistics for the tables being migrated and to verify that the data definition language (DDL) statements are completed.
C. Enable the AWS Schema Conversion Tool (AWS SCT) premigration validation and review the premigration checklist to make sure there are no issues with the conversion.
D. Enable AWS DMS data validation on the task so the AWS DMS task compares the source and target records, and reports any mismatches.
Suggested answer: D

Explanation:


"To ensure that your data was migrated accurately from the source to the target, we highly recommend that you use data validation."

https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html

Reference: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Validating.html
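A boto3 sketch of enabling validation on an existing task and checking the per-table results (the task ARN is hypothetical; the task must be stopped while its settings are modified):

```python
import boto3
import json

dms = boto3.client("dms")
task_arn = "arn:aws:dms:us-east-1:123456789012:task:oracle2aurora"

# Fetch the task's current settings, switch on validation, and write them back.
task = dms.describe_replication_tasks(
    Filters=[{"Name": "replication-task-arn", "Values": [task_arn]}]
)["ReplicationTasks"][0]

settings = json.loads(task["ReplicationTaskSettings"])
settings["ValidationSettings"]["EnableValidation"] = True

dms.modify_replication_task(
    ReplicationTaskArn=task_arn,
    ReplicationTaskSettings=json.dumps(settings),
)

# Validation results are reported per table alongside the migration statistics.
for t in dms.describe_table_statistics(ReplicationTaskArn=task_arn)["TableStatistics"]:
    print(t["TableName"], t.get("ValidationState"))
```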

A company is planning to close for several days. A Database Specialist needs to stop all applications along with the DB instances to ensure employees do not have access to the systems during this time.

All databases are running on Amazon RDS for MySQL.

The Database Specialist wrote and executed a script to stop all the DB instances. When reviewing the logs, the Database Specialist found that Amazon RDS DB instances with read replicas did not stop.

How should the Database Specialist edit the script to fix this issue?

A. Stop the source instances before stopping their read replicas
B. Delete each read replica before stopping its corresponding source instance
C. Stop the read replicas before stopping their source instances
D. Use the AWS CLI to stop each read replica and source instance at the same time
Suggested answer: B

Explanation:


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_StopInstance.html

"The following are some limitations to stopping and starting a DB instance: You can't stop a DB instance that has a read replica, or that is a read replica." So if you cant stop a db with a read replica,you have to delete the read replica first to then stop it???

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_MySQL.Replication.ReadReplicas.html#USER_MySQL.Replication.ReadReplicas.StartStop
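A boto3 sketch of the corrected script logic, assuming every replica is safe to delete, per the suggested answer:

```python
import boto3

rds = boto3.client("rds")
instances = rds.describe_db_instances()["DBInstances"]

# Split instances into read replicas and sources; a replica reports the
# instance it replicates from in ReadReplicaSourceDBInstanceIdentifier.
replicas = [i for i in instances if i.get("ReadReplicaSourceDBInstanceIdentifier")]
sources = [i for i in instances if not i.get("ReadReplicaSourceDBInstanceIdentifier")]

# Delete each read replica first; a source with replicas cannot be stopped.
for replica in replicas:
    rds.delete_db_instance(
        DBInstanceIdentifier=replica["DBInstanceIdentifier"],
        SkipFinalSnapshot=True,
    )
    rds.get_waiter("db_instance_deleted").wait(
        DBInstanceIdentifier=replica["DBInstanceIdentifier"]
    )

# Then stop the remaining source instances.
for source in sources:
    rds.stop_db_instance(DBInstanceIdentifier=source["DBInstanceIdentifier"])
```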
