
Amazon DBS-C01 Practice Test - Questions Answers, Page 29

A database specialist is launching a test graph database using Amazon Neptune for the first time. The database specialist needs to insert millions of rows of test observations from a .csv file that is stored in Amazon S3. The database specialist has been using a series of API calls to upload the data to the Neptune DB instance.

Which combination of steps would allow the database specialist to upload the data faster? (Choose three.)

A. Ensure Amazon Cognito returns the proper AWS STS tokens to authenticate the Neptune DB instance to the S3 bucket hosting the CSV file.
B. Ensure the vertices and edges are specified in different .csv files with proper header column formatting.
C. Use AWS DMS to move data from Amazon S3 to the Neptune Loader.
D. Curl the S3 URI while inside the Neptune DB instance and then run the addVertex or addEdge commands.
E. Ensure an IAM role for the Neptune DB instance is configured with the appropriate permissions to allow access to the file in the S3 bucket.
F. Create an S3 VPC endpoint and issue an HTTP POST to the database's loader endpoint.
Suggested answer: B, E, F

Explanation:

Explanation from Amazon documents:

To upload data faster to a Neptune DB instance from a .csv file stored in Amazon S3, the database specialist should use the Neptune Bulk Loader, a feature that loads data from external files directly into a Neptune DB instance. The Neptune Bulk Loader is faster and has less overhead than API calls such as SPARQL INSERT statements or Gremlin addV and addE steps, and it supports both RDF and Gremlin data formats.

To use the Neptune Bulk Loader, the database specialist needs to do the following:

Ensure the vertices and edges are specified in different .csv files with proper header column formatting. This is required for the Gremlin data format, which uses two .csv files: one for vertices and one for edges. The first row of each file must contain the column names, which must match the property names of the graph elements. The files must also have a column named ~id for vertices and ~from and ~to columns for edges, which specify the unique identifiers of the graph elements.

Ensure an IAM role for the Neptune DB instance is configured with the appropriate permissions to allow access to the file in the S3 bucket. This is required for the Neptune DB instance to read the data from the S3 bucket. The IAM role must have a trust policy that allows Neptune to assume the role, and a permissions policy that allows access to the S3 bucket and objects.

Create an S3 VPC endpoint and issue an HTTP POST to the database's loader endpoint. This is required for the Neptune DB instance to reach the S3 bucket without going through the public internet. The S3 VPC endpoint must be in the same VPC as the Neptune DB instance. The HTTP POST request must specify the source parameter as the S3 URI of the .csv file, and optionally other parameters such as format, failOnError, and parallelism.

Therefore, options B, E, and F are the correct steps to upload the data faster. Option A is not necessary because Amazon Cognito is not used to authenticate the Neptune DB instance to the S3 bucket. Option C is not suitable because AWS DMS is not designed for loading graph data into Neptune. Option D is not efficient because curling the S3 URI and running the addVertex or addEdge commands would be slower and more costly than using the Neptune Bulk Loader.
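As a concrete illustration of option F, the sketch below issues the bulk-load request to the Neptune loader endpoint with Python's requests library. The cluster endpoint, S3 URI, and IAM role ARN are placeholder assumptions, not values from the question.

```python
import requests

# Placeholders -- substitute your own cluster endpoint, bucket prefix, and role ARN.
NEPTUNE_ENDPOINT = "my-neptune-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com"
LOADER_URL = f"https://{NEPTUNE_ENDPOINT}:8182/loader"

payload = {
    "source": "s3://example-bucket/observations/",  # prefix holding vertex and edge .csv files
    "format": "csv",                                 # Gremlin CSV format (separate vertex/edge files)
    "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
    "region": "us-east-1",
    "failOnError": "FALSE",
    "parallelism": "MEDIUM",
}

# The request must originate from inside the Neptune VPC (for example, an EC2
# instance or Lambda function in the same VPC), and S3 is reached through the
# gateway VPC endpoint rather than the public internet.
response = requests.post(LOADER_URL, json=payload, timeout=30)
response.raise_for_status()
print(response.json())  # returns a loadId that can be polled at /loader/{loadId}
```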

A company is using Amazon Redshift. A database specialist needs to allow an existing Redshift cluster to access data from other Redshift clusters, Amazon RDS for PostgreSQL databases, and AWS Glue Data Catalog tables.

Which combination of steps will meet these requirements with the MOST operational efficiency? (Choose three.)

A. Take a snapshot of the required tables from the other Redshift clusters. Restore the snapshot into the existing Redshift cluster.
B. Create external tables in the existing Redshift database to connect to the AWS Glue Data Catalog tables.
C. Unload the RDS tables and the tables from the other Redshift clusters into Amazon S3. Run COPY commands to load the tables into the existing Redshift cluster.
D. Use federated queries to access data in Amazon RDS.
E. Use data sharing to access data from the other Redshift clusters.
F. Use AWS Glue jobs to transfer the AWS Glue Data Catalog tables into Amazon S3. Create external tables in the existing Redshift database to access this data.
Suggested answer: B, D, E

Explanation:

Explanation from Amazon documents:

To allow an existing Redshift cluster to access data from other Redshift clusters, Amazon RDS for PostgreSQL databases, and AWS Glue Data Catalog tables, the database specialist should use the following features:

Create external tables in the existing Redshift database to connect to the AWS Glue Data Catalog tables. This feature allows you to query data stored in Amazon S3 using the AWS Glue Data Catalog as the metadata store. You can create external tables in your Redshift database that reference the data catalog tables and use SQL to query the data in S3. This approach is operationally efficient because it does not require moving or copying the data from S3 to Redshift.

Use federated queries to access data in Amazon RDS. This feature allows you to query and join data from one or more Amazon RDS for PostgreSQL and Amazon Aurora PostgreSQL databases with data already in your Amazon Redshift cluster. You can use SQL to query the RDS databases directly from your Redshift cluster without having to load or unload any data. This approach is operationally efficient because it reduces data movement and storage costs and simplifies data access and analysis.

Use data sharing to access data from the other Redshift clusters. This feature allows you to securely share live data across Redshift clusters without the complexity and delays associated with data copies and data movement. You can share data within or across AWS accounts using a producer-consumer model: the producer cluster grants privileges on database objects through a datashare, and the consumer clusters can then query the shared data as if it were in local tables. This approach is operationally efficient because it provides real-time, transactionally consistent data access and eliminates data duplication and stale data issues.

Therefore, options B, D, and E meet the requirements with the most operational efficiency. Option A is not efficient because it involves taking and restoring snapshots, which can be time-consuming and costly. Option C is not efficient because it involves unloading and loading data between S3 and Redshift, which also incurs additional time and cost. Option F is not necessary because it involves transferring the AWS Glue Data Catalog tables into S3, which can be avoided by creating external tables that connect to the data catalog directly.
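A minimal sketch of what options B, D, and E look like in practice, submitted through the Redshift Data API with boto3. The schema names, endpoints, ARNs, and namespace ID are hypothetical placeholders, and the IAM roles are assumed to already exist with the required permissions.

```python
import boto3

client = boto3.client("redshift-data", region_name="us-east-1")

statements = [
    # B. External schema backed by the AWS Glue Data Catalog (Redshift Spectrum).
    """
    CREATE EXTERNAL SCHEMA IF NOT EXISTS glue_catalog
    FROM DATA CATALOG DATABASE 'analytics_glue_db'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole'
    REGION 'us-east-1';
    """,
    # D. Federated query schema pointing at an RDS for PostgreSQL database.
    """
    CREATE EXTERNAL SCHEMA IF NOT EXISTS rds_pg
    FROM POSTGRES DATABASE 'appdb' SCHEMA 'public'
    URI 'appdb.abc123.us-east-1.rds.amazonaws.com' PORT 5432
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftFederatedRole'
    SECRET_ARN 'arn:aws:secretsmanager:us-east-1:123456789012:secret:appdb-creds';
    """,
    # E. Consume a datashare published by another Redshift cluster.
    """
    CREATE DATABASE shared_sales
    FROM DATASHARE sales_share OF NAMESPACE 'producer-namespace-guid';
    """,
]

for sql in statements:
    client.execute_statement(
        ClusterIdentifier="existing-cluster",
        Database="dev",
        DbUser="awsuser",
        Sql=sql,
    )
```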

A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL Multi-AZ DB instance. The company recently conducted tests on the database after business hours, and the tests generated additional database logs. As a result, free storage of the DB instance is low and is expected to be exhausted in 2 days.

The company wants to recover the free storage that the additional logs consumed. The solution must not result in downtime for the database.

Which solution will meet these requirements?

A. Modify the rds.log_retention_period parameter to 0. Reboot the DB instance to save the changes.
B. Modify the rds.log_retention_period parameter to 1440. Wait up to 24 hours for database logs to be deleted.
C. Modify the temp_file_limit parameter to a smaller value to reclaim space on the DB instance.
D. Modify the rds.log_retention_period parameter to 1440. Reboot the DB instance to save the changes.
Suggested answer: B

Explanation:

Explanation from Amazon documents:

The rds.log_retention_period parameter specifies how long your RDS for PostgreSQL DB instance keeps its log files. The default setting is 3 days (4,320 minutes), but you can set this value anywhere from 1 day (1,440 minutes) to 7 days (10,080 minutes). By reducing the log retention period, you can free up storage space on your DB instance without affecting its availability or performance.

To modify the rds.log_retention_period parameter, you need to use a custom DB parameter group for your RDS for PostgreSQL instance. You can modify the parameter value using the AWS Management Console, the AWS CLI, or the RDS API. The parameter change is applied immediately, but it may take up to 24 hours for the database logs to be deleted. Therefore, you do not need to reboot the DB instance to save the changes or to reclaim the storage space.

Therefore, option B is the correct solution to meet the requirements. Option A is incorrect because setting the rds.log_retention_period parameter to 0 disables log retention and prevents you from viewing or downloading any database logs; rebooting the DB instance is also unnecessary and may cause downtime. Option C is incorrect because the temp_file_limit parameter controls the maximum size of temporary files that a session can generate, not the size of database logs, so modifying it will not reclaim the space consumed by logs. Option D is incorrect because rebooting the DB instance is not required to save the changes or to reclaim the storage space.
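A minimal sketch of option B with boto3; the parameter group name is a placeholder for the custom DB parameter group already attached to the Multi-AZ instance.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# rds.log_retention_period is a dynamic parameter, so the change applies
# immediately and no reboot (and therefore no downtime) is needed.
rds.modify_db_parameter_group(
    DBParameterGroupName="custom-postgres-params",
    Parameters=[
        {
            "ParameterName": "rds.log_retention_period",
            "ParameterValue": "1440",   # keep logs for 1 day (value in minutes)
            "ApplyMethod": "immediate",
        }
    ],
)
```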

A company runs an ecommerce application on premises on Microsoft SQL Server. The company is planning to migrate the application to the AWS Cloud. The application code contains complex T-SQL queries and stored procedures.

The company wants to minimize database server maintenance and operating costs after the migration is completed. The company also wants to minimize the need to rewrite code as part of the migration effort.

Which solution will meet these requirements?

A. Migrate the database to Amazon Aurora PostgreSQL. Turn on Babelfish.
B. Migrate the database to Amazon S3. Use Amazon Redshift Spectrum for query processing.
C. Migrate the database to Amazon RDS for SQL Server. Turn on Kerberos authentication.
D. Migrate the database to an Amazon EMR cluster that includes multiple primary nodes.
Suggested answer: A

Explanation:

Explanation from Amazon documents:

Amazon Aurora PostgreSQL is a fully managed relational database service that is compatible with PostgreSQL. Aurora PostgreSQL offers up to three times better performance than standard PostgreSQL, as well as high availability, scalability, security, and durability. Aurora PostgreSQL also supports Babelfish, a feature that enables Aurora to understand queries from applications written for Microsoft SQL Server. Babelfish allows you to migrate your SQL Server databases to Aurora PostgreSQL with minimal or no code changes and to run complex T-SQL queries and stored procedures on Aurora PostgreSQL.

Migrating the database to Amazon Aurora PostgreSQL and turning on Babelfish will meet the requirements of minimizing database server maintenance and operating costs and minimizing the need to rewrite code as part of the migration effort. This solution allows the company to benefit from the performance, reliability, and cost-efficiency of Aurora PostgreSQL while preserving the compatibility and functionality of SQL Server. The company also avoids the effort and expense of managing and licensing SQL Server on premises or on AWS.

Therefore, option A is the correct solution to meet the requirements. Option B is not suitable because Amazon S3 is an object storage service that is not designed for OLTP workloads, and Amazon Redshift Spectrum, which queries data in S3 through Amazon Redshift, is not compatible with SQL Server or T-SQL. Option C is not optimal because Amazon RDS for SQL Server supports SQL Server but does not offer the same performance, scalability, or cost savings as Aurora PostgreSQL, and Kerberos authentication is a security feature that does not affect the migration effort or the operating costs. Option D is not suitable because Amazon EMR is a big data processing service that runs Apache Hadoop and Spark clusters, not relational databases; EMR does not support SQL Server or T-SQL, and it is not optimized for OLTP workloads.
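As a hedged sketch of how Babelfish might be turned on for a new Aurora PostgreSQL cluster: Babelfish is enabled through the rds.babelfish_status setting in a custom DB cluster parameter group at cluster creation. The names, engine version, and password below are illustrative assumptions, not values from the question.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Babelfish is turned on through the DB cluster parameter group used at
# cluster creation time. Identifiers and the password are placeholders.
rds.create_db_cluster_parameter_group(
    DBClusterParameterGroupName="babelfish-enabled",
    DBParameterGroupFamily="aurora-postgresql15",
    Description="Aurora PostgreSQL cluster params with Babelfish on",
)
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="babelfish-enabled",
    Parameters=[
        {
            "ParameterName": "rds.babelfish_status",
            "ParameterValue": "on",
            "ApplyMethod": "pending-reboot",  # static cluster-level parameter
        }
    ],
)
rds.create_db_cluster(
    DBClusterIdentifier="ecommerce-aurora-pg",
    Engine="aurora-postgresql",
    EngineVersion="15.4",                      # assumed Babelfish-capable version
    MasterUsername="postgres",
    MasterUserPassword="REPLACE_ME",
    DBClusterParameterGroupName="babelfish-enabled",
)
# Once the cluster is available, SQL Server clients connect to the Babelfish
# TDS endpoint on port 1433 and keep using T-SQL with minimal code changes.
```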

A company is running critical applications on AWS. Most of the application deployments use Amazon Aurora MySQL for the database stack. The company uses AWS CloudFormation to deploy the DB instances.

The company's application team recently implemented a CI/CD pipeline. A database engineer needs to integrate the database deployment CloudFormation stack with the newly built CI/CD platform. Updates to the CloudFormation stack must not update existing production database resources.

Which CloudFormation stack policy action should the database engineer implement to meet these requirements?

A. Use a Deny statement for the Update:Modify action on the production database resources.
B. Use a Deny statement for the action on the production database resources.
C. Use a Deny statement for the Update:Delete action on the production database resources.
D. Use a Deny statement for the Update:Replace action on the production database resources.
Suggested answer: D

Explanation:

Explanation from Amazon documents:

A CloudFormation stack policy is a JSON document that defines the update actions that can be performed on designated resources in a CloudFormation stack. A stack policy can be used to prevent accidental updates to or deletions of stack resources, such as a production database. The update actions a stack policy can allow or deny are Update:Modify, Update:Replace, Update:Delete, and Update:* (all update actions).

The Update:Replace action covers updates that replace an existing resource with a new one during a stack update. Replacement recreates the resource, which can cause data loss or downtime. To prevent the CI/CD-driven stack updates from replacing the production database resources, the database engineer should add a Deny statement for the Update:Replace action on the production database resources in the stack policy. A Deny statement overrides any Allow statements for the same action and resource, so the production database resources are protected from being replaced during a stack update.

Therefore, option D is the stack policy action that meets the requirements. Options A and C deny only in-place modification or deletion of the resources, which does not prevent CloudFormation from recreating them through a replacement update. Option B does not specify an update action for the Deny statement.
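A minimal sketch of applying such a stack policy with boto3. The stack name and the logical resource ID of the production database are hypothetical placeholders.

```python
import json
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# "database-stack" and "ProductionAuroraCluster" are placeholder names for the
# stack and the logical ID of the production DB resource in its template.
stack_policy = {
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "Update:Replace",
            "Principal": "*",
            "Resource": "LogicalResourceId/ProductionAuroraCluster",
        },
        {
            "Effect": "Allow",
            "Action": "Update:*",
            "Principal": "*",
            "Resource": "*",
        },
    ]
}

cfn.set_stack_policy(
    StackName="database-stack",
    StackPolicyBody=json.dumps(stack_policy),
)
```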

A database administrator needs to save a particular automated database snapshot from an Amazon RDS for Microsoft SQL Server DB instance for longer than the maximum number of days.

Which solution will meet these requirements in the MOST operationally efficient way?

A. Create a manual copy of the snapshot.
B. Export the contents of the snapshot to an Amazon S3 bucket.
C. Change the retention period of the snapshot to 45 days.
D. Create a native SQL Server backup. Save the backup to an Amazon S3 bucket.
Suggested answer: A

Explanation:

Explanation from Amazon documents:

Amazon RDS for Microsoft SQL Server supports two types of database snapshots: automated and manual. Automated snapshots are taken daily and are retained for a period of time that you specify, from 1 to 35 days. Manual snapshots are taken by you and are retained until you delete them.

To save a particular automated database snapshot for longer than the maximum number of days, the database administrator can create a manual copy of the snapshot. This can be done using the AWS Management Console, the AWS CLI, or the RDS API. The manual copy of the snapshot is retained until it is deleted, regardless of the retention period of the automated snapshot. This solution is the most operationally efficient way to meet the requirements because it does not require any additional steps or resources.

Therefore, option A is the correct solution to meet the requirements. Option B is not operationally efficient because exporting the contents of the snapshot to an Amazon S3 bucket can be time-consuming and costly. Option C is not possible because the maximum retention period for automated snapshots is 35 days, not 45 days. Option D is not operationally efficient because creating a native SQL Server backup and saving it to an Amazon S3 bucket can also be time-consuming and costly.
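A minimal sketch of option A with boto3; the snapshot identifiers are placeholder assumptions.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Copying an automated snapshot (identifiers are placeholders) produces a manual
# snapshot that is retained until it is explicitly deleted.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="rds:sqlserver-prod-2024-06-01-05-00",
    TargetDBSnapshotIdentifier="sqlserver-prod-keep-2024-06-01",
    CopyTags=True,
)
```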

An online bookstore uses Amazon Aurora MySQL as its backend database. After the online bookstore added a popular book to the online catalog, customers began reporting intermittent timeouts on the checkout page. A database specialist determined that increased load was causing locking contention on the database. The database specialist wants to automatically detect and diagnose database performance issues and to resolve bottlenecks faster.

Which solution will meet these requirements?

A. Turn on Performance Insights for the Aurora MySQL database. Configure and turn on Amazon DevOps Guru for RDS.
B. Create a CPU usage alarm. Select the CPU utilization metric for the DB instance. Create an Amazon Simple Notification Service (Amazon SNS) topic to notify the database specialist when CPU utilization is over 75%.
C. Use the Amazon RDS query editor to get the process ID of the query that is causing the database to lock. Run a command to end the process.
D. Use the SELECT INTO OUTFILE S3 statement to query data from the database. Save the data directly to an Amazon S3 bucket. Use Amazon Athena to analyze the files for long-running queries.
Suggested answer: A

Explanation:

Explanation from Amazon documents:

Performance Insights is a feature of Amazon Aurora MySQL that helps you quickly assess the load on your database and determine when and where to take action. Performance Insights displays a dashboard that shows the database load in terms of average active sessions (AAS), which is the average number of sessions that are actively running SQL statements at any given time. Performance Insights also shows the top SQL statements, waits, hosts, and users that are contributing to the database load.

Amazon DevOps Guru is a fully managed service that helps you improve the operational performance and availability of your applications by detecting operational issues and recommending specific actions for remediation. Amazon DevOps Guru applies machine learning to automatically analyze data such as application metrics, logs, events, and traces for behaviors that deviate from normal operating patterns. Amazon DevOps Guru supports Amazon RDS as a resource type and can monitor the performance and availability of your RDS databases.

By turning on Performance Insights for the Aurora MySQL database and configuring and turning on Amazon DevOps Guru for RDS, the database specialist can automatically detect and diagnose database performance issues and resolve bottlenecks faster. This solution allows the database specialist to monitor the database load and identify the root causes of performance problems using Performance Insights, and to receive actionable insights and recommendations from Amazon DevOps Guru to improve the operational performance and availability of the database.

Therefore, option A is the correct solution to meet the requirements. Option B is not sufficient because a CPU usage alarm only notifies the database specialist when CPU utilization is high; it does not help diagnose or resolve the database performance issues. Option C is not efficient because using the Amazon RDS query editor to find the process ID of the locking query and running a command to end the process requires manual intervention and may cause data loss or inconsistency. Option D is not efficient because using the SELECT INTO OUTFILE S3 statement to export data to an Amazon S3 bucket incurs additional time and cost, and analyzing the files with Amazon Athena will not help prevent or resolve locking contention on the database.
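A minimal sketch of option A with boto3, under the assumption that the database resources were deployed in a CloudFormation stack that DevOps Guru can be pointed at; the instance identifier and stack name are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")
guru = boto3.client("devops-guru", region_name="us-east-1")

# Enable Performance Insights on the Aurora MySQL writer instance
# ("bookstore-writer" is a placeholder instance identifier).
rds.modify_db_instance(
    DBInstanceIdentifier="bookstore-writer",
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=7,   # days (free tier)
    ApplyImmediately=True,
)

# Bring the database stack under DevOps Guru coverage. Defining coverage by a
# CloudFormation stack name is an assumption about how the resources were deployed.
guru.update_resource_collection(
    Action="ADD",
    ResourceCollection={"CloudFormation": {"StackNames": ["bookstore-db-stack"]}},
)
```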

A company has a web application that uses Amazon API Gateway to route HTTPS requests to AWS Lambda functions. The application uses an Amazon Aurora MySQL database for its data storage. The application has experienced unpredictable surges in traffic that overwhelm the database with too many connection requests. The company needs to implement a scalable solution that is more resilient to database failures as quickly as possible.

Which solution will meet these requirements MOST cost-effectively?

A. Migrate the Aurora MySQL database to Amazon Aurora Serverless by restoring a snapshot. Change the endpoint in the Lambda functions to use the new database.
B. Migrate the Aurora MySQL database to Amazon DynamoDB tables by using AWS Database Migration Service (AWS DMS). Change the endpoint in the Lambda functions to use the new database.
C. Create an Amazon EventBridge rule that invokes a Lambda function. Code the function to iterate over all existing connections and to call MySQL queries to end any connections in the sleep state.
D. Increase the instance class for the Aurora database with more memory. Set a larger value for the max_connections parameter.
Suggested answer: A

Explanation:

Explanation from Amazon documents:

Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora MySQL that automatically starts up, shuts down, and scales capacity up or down based on your application's needs. Aurora Serverless is ideal for applications with unpredictable or intermittent traffic patterns that experience sudden spikes or drops in demand. Aurora Serverless also provides high availability and durability by replicating your data across multiple Availability Zones and continuously backing up your data to Amazon S3.

Migrating the Aurora MySQL database to Amazon Aurora Serverless by restoring a snapshot will meet the requirements of implementing a scalable solution that is more resilient to database failures as quickly as possible. This solution allows the company to benefit from the auto-scaling and high availability features of Aurora Serverless, which will handle the unpredictable surges in traffic and prevent connection issues. The solution is also cost-effective, because the company pays only for the database capacity that it uses. The migration process is simple and fast: the company can use the AWS Management Console, the AWS CLI, or the RDS API to restore a snapshot of the existing Aurora MySQL database to an Aurora Serverless DB cluster, and then change the endpoint in the Lambda functions to use the new database.

Therefore, option A is the correct solution to meet the requirements. Option B is not cost-effective because migrating the Aurora MySQL database to Amazon DynamoDB tables by using AWS DMS incurs additional time and cost and may require significant code changes to adapt to a different data model and query language. Option C is not scalable because an EventBridge rule that invokes a Lambda function to end connections in the sleep state does not address the root cause of the connection issues, which is the lack of database capacity to handle the traffic spikes. Option D is not scalable because increasing the instance class and raising the max_connections parameter does not provide auto-scaling or high availability, and connection issues may still occur if traffic exceeds the provisioned capacity.
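A minimal sketch of option A with boto3, assuming an Aurora Serverless v1 restore; the snapshot and cluster identifiers and the capacity bounds are placeholder assumptions.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Restore the existing cluster snapshot into an Aurora Serverless (v1) cluster.
# The source snapshot must come from an engine version that supports the
# serverless engine mode.
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="webapp-aurora-serverless",
    SnapshotIdentifier="webapp-aurora-snapshot-2024-06-01",
    Engine="aurora-mysql",
    EngineMode="serverless",
    ScalingConfiguration={
        "MinCapacity": 2,
        "MaxCapacity": 64,
        "AutoPause": True,
        "SecondsUntilAutoPause": 300,
    },
)
# After the cluster is available, point the Lambda functions' database endpoint
# configuration at the new cluster endpoint.
```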

A company has an Amazon Redshift cluster with database audit logging enabled. A security audit shows that raw SQL statements that run against the Redshift cluster are being logged to an Amazon S3 bucket. The security team requires that authentication logs are generated for use in an intrusion detection system (IDS), but the security team does not require SQL queries.

What should a database specialist do to remediate this issue?

A. Set the parameter to true in the database parameter group.
B. Turn off the query monitoring rule in the Redshift cluster's workload management (WLM).
C. Set the enable_user_activity_logging parameter to false in the database parameter group.
D. Disable audit logging on the Redshift cluster.
Suggested answer: C
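The suggested answer works by turning off only the user activity log (the raw SQL text) in the cluster's parameter group, while the connection and user logs continue to be delivered to Amazon S3 for the IDS. A minimal sketch with boto3; the parameter group name is a placeholder for the group attached to the cluster.

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Only the user activity (raw SQL) log is switched off; connection and user
# logs keep flowing to the S3 audit-log bucket.
redshift.modify_cluster_parameter_group(
    ParameterGroupName="custom-redshift-params",
    Parameters=[
        {"ParameterName": "enable_user_activity_logging", "ParameterValue": "false"}
    ],
)
```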

A company is using an Amazon Aurora PostgreSQL DB cluster for the backend of its mobile application. The application is running continuously and a database specialist is satisfied with high availability and fast failover, but is concerned about performance degradation after failover.

How can the database specialist minimize the performance degradation after failover?

A. Enable cluster cache management for the Aurora DB cluster and set the promotion priority for the writer DB instance and replica to tier-0.
B. Enable cluster cache management for the Aurora DB cluster and set the promotion priority for the writer DB instance and replica to tier-1.
C. Enable Query Plan Management for the Aurora DB cluster and perform a manual plan capture.
D. Enable Query Plan Management for the Aurora DB cluster and force the query optimizer to use the desired plan.
Suggested answer: A

Explanation:

Explanation from Amazon documents:

Amazon Aurora PostgreSQL supports cluster cache management, a feature that helps reduce the impact of failover on query performance by preserving the cache of the primary DB instance on one or more Aurora Replicas. Cluster cache management allows you to assign a promotion priority tier to each DB instance in your Aurora DB cluster. The promotion priority tier determines the order in which Aurora Replicas are considered for promotion to the primary instance after a failover; the lower the numerical value of the tier, the higher the priority.

By enabling cluster cache management for the Aurora DB cluster and setting the promotion priority for the writer DB instance and the replica to tier-0, the database specialist can minimize the performance degradation after failover. This ensures that the primary DB instance and one Aurora Replica have the same cache contents and are in the same promotion priority tier. In the event of a failover, Aurora promotes the tier-0 replica to the primary role and the cache is preserved, which reduces the number of cache misses and improves query performance after failover.

Therefore, option A is the correct solution to minimize the performance degradation after failover. Option B is incorrect because setting the promotion priority for the writer DB instance and replica to tier-1 does not preserve the cache after failover; Aurora first tries to promote a tier-0 replica, which may have a different cache than the primary instance. Option C is incorrect because enabling Query Plan Management and performing a manual plan capture does not affect the cache behavior after failover; Query Plan Management helps you control query execution plans by creating and enforcing custom execution plans. Option D is incorrect because forcing the query optimizer to use a desired plan may avoid suboptimal plans, but it does not prevent cache misses after failover.
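A minimal sketch of option A with boto3. The cluster parameter group name and instance identifiers are placeholders, and the parameter group is assumed to already be attached to the Aurora PostgreSQL cluster.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Turn on cluster cache management in the cluster parameter group
# ("aurora-pg-ccm-params" is a placeholder name).
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="aurora-pg-ccm-params",
    Parameters=[
        {
            "ParameterName": "apg_ccm_enabled",
            "ParameterValue": "1",          # 1 = cluster cache management on
            "ApplyMethod": "immediate",
        }
    ],
)

# Put the writer and its designated failover target in promotion tier 0 so the
# replica that preserves the cache is the one promoted on failover.
for instance_id in ("app-writer", "app-reader-1"):
    rds.modify_db_instance(
        DBInstanceIdentifier=instance_id,
        PromotionTier=0,
        ApplyImmediately=True,
    )
```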
