
Amazon DBS-C01 Practice Test - Questions Answers, Page 30


A company performs an audit on various data stores and discovers that an Amazon S3 bucket is storing a credit card number. The S3 bucket is the target of an AWS Database Migration Service (AWS DMS) continuous replication task that uses change data capture (CDC). The company determines that this field is not needed by anyone who uses the target data. The company has manually removed the existing credit card data from the S3 bucket.

What is the MOST operationally efficient way to prevent new credit card data from being written to the S3 bucket?

A. Add a transformation rule to the DMS task to ignore the column from the source data endpoint.
B. Add a transformation rule to the DMS task to mask the column by using a simple SQL query.
C. Configure the target S3 bucket to use server-side encryption with AWS KMS keys (SSE-KMS).
D. Remove the credit card number column from the data source so that the DMS task does not need to be altered.

Suggested answer: A
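
For reference, option A corresponds to a table-mapping transformation rule on the DMS task. The following is a minimal boto3 sketch; the task ARN, schema, table, and column names are placeholders rather than values taken from the question.

import json
import boto3

dms = boto3.client("dms")

# Table mappings: keep the existing selection rule and add a transformation
# rule that removes the credit card column before data reaches the S3 target.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-payments-table",
            "object-locator": {"schema-name": "sales", "table-name": "payments"},
            "rule-action": "include",
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "drop-credit-card-column",
            "rule-target": "column",
            "object-locator": {
                "schema-name": "sales",
                "table-name": "payments",
                "column-name": "credit_card_number",
            },
            "rule-action": "remove-column",
        },
    ]
}

# Apply the updated mappings to the existing CDC task.
dms.modify_replication_task(
    ReplicationTaskArn="arn:aws:dms:us-east-1:123456789012:task:EXAMPLE",
    TableMappings=json.dumps(table_mappings),
)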

A company migrated an on-premises Oracle database to Amazon RDS for Oracle. A database specialist needs to monitor the latency of the database.

Which solution will meet this requirement with the LEAST operational overhead?

A. Publish RDS Performance Insights metrics to Amazon CloudWatch. Add AWS CloudTrail filters to monitor database performance.
B. Install Oracle Statspack. Enable the performance statistics feature to collect, store, and display performance data to monitor database performance.
C. Enable RDS Performance Insights to visualize the database load. Enable Enhanced Monitoring to view how different threads use the CPU.
D. Create a new DB parameter group that includes the AllocatedStorage, DBInstanceClassMemory, and DBInstanceVCPU variables. Enable RDS Performance Insights.

Suggested answer: C

Explanation:

Explanation from Amazon documents: Amazon RDS for Oracle is a fully managed relational database service that supports Oracle Database. Amazon RDS for Oracle provides several features to monitor the performance and health of your database, such as RDS Performance Insights, Enhanced Monitoring, Amazon CloudWatch, and AWS CloudTrail.

RDS Performance Insights is a feature that helps you quickly assess the load on your database and determine when and where to take action. RDS Performance Insights displays a dashboard that shows the database load in terms of average active sessions (AAS), which is the average number of sessions that are actively running SQL statements at any given time. RDS Performance Insights also shows the top SQL statements, waits, hosts, and users that are contributing to the database load.

Enhanced Monitoring is a feature that provides metrics in real time for the operating system (OS) that your DB instance runs on. Enhanced Monitoring metrics include CPU utilization, memory, file system, disk I/O, network I/O, process list, and thread count. Enhanced Monitoring allows you to view how different threads use the CPU and how much memory each thread consumes.

By enabling RDS Performance Insights and Enhanced Monitoring for the RDS for Oracle DB instance, the database specialist can monitor the latency of the database with the least operational overhead. This solution allows the database specialist to use the RDS console or API to enable these features and view the metrics and dashboards without installing any additional software or tools. It also provides comprehensive and granular information about the database load and resource utilization. Therefore, option C is the correct solution to meet the requirement.

Option A is not optimal because publishing RDS Performance Insights metrics to Amazon CloudWatch and adding AWS CloudTrail filters to monitor database performance will incur additional operational overhead and cost. Amazon CloudWatch is a service that collects monitoring and operational data in the form of logs, metrics, and events. AWS CloudTrail is a service that records AWS API calls for your account and delivers log files to you. These services are useful for monitoring performance trends and auditing activities, but they are not necessary for monitoring latency in real time.

Option B is not optimal because installing Oracle Statspack and enabling the performance statistics feature requires manual intervention and configuration on the RDS for Oracle DB instance. Oracle Statspack is a tool that collects, stores, and displays performance data for Oracle Database. The performance statistics feature is an option that enables Statspack to collect additional statistics such as wait events, latches, SQL statements, segments, and rollback segments. These tools are useful for performance tuning and troubleshooting, but they are not as easy to use as RDS Performance Insights and Enhanced Monitoring.

Option D is not relevant because creating a new DB parameter group that includes the AllocatedStorage, DBInstanceClassMemory, and DBInstanceVCPU variables will not help monitor the latency of the database. A DB parameter group is a collection of DB engine configuration values that define how a DB instance operates. The AllocatedStorage parameter specifies the allocated storage size in gibibytes (GiB), DBInstanceClassMemory specifies the amount of memory available to an instance class in bytes, and DBInstanceVCPU specifies the number of virtual CPUs available to an instance class. These parameters configure the capacity and performance of a DB instance, but they do not provide any monitoring or metrics information. Enabling RDS Performance Insights alone will not provide enough information about OS-level metrics such as CPU utilization or memory usage.
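
As a minimal illustration of option C, both features can be enabled on an existing instance with a single ModifyDBInstance call through boto3. The instance identifier and monitoring role ARN below are placeholders.

import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="orcl-prod",        # placeholder instance name
    EnablePerformanceInsights=True,          # database load / AAS dashboard
    PerformanceInsightsRetentionPeriod=7,    # 7 days is the free retention tier
    MonitoringInterval=60,                   # Enhanced Monitoring granularity, in seconds
    MonitoringRoleArn="arn:aws:iam::123456789012:role/rds-monitoring-role",
    ApplyImmediately=True,
)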

A database specialist needs to enable IAM authentication on an existing Amazon Aurora PostgreSQL DB cluster. The database specialist already has modified the DB cluster settings, has created IAM and database credentials, and has distributed the credentials to the appropriate users.

What should the database specialist do next to establish the credentials for the users to use to log in to the DB cluster?

A. Add the users' IAM credentials to the Aurora cluster parameter group.
B. Run the generate-db-auth-token command with the user names to generate a temporary password for the users.
C. Add the users' IAM credentials to the default credential profile. Use the AWS Management Console to access the DB cluster.
D. Use an AWS Security Token Service (AWS STS) token by sending the IAM access key and secret key as headers to the DB cluster API endpoint.

Suggested answer: B

Explanation:

Explanation from Amazon documents: Amazon Aurora PostgreSQL supports IAM authentication, which is a method of using AWS Identity and Access Management (IAM) to manage database access. IAM authentication allows you to use IAM users and roles to control who can access your Aurora PostgreSQL DB cluster, instead of using a traditional database username and password. IAM authentication also provides more security by using temporary credentials that are automatically rotated.

To enable IAM authentication on an existing Aurora PostgreSQL DB cluster, the database specialist needs to do the following:

1. Modify the DB cluster settings to enable IAM database authentication. This can be done using the AWS Management Console, the AWS CLI, or the RDS API.
2. Create IAM and database credentials for each user who needs access to the DB cluster. The IAM credentials consist of an access key ID and a secret access key. The database credentials consist of a database username and an optional password. The IAM credentials and the database username must match.
3. Distribute the IAM and database credentials to the appropriate users. The users must keep their credentials secure and not share them with anyone else.
4. Run the generate-db-auth-token command with the user names to generate a temporary password for the users. This command is part of the AWS CLI and generates an authentication token that is valid for 15 minutes. The authentication token is a string that has the same format as a password. The users can use this token as their password when they connect to the DB cluster using a SQL client.

Therefore, option B is the correct solution to establish the credentials for the users to use to log in to the DB cluster. Option A is incorrect because adding the users' IAM credentials to the Aurora cluster parameter group is not necessary or possible. A cluster parameter group is a collection of DB engine configuration values that define how a DB cluster operates. Option C is incorrect because adding the users' IAM credentials to the default credential profile and using the AWS Management Console to access the DB cluster is not supported or secure. The default credential profile is a file that stores your AWS credentials for use by the AWS CLI or SDKs, and the AWS Management Console does not allow you to connect to an Aurora PostgreSQL DB cluster using IAM authentication. Option D is incorrect because using an AWS Security Token Service (AWS STS) token by sending the IAM access key and secret key as headers to the DB cluster API endpoint is not supported or secure. AWS STS is a service that enables you to request temporary, limited-privilege credentials for IAM users or federated users. The DB cluster API endpoint is an endpoint that allows you to perform administrative actions on your DB cluster using RDS API calls.
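
A short sketch of option B in Python: the authentication token returned by generate-db-auth-token (here the equivalent boto3 call) is used as the password when connecting with a SQL client. The cluster endpoint, database user, and Region are placeholders, psycopg2 is assumed as the client library, and the database user is assumed to have been granted the rds_iam role.

import boto3
import psycopg2

REGION = "us-east-1"
ENDPOINT = "my-aurora-cluster.cluster-abc123.us-east-1.rds.amazonaws.com"  # placeholder
USER = "app_user"  # must match the database user created for IAM authentication

rds = boto3.client("rds", region_name=REGION)

# Generates a signed authentication token that is valid for 15 minutes.
token = rds.generate_db_auth_token(
    DBHostname=ENDPOINT, Port=5432, DBUsername=USER, Region=REGION
)

# The token is passed as the password; SSL is required for IAM authentication.
conn = psycopg2.connect(
    host=ENDPOINT, port=5432, dbname="postgres", user=USER,
    password=token, sslmode="require",
)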

A large financial services company uses Amazon ElastiCache for Redis for its new application that has a global user base. A database administrator must develop a caching solution that will be available across AWS Regions and include low-latency replication and failover capabilities for disaster recovery (DR). The company's security team requires the encryption of cross-Region data transfers.

Which solution meets these requirements with the LEAST amount of operational effort?

A. Enable cluster mode in ElastiCache for Redis. Then create multiple clusters across Regions and replicate the cache data by using AWS Database Migration Service (AWS DMS). Promote a cluster in the failover Region to handle production traffic when DR is required.
B. Create a global datastore in ElastiCache for Redis. Then create replica clusters in two other Regions. Promote one of the replica clusters as primary when DR is required.
C. Disable cluster mode in ElastiCache for Redis. Then create multiple replication groups across Regions and replicate the cache data by using AWS Database Migration Service (AWS DMS). Promote a replication group in the failover Region to primary when DR is required.
D. Create a snapshot of ElastiCache for Redis in the primary Region and copy it to the failover Region. Use the snapshot to restore the cluster from the failover Region when DR is required.

Suggested answer: B

Explanation:

Explanation from Amazon documents: Amazon ElastiCache for Redis is a fully managed in-memory data store that supports Redis, an open source, key-value database. Amazon ElastiCache for Redis provides several features to enhance the performance, availability, scalability, and security of your Redis data, such as cluster mode, global datastore, replication groups, snapshots, and encryption.

A global datastore is a feature that allows you to create a cross-Region read replica of your ElastiCache for Redis cluster. A global datastore consists of a primary cluster that is replicated across up to two other Regions as secondary clusters. A global datastore provides low-latency reads and high availability for your Redis data across Regions. A global datastore also supports encryption of cross-Region data transfers using AWS Key Management Service (AWS KMS).

To create a global datastore in ElastiCache for Redis, you need to do the following:

1. Create a primary cluster in one Region. You can use an existing cluster or create a new one. The cluster must have cluster mode enabled and use Redis engine version 5.0.6 or later.
2. Create a global datastore and add the primary cluster to it. You can use the AWS Management Console, the AWS CLI, or the ElastiCache API to create a global datastore.
3. Create one or two secondary clusters in other Regions and add them to the global datastore. The secondary clusters must have the same specifications as the primary cluster, such as node type, number of shards, and number of replicas per shard.
4. Enable encryption in transit and at rest for the primary and secondary clusters. Specify a customer master key (CMK) from AWS KMS for each cluster.

By creating a global datastore in ElastiCache for Redis and creating replica clusters in two other Regions, the database administrator can develop a caching solution that is available across Regions and includes low-latency replication and failover capabilities for DR. This solution also meets the security requirement of encrypting cross-Region data transfers using AWS KMS, and it requires the least amount of operational effort because it does not involve any data migration or manual intervention. Therefore, option B is the correct solution to meet the requirements.

Option A is not optimal because enabling cluster mode in ElastiCache for Redis and creating multiple clusters across Regions will not provide cross-Region replication or failover capabilities, and replicating the cache data with AWS DMS will incur additional time and cost and may not support encryption of cross-Region data transfers. Option C is not optimal for the same reasons: disabling cluster mode and creating multiple replication groups across Regions does not provide cross-Region replication or failover capabilities, and AWS DMS replication adds time and cost. Option D is not optimal because creating a snapshot of ElastiCache for Redis in the primary Region and copying it to the failover Region will not provide low-latency replication or high availability for the Redis data across Regions, and using the snapshot to restore the cluster in the failover Region when DR is required involves manual intervention and downtime.
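
A sketch of option B using boto3. The replication group IDs, Regions, and the global datastore ID prefix are placeholders; the primary replication group is assumed to already exist with cluster mode and encryption enabled.

import boto3

primary = boto3.client("elasticache", region_name="us-east-1")
secondary = boto3.client("elasticache", region_name="eu-west-1")

# Create the global datastore from an existing primary replication group.
primary.create_global_replication_group(
    GlobalReplicationGroupIdSuffix="game-cache",          # placeholder suffix
    GlobalReplicationGroupDescription="Cross-Region cache for DR",
    PrimaryReplicationGroupId="game-cache-use1",          # existing primary cluster
)

# Add a secondary (replica) cluster in another Region. The service assigns a
# Region-specific prefix to the global datastore ID, e.g. "ldgnf-game-cache".
secondary.create_replication_group(
    ReplicationGroupId="game-cache-euw1",
    ReplicationGroupDescription="Secondary cluster for the global datastore",
    GlobalReplicationGroupId="ldgnf-game-cache",          # placeholder ID
)

# During DR, promote the secondary Region to primary.
secondary.failover_global_replication_group(
    GlobalReplicationGroupId="ldgnf-game-cache",
    PrimaryRegion="eu-west-1",
    PrimaryReplicationGroupId="game-cache-euw1",
)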

A company uses an Amazon Redshift cluster to run its analytical workloads. Corporate policy requires that the company's data be encrypted at rest with customer managed keys. The company's disaster recovery plan requires that backups of the cluster be copied into another AWS Region on a regular basis.

How should a database specialist automate the process of backing up the cluster data in compliance with these policies?

A. Copy the AWS Key Management Service (AWS KMS) customer managed key from the source Region to the destination Region. Set up an AWS Glue job in the source Region to copy the latest snapshot of the Amazon Redshift cluster from the source Region to the destination Region. Use a time-based schedule in AWS Glue to run the job on a daily basis.
B. Create a new AWS Key Management Service (AWS KMS) customer managed key in the destination Region. Create a snapshot copy grant in the destination Region specifying the new key. In the source Region, configure cross-Region snapshots for the Amazon Redshift cluster specifying the destination Region, the snapshot copy grant, and retention periods for the snapshot.
C. Copy the AWS Key Management Service (AWS KMS) customer managed key from the source Region to the destination Region. Create Amazon S3 buckets in each Region using the keys from their respective Regions. Use Amazon EventBridge (Amazon CloudWatch Events) to schedule an AWS Lambda function in the source Region to copy the latest snapshot to the S3 bucket in that Region. Configure S3 Cross-Region Replication to copy the snapshots to the destination Region, specifying the source and destination KMS key IDs in the replication configuration.
D. Use the same customer-supplied key materials to create a CMK with the same private key in the destination Region. Configure cross-Region snapshots in the source Region targeting the destination Region. Specify the corresponding CMK in the destination Region to encrypt the snapshot.

Suggested answer: B

Explanation:

Explanation from Amazon documents: Amazon Redshift supports encryption at rest using AWS Key Management Service (AWS KMS) customer master keys (CMKs). To copy encrypted snapshots across Regions, you need to create a snapshot copy grant in the destination Region and specify a CMK in that Region. You also need to configure cross-Region snapshots in the source Region and provide the destination Region, the snapshot copy grant, and retention periods for the snapshots. This way, you can automate the process of backing up the cluster data in compliance with the corporate policies.

Option A is incorrect because you cannot copy a CMK from one Region to another; you can only import key material from an external source into a CMK in a specific Region. Option C is incorrect because it involves unnecessary steps of copying snapshots to S3 buckets and using S3 Cross-Region Replication. Option D is incorrect because it is not possible to create a CMK with the same private key as another CMK in a different Region; you can only use customer-supplied key material to create a CMK with a specific key ID in a specific Region.
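
A sketch of option B using boto3: the snapshot copy grant is created in the destination Region against the new customer managed key, and cross-Region snapshot copy is then enabled from the source Region. The key ARN, grant name, cluster identifier, and Regions are placeholders.

import boto3

dest = boto3.client("redshift", region_name="us-west-2")
src = boto3.client("redshift", region_name="us-east-1")

# 1. In the destination Region, create a snapshot copy grant that lets
#    Amazon Redshift encrypt copied snapshots with the new KMS key there.
dest.create_snapshot_copy_grant(
    SnapshotCopyGrantName="dr-copy-grant",
    KmsKeyId="arn:aws:kms:us-west-2:123456789012:key/EXAMPLE",  # placeholder key ARN
)

# 2. In the source Region, enable cross-Region snapshot copy for the cluster,
#    referencing the destination Region, the grant, and a retention period.
src.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",   # placeholder cluster name
    DestinationRegion="us-west-2",
    RetentionPeriod=7,                       # days to keep copied automated snapshots
    SnapshotCopyGrantName="dr-copy-grant",
)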

A coffee machine manufacturer is equipping all of its coffee machines with IoT sensors. The IoT core application is writing measurements for each record to Amazon Timestream. The records have multiple dimensions and measures. The measures include multiple measure names and values.

An analysis application is running queries against the Timestream database and is focusing on data from the current week. A database specialist needs to optimize the query costs of the analysis application.

Which solution will meet these requirements?

A. Ensure that queries contain whole records over the relevant time range.
B. Use time range, measure name, and dimensions in the WHERE clause of the query.
C. Avoid canceling any query after the query starts running.
D. Implement exponential backoff in the application.

Suggested answer: B

Explanation:

Explanation from Amazon documents: Amazon Timestream is a serverless time series database service that allows you to store and analyze time series data at any scale. To optimize the cost of queries, you should use the following best practices:

- Include only the measure and dimension names essential to the query. Adding extraneous columns increases data scans and therefore also increases the query cost.
- Include a time range in the WHERE clause of your query. For example, if you only need the last one hour of data in your dataset, include a time predicate such as time > ago(1h).
- Include the measure names in the WHERE clause of the query when a query accesses a subset of measures in a table.

Option B follows these best practices, while option A does not. Option C is incorrect because canceling a query can save on cost if the query will not return the desired results. Option D is irrelevant because exponential backoff is a technique to handle throttling errors, not to optimize query costs.
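
A sketch of the kind of query that option B describes, issued through the Timestream query API with boto3. The database, table, measure, and dimension names are placeholders, and a single-measure record layout is assumed.

import boto3

tsq = boto3.client("timestream-query")

# Restrict the scan by time range, measure name, and a dimension, and select
# only the columns the analysis needs, so the query scans less data.
query = """
    SELECT machine_id, bin(time, 1h) AS hour, AVG(measure_value::double) AS avg_temp
    FROM "coffee_iot"."machine_metrics"
    WHERE time BETWEEN ago(7d) AND now()
      AND measure_name = 'water_temperature'
      AND region = 'eu-central'
    GROUP BY machine_id, bin(time, 1h)
"""

result = tsq.query(QueryString=query)
for row in result["Rows"]:
    print(row["Data"])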

A database specialist needs to replace the encryption key for an Amazon RDS DB instance. The database specialist needs to take immediate action to ensure security of the database.

Which solution will meet these requirements?

A. Modify the DB instance to update the encryption key. Perform this update immediately without waiting for the next scheduled maintenance window.
B. Export the database to an Amazon S3 bucket. Import the data to an existing DB instance by using the export file. Specify a new encryption key during the import process.
C. Create a manual snapshot of the DB instance. Create an encrypted copy of the snapshot by using a new encryption key. Create a new DB instance from the encrypted snapshot.
D. Create a manual snapshot of the DB instance. Restore the snapshot to a new DB instance. Specify a new encryption key during the restoration process.

Suggested answer: D

A gaming company is building a mobile game that will have as many as 25,000 active concurrent users in the first 2 weeks after launch. The game has a leaderboard that shows the 10 highest scoring players over the last 24 hours. The leaderboard calculations are processed by an AWS Lambda function, which takes about 10 seconds. The company wants the data on the leaderboard to be no more than 1 minute old.

Which architecture will meet these requirements in the MOST operationally efficient way?

A. Deliver the player data to an Amazon Timestream database. Create an Amazon ElastiCache for Redis cluster. Configure the Lambda function to store the results in Redis. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the Redis cluster for the leaderboard data.
B. Deliver the player data to an Amazon Timestream database. Create an Amazon DynamoDB table. Configure the Lambda function to store the results in DynamoDB. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the DynamoDB table for the leaderboard data.
C. Deliver the player data to an Amazon Aurora MySQL database. Create an Amazon DynamoDB table. Configure the Lambda function to store the results in MySQL. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the DynamoDB table for the leaderboard data.
D. Deliver the player data to an Amazon Neptune database. Create an Amazon ElastiCache for Redis cluster. Configure the Lambda function to store the results in Redis. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the Redis cluster for the leaderboard data.

Suggested answer: A

Explanation:

Amazon Timestream is a serverless time series database service that allows you to store and analyze time series data at any scale. It is well suited for gaming applications that generate high volumes of data from player events, such as scores, achievements, and actions. Amazon ElastiCache for Redis is a fully managed in-memory data store that provides fast and scalable performance for applications that need sub-millisecond latency. It can be used as a cache layer to store frequently accessed data, such as leaderboard results, and reduce the load on the database. AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. It can be used to process the data from Amazon Timestream and store the leaderboard results in Amazon ElastiCache for Redis. Amazon EventBridge is a serverless event bus service that makes it easy to connect your applications with data from a variety of sources. It can be used to create a scheduled event that triggers the Lambda function once every minute, ensuring that the leaderboard data is updated regularly. The game server can then query the Redis cluster for the leaderboard data, which will be no more than 1 minute old.

Option B is incorrect because Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It is not designed for time series data, which requires efficient ingestion, compression, and querying of high-volume data streams. Option C is incorrect because Amazon Aurora is a relational database that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. It is not optimized for time series data, which requires specialized indexing and partitioning techniques. Option D is incorrect because Amazon Neptune is a graph database that supports property graph and RDF models. It is not suitable for time series data, which requires high ingestion rates and temporal queries.
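
A minimal sketch of the Lambda function in option A, assuming the redis-py client, a hypothetical Timestream table of player score events, and an EventBridge rule that invokes the function once per minute. The endpoint, database, table, and key names are placeholders.

import boto3
import redis

tsq = boto3.client("timestream-query")

# Placeholder ElastiCache for Redis endpoint.
cache = redis.Redis(
    host="leaderboard.abc123.use1.cache.amazonaws.com",
    port=6379,
    decode_responses=True,
)

# Hypothetical Timestream table of player score events from the last 24 hours.
TOP10_QUERY = """
    SELECT player_id, MAX(measure_value::bigint) AS best_score
    FROM "game"."player_scores"
    WHERE time > ago(24h) AND measure_name = 'score'
    GROUP BY player_id
    ORDER BY best_score DESC
    LIMIT 10
"""

def handler(event, context):
    # Recompute the top 10 scores (the ~10-second calculation), then cache them.
    result = tsq.query(QueryString=TOP10_QUERY)
    leaders = {
        row["Data"][0]["ScalarValue"]: int(row["Data"][1]["ScalarValue"])
        for row in result["Rows"]
    }

    # Overwrite the cached leaderboard sorted set that the game server reads.
    pipe = cache.pipeline()
    pipe.delete("leaderboard:24h")
    if leaders:
        pipe.zadd("leaderboard:24h", leaders)
    pipe.execute()

The game server then reads the cached sorted set (for example with ZREVRANGE) instead of querying Timestream directly, which keeps the leaderboard data no more than about one minute old.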

A retail company uses Amazon Redshift for its 1 PB data warehouse. Several analytical workloads run on a Redshift cluster. The tables within the cluster have grown rapidly. End users are reporting poor performance of daily reports that run on the transaction fact tables.

A database specialist must change the design of the tables to improve the reporting performance. All the changes must be applied dynamically. The changes must have the least possible impact on users and must optimize the overall table size.

Which solution will meet these requirements?

A. Use the STL_SCAN view to understand how the tables are getting scanned. Identify the columns that are used in filter and group by conditions. Create a temporary table with the identified columns as sort keys and compression as Zstandard (ZSTD) by copying the data from the original table. Drop the original table. Give the temporary table the same name that the original table had.
B. Run an explain plan to analyze the queries on the tables. Consider recommendations from Amazon Redshift Advisor. Identify the columns that are used in filter and group by conditions. Convert the recommended columns from Redshift Advisor into sort keys with compression encoding set to RAW. Set the rest of the column compression encoding to AZ64.
C. Run an explain plan to analyze the queries on the tables. Consider recommendations from Amazon Redshift Advisor. Identify the columns that are used in filter and group by conditions. Convert the recommended columns from Redshift Advisor into sort keys with compression encoding set to LZO. Set the rest of the column compression encoding to Zstandard (ZSTD).
D. Run an explain plan to analyze the queries on the tables. Consider recommendations from Amazon Redshift Advisor. Identify the columns that are used in filter and group by conditions. Create a deep copy of the table with the identified columns as sort keys and compression for all columns as Zstandard (ZSTD) by using a bulk insert. Drop the original table. Give the copy table the same name that the original table had.

Suggested answer: D
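
A sketch of the deep copy described in option D, issued through the Amazon Redshift Data API. The cluster, database, table, and column definitions are placeholders; the actual sort keys would come from the explain plan and Redshift Advisor review described in the option.

import boto3

rsd = boto3.client("redshift-data")

statements = [
    # New table with the recommended sort keys and ZSTD column compression.
    """
    CREATE TABLE sales.fact_transactions_new (
        transaction_id BIGINT        ENCODE zstd,
        sale_date      DATE          ENCODE zstd,
        store_id       INTEGER       ENCODE zstd,
        amount         DECIMAL(12,2) ENCODE zstd
    )
    DISTKEY (store_id)
    SORTKEY (sale_date, store_id)
    """,
    # Deep copy via bulk insert.
    "INSERT INTO sales.fact_transactions_new SELECT * FROM sales.fact_transactions",
    # Swap the tables so reports keep using the original table name.
    "DROP TABLE sales.fact_transactions",
    "ALTER TABLE sales.fact_transactions_new RENAME TO fact_transactions",
]

# Runs the statements in order as a single transaction.
rsd.batch_execute_statement(
    ClusterIdentifier="analytics-cluster",   # placeholder cluster name
    Database="warehouse",
    DbUser="admin",
    Sqls=statements,
)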

