Amazon SAA-C03 Practice Test - Questions Answers, Page 82

A solutions architect needs to connect a company's corporate network to its VPC to allow on-premises access to its AWS resources. The solution must provide encryption of all traffic between the corporate network and the VPC at the network layer and the session layer. The solution also must provide security controls to prevent unrestricted access between AWS and the on-premises systems.

Which solution meets these requirements?

A.
Configure AWS Direct Connect to connect to the VPC. Configure the VPC route tables to allow and deny traffic between AWS and on premises as required.
B.
Create an IAM policy to allow access to the AWS Management Console only from a defined set of corporate IP addresses. Restrict user access based on job responsibility by using IAM policies and roles.
C.
Configure AWS Site-to-Site VPN to connect to the VPC. Configure route table entries to direct traffic from on premises to the VPC. Configure instance security groups and network ACLs to allow only required traffic from on premises.
D.
Configure AWS Transit Gateway to connect to the VPC. Configure route table entries to direct traffic from on premises to the VPC. Configure instance security groups and network ACLs to allow only required traffic from on premises.
Suggested answer: C

Explanation:

This solution meets the requirements of providing encryption at both the network and session layers while also allowing for controlled access between on-premises systems and AWS resources.

AWS Site-to-Site VPN: This service establishes a secure, encrypted connection between your on-premises network and the VPC over the internet or over AWS Direct Connect. The VPN encrypts data at the network layer with IPsec as it travels between the corporate network and AWS, and session layer protocols such as TLS can run inside the tunnel to satisfy the session layer requirement.

Routing and Security Controls: By configuring route table entries, you can ensure that only the traffic intended for AWS resources is directed to the VPC. Additionally, by setting up security groups and network ACLs, you can further restrict and control which traffic is allowed to communicate with the instances within your VPC. This approach provides the necessary security to prevent unrestricted access, aligning with the company's security policies.
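
To make the moving parts concrete, here is a minimal boto3 sketch of this setup. It assumes the VPC and security group already exist; the resource IDs, the on-premises device IP, and the corporate CIDR are all hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Register the on-premises VPN device (public IP and ASN are placeholders).
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.10",
    BgpAsn=65000,
)

# Create a virtual private gateway and attach it to the VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
ec2.attach_vpn_gateway(
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
    VpcId="vpc-0123456789abcdef0",  # hypothetical VPC ID
)

# Establish the IPsec Site-to-Site VPN connection.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
    Options={"StaticRoutesOnly": True},
)
print(vpn["VpnConnection"]["VpnConnectionId"])

# Security control: allow only required traffic (here, HTTPS) from the
# corporate network's CIDR into the instances' security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.10.0.0/16"}],  # on-premises CIDR
    }],
)
```

Route table entries (or route propagation) for the on-premises CIDR, plus matching network ACL rules, would complete the configuration.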

Why Not Other Options?:

Option A (AWS Direct Connect): While Direct Connect provides a private connection, it does not inherently encrypt traffic. Additional steps would be required for network layer encryption, and it does not address session layer encryption.

Option B (IAM policies for Console access): This option does not meet the requirement for network-level encryption and security between the corporate network and the VPC.

Option D (AWS Transit Gateway): Although Transit Gateway can help in managing multiple connections, it doesn't directly provide encryption at the network layer. You would still need to configure a VPN or use other methods for encryption.

AWS Reference:

AWS Site-to-Site VPN - Overview of AWS Site-to-Site VPN capabilities, including encryption.

Security Groups and Network ACLs - Information on configuring security groups and network ACLs to control traffic.

A company is migrating its databases to Amazon RDS for PostgreSQL. The company is migrating its applications to Amazon EC2 instances. The company wants to optimize costs for long-running workloads.

Which solution will meet this requirement MOST cost-effectively?

A.
Use On-Demand Instances for the Amazon RDS for PostgreSQL workloads. Purchase a 1-year Compute Savings Plan with the No Upfront option for the EC2 instances.
B.
Purchase Reserved Instances for a 1-year term with the No Upfront option for the Amazon RDS for PostgreSQL workloads. Purchase a 1-year EC2 Instance Savings Plan with the No Upfront option for the EC2 instances.
C.
Purchase Reserved Instances for a 1-year term with the Partial Upfront option for the Amazon RDS for PostgreSQL workloads. Purchase a 1-year EC2 Instance Savings Plan with the Partial Upfront option for the EC2 instances.
D.
Purchase Reserved Instances for a 3-year term with the All Upfront option for the Amazon RDS for PostgreSQL workloads. Purchase a 3-year EC2 Instance Savings Plan with the All Upfront option for the EC2 instances.
Suggested answer: D

Explanation:

For long-running, steady-state workloads, a 3-year commitment with the All Upfront payment option yields the deepest discount available for both RDS Reserved Instances and EC2 Instance Savings Plans. Shorter 1-year terms and the No Upfront or Partial Upfront payment options all trade flexibility for a smaller discount, so option D is the most cost-effective choice.

A company is implementing a new application on AWS. The company will run the application on multiple Amazon EC2 instances across multiple Availability Zones within multiple AWS Regions. The application will be available through the internet. Users will access the application from around the world.

The company wants to ensure that each user who accesses the application is sent to the EC2 instances that are closest to the user's location.

Which solution will meet these requirements?

A.
Implement an Amazon Route 53 geolocation routing policy. Use an internet-facing Application Load Balancer to distribute the traffic across all Availability Zones within the same Region.
B.
Implement an Amazon Route 53 geoproximity routing policy. Use an internet-facing Network Load Balancer to distribute the traffic across all Availability Zones within the same Region.
C.
Implement an Amazon Route 53 multivalue answer routing policy. Use an internet-facing Application Load Balancer to distribute the traffic across all Availability Zones within the same Region.
D.
Implement an Amazon Route 53 weighted routing policy. Use an internet-facing Network Load Balancer to distribute the traffic across all Availability Zones within the same Region.
Suggested answer: A

Explanation:

The requirement is to route users to the nearest AWS Region where the application is deployed. The best solution is to use Amazon Route 53 with a geolocation routing policy, which routes traffic based on the geographic location of the user making the request.

Geolocation Routing: This routing policy ensures that users are directed to the resources (in this case, EC2 instances) that are geographically closest to them, thereby reducing latency and improving the user experience.

Application Load Balancer (ALB): Within each Region, an internet-facing Application Load Balancer (ALB) is used to distribute incoming traffic across multiple EC2 instances in different Availability Zones. ALBs are designed to handle HTTP/HTTPS traffic and provide advanced features like content-based routing, SSL termination, and user authentication.
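
As an illustration, here is a minimal boto3 sketch of geolocation records that send European users to an ALB in eu-west-1 and everyone else to a default ALB. The hosted zone ID, record name, and ALB DNS names are hypothetical placeholders.

```python
import boto3

route53 = boto53 = boto3.client("route53")

# Create geolocation records that point users in each location to the
# ALB in the nearest Region. A default record (CountryCode "*") catches
# users whose location matches no other record.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "europe",
                    "GeoLocation": {"ContinentCode": "EU"},
                    "TTL": 60,
                    "ResourceRecords": [
                        {"Value": "my-alb-eu.eu-west-1.elb.amazonaws.com"}
                    ],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "default",
                    "GeoLocation": {"CountryCode": "*"},
                    "TTL": 60,
                    "ResourceRecords": [
                        {"Value": "my-alb-us.us-east-1.elb.amazonaws.com"}
                    ],
                },
            },
        ]
    },
)
```

In production an alias A record per Region would be the usual choice for ALBs; CNAMEs simply keep the sketch short.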

Why Not Other Options?:

Option B (Geoproximity + NLB): Geoproximity routing is similar but more complex as it requires fine-tuning the proximity settings. A Network Load Balancer (NLB) is better suited for TCP/UDP traffic rather than HTTP/HTTPS.

Option C (Multivalue Answer Routing + ALB): Multivalue answer routing does not direct traffic based on user location but rather returns multiple values and lets the client choose. This does not meet the requirement for geographically routing users.

Option D (Weighted Routing + NLB): Weighted routing splits traffic based on predefined weights and does not consider the user's geographic location. NLB is not ideal for this scenario due to its focus on lower-level protocols.

AWS Reference:

Amazon Route 53 Routing Policies - Detailed explanation of the various routing policies available in Route 53, including geolocation.

Elastic Load Balancing - Information on the different types of load balancers in AWS and when to use them.

A company recently migrated a monolithic application to an Amazon EC2 instance and Amazon RDS. The application has tightly coupled modules. The existing design of the application gives the application the ability to run on only a single EC2 instance.

The company has noticed high CPU utilization on the EC2 instance during peak usage times. The high CPU utilization corresponds to degraded performance on Amazon RDS for read requests. The company wants to reduce the high CPU utilization and improve read request performance.

Which solution will meet these requirements?

A.
Resize the EC2 instance to an EC2 instance type that has more CPU capacity. Configure an Auto Scaling group with a minimum and maximum size of 1. Configure an RDS read replica for read requests.
B.
Resize the EC2 instance to an EC2 instance type that has more CPU capacity. Configure an Auto Scaling group with a minimum and maximum size of 1. Add an RDS read replica and redirect all read/write traffic to the replica.
C.
Configure an Auto Scaling group with a minimum size of 1 and maximum size of 2. Resize the RDS DB instance to an instance type that has more CPU capacity.
D.
Resize the EC2 instance to an EC2 instance type that has more CPU capacity. Configure an Auto Scaling group with a minimum and maximum size of 1. Resize the RDS DB instance to an instance type that has more CPU capacity.
Suggested answer: A

Explanation:

To address the high CPU utilization on the EC2 instance and the degraded performance of Amazon RDS for read requests, the solution involves two key actions: resizing the EC2 instance and leveraging Amazon RDS read replicas.

Resizing the EC2 Instance: The first step is to resize the EC2 instance to a type with more CPU capacity to handle the higher computational demands during peak usage times. This helps to alleviate the immediate pressure on the CPU.

Auto Scaling Group with a Size of 1: Although the application can only run on a single EC2 instance due to its monolithic nature, creating an Auto Scaling group with a minimum and maximum size of 1 ensures that the instance is automatically restarted or replaced in case of failure, maintaining high availability.

RDS Read Replica: Configuring an RDS read replica allows the application to offload read requests to a separate instance, thus reducing the load on the primary RDS instance. This improves the performance of read operations, which were previously bottlenecked due to the high CPU usage on the EC2 instance.
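
A minimal boto3 sketch of creating the read replica; the instance identifiers are hypothetical placeholders.

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of the primary DB instance so the application
# can send read-only queries to a separate endpoint.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",
    SourceDBInstanceIdentifier="app-db-primary",
)

# Wait until the replica is available, then look up its endpoint. The
# application's read-only connection string points here, while writes
# continue to go to the primary instance.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier="app-db-replica-1")
endpoint = rds.describe_db_instances(
    DBInstanceIdentifier="app-db-replica-1"
)["DBInstances"][0]["Endpoint"]["Address"]
print(f"Read replica endpoint: {endpoint}")
```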

Why Not Other Options?:

Option B: Redirecting all traffic to the RDS read replica is not recommended because replicas are meant for read traffic only, not for write operations. This could lead to data consistency issues.

Option C: Increasing the RDS instance type capacity helps, but it doesn't address the high CPU usage on the EC2 instance, nor does it provide a solution for scaling reads.

Option D: While resizing both the EC2 and RDS instances increases their capacities, it doesn't address the specific need to offload read traffic from the primary RDS instance.

AWS Reference:

Amazon RDS Read Replicas - Explains how to create and use read replicas to offload read traffic from the primary database instance.

Resizing Your EC2 Instance - Guidance on resizing EC2 instances to meet workload demands.

A company is building an application on AWS that connects to an Amazon RDS database. The company wants to manage the application configuration and to securely store and retrieve credentials for the database and other services.

Which solution will meet these requirements with the LEAST administrative overhead?

A.
Use AWS AppConfig to store and manage the application configuration. Use AWS Secrets Manager to store and retrieve the credentials.
B.
Use AWS Lambda to store and manage the application configuration. Use AWS Systems Manager Parameter Store to store and retrieve the credentials.
C.
Use an encrypted application configuration file. Store the file in Amazon S3 for the application configuration. Create another S3 file to store and retrieve the credentials.
D.
Use AWS AppConfig to store and manage the application configuration. Use Amazon RDS to store and retrieve the credentials.
Suggested answer: A

Explanation:

This solution meets the company's requirements with minimal administrative overhead and ensures security and ease of management.

AWS AppConfig: AWS AppConfig is a service designed to manage application configuration in a secure and validated way. It allows you to deploy configurations safely and quickly without affecting the application's performance or availability.

AWS Secrets Manager: AWS Secrets Manager is specifically designed to manage, retrieve, and rotate credentials for databases and other services. It integrates seamlessly with AWS services like Amazon RDS, making it an ideal solution for securely storing and retrieving database credentials. Secrets Manager also provides automatic rotation of credentials, reducing the operational burden.
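
A minimal sketch of how an application might pull both pieces at runtime with boto3. The secret name and the AppConfig identifiers are hypothetical placeholders, and the configuration profile is assumed to contain JSON.

```python
import json
import boto3

# Retrieve the database credentials from Secrets Manager.
secrets = boto3.client("secretsmanager")
secret = secrets.get_secret_value(SecretId="prod/app/db-credentials")
creds = json.loads(secret["SecretString"])

# Pull the latest application configuration from AWS AppConfig.
appconfig = boto3.client("appconfigdata")
session = appconfig.start_configuration_session(
    ApplicationIdentifier="my-app",
    EnvironmentIdentifier="prod",
    ConfigurationProfileIdentifier="main",
)
config = appconfig.get_latest_configuration(
    ConfigurationToken=session["InitialConfigurationToken"]
)
settings = json.loads(config["Configuration"].read())

print(creds["username"], settings)
```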

Why Not Other Options?:

Option B (AWS Lambda + Parameter Store): While AWS Lambda can be used for managing configurations and AWS Systems Manager Parameter Store can store credentials, this approach involves more manual setup and does not offer the same level of integrated management and security as AppConfig and Secrets Manager.

Option C (Encrypted S3 Configuration File): Storing configuration and credentials in S3 files involves more manual management and security considerations, increasing the administrative overhead.

Option D (AppConfig + RDS for credentials): RDS is not designed for storing application credentials; it's better suited for managing database instances and their configurations.

AWS Reference:

AWS AppConfig - Describes how to use AWS AppConfig for managing application configurations.

AWS Secrets Manager - Provides details on securely storing and retrieving credentials using AWS Secrets Manager.

A company stores data in an on-premises Oracle relational database. The company needs to make the data available in Amazon Aurora PostgreSQL for analysis. The company uses an AWS Site-to-Site VPN connection to connect its on-premises network to AWS.

The company must capture the changes that occur to the source database during the migration to Aurora PostgreSQL.

Which solution will meet these requirements?

A.
Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to Aurora PostgreSQL schema. Use the AWS Database Migration Service (AWS DMS) full-load migration task to migrate the data.
B.
Use AWS DataSync to migrate the data to an Amazon S3 bucket. Import the S3 data to Aurora PostgreSQL by using the Aurora PostgreSQL aws_s3 extension.
C.
Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to Aurora PostgreSQL schema. Use AWS Database Migration Service (AWS DMS) to migrate the existing data and replicate the ongoing changes.
D.
Use an AWS Snowball device to migrate the data to an Amazon S3 bucket. Import the S3 data to Aurora PostgreSQL by using the Aurora PostgreSQL aws_s3 extension.
Suggested answer: C

Explanation:

For the migration of data from an on-premises Oracle database to Amazon Aurora PostgreSQL, this solution effectively handles schema conversion, data migration, and ongoing data replication.

AWS Schema Conversion Tool (SCT): SCT is used to convert the Oracle database schema to a format compatible with Aurora PostgreSQL. This tool automatically converts the database schema and code objects, like stored procedures, to the target database engine.

AWS Database Migration Service (DMS): DMS is employed to perform the data migration. It supports both full-load migrations (for initial data transfer) and continuous replication of ongoing changes (Change Data Capture, or CDC). This ensures that any updates to the Oracle database during the migration are captured and applied to the Aurora PostgreSQL database, minimizing downtime.
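
A minimal boto3 sketch of the DMS task. It assumes the replication instance and the Oracle and Aurora PostgreSQL endpoints were created beforehand; the ARNs are hypothetical placeholders.

```python
import json
import boto3

dms = boto3.client("dms")

# "full-load-and-cdc" migrates the existing data, then keeps capturing
# and applying ongoing changes from the source database.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-postgresql",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:RI",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```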

Why Not Other Options?:

Option A (SCT + DMS full-load only): This option does not capture ongoing changes, which is crucial for a live database migration to ensure data consistency.

Option B (DataSync + S3): AWS DataSync is more suited for file transfers rather than database migrations, and it doesn't support ongoing change replication.

Option D (Snowball + S3): Snowball is typically used for large-scale data transfers that don't require continuous synchronization, making it less suitable for this scenario where ongoing changes must be captured.

AWS Reference:

AWS Schema Conversion Tool - Guidance on using SCT for database schema conversions.

AWS Database Migration Service - Detailed documentation on using DMS for data migrations and ongoing replication.

A financial services company plans to launch a new application on AWS to handle sensitive financial transactions. The company will deploy the application on Amazon EC2 instances. The company will use Amazon RDS for MySQL as the database. The company's security policies mandate that data must be encrypted at rest and in transit.

Which solution will meet these requirements with the LEAST operational overhead?

A.
Configure encryption at rest for Amazon RDS for MySQL by using AWS KMS managed keys. Configure AWS Certificate Manager (ACM) SSL/TLS certificates for encryption in transit.
B.
Configure encryption at rest for Amazon RDS for MySQL by using AWS KMS managed keys. Configure IPsec tunnels for encryption in transit.
C.
Implement third-party application-level data encryption before storing data in Amazon RDS for MySQL. Configure AWS Certificate Manager (ACM) SSL/TLS certificates for encryption in transit.
D.
Configure encryption at rest for Amazon RDS for MySQL by using AWS KMS managed keys. Configure a VPN connection to enable private connectivity to encrypt data in transit.
Suggested answer: A

Explanation:

This solution provides encryption at rest and in transit with the least operational overhead while adhering to the company's security policies.

Encryption at Rest: Amazon RDS for MySQL can be configured to encrypt data at rest by using AWS Key Management Service (KMS) managed keys. This encryption is applied automatically to all data stored on disk, including backups, read replicas, and snapshots. This solution requires minimal operational overhead because AWS manages the encryption and key management process.

Encryption in Transit: AWS Certificate Manager (ACM) allows you to provision, manage, and deploy SSL/TLS certificates seamlessly. These certificates can be used to encrypt data in transit by configuring the MySQL instance to use SSL/TLS for connections. This setup ensures that data is encrypted between the application and the database, protecting it from interception during transmission.
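
A minimal boto3 sketch of provisioning the encrypted DB instance; the identifier and password are placeholders. Note that encryption at rest must be enabled at creation time, because an existing unencrypted instance cannot be encrypted in place.

```python
import boto3

rds = boto3.client("rds")

# StorageEncrypted=True turns on encryption at rest; the default
# aws/rds KMS key is used unless KmsKeyId names a customer managed key.
rds.create_db_instance(
    DBInstanceIdentifier="payments-db",      # hypothetical identifier
    DBInstanceClass="db.m6g.large",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me-placeholder",
    AllocatedStorage=100,
    StorageEncrypted=True,
)

# For encryption in transit, the application connects over TLS by
# trusting the RDS certificate bundle, e.g. with the MySQL client:
#   mysql --host=<endpoint> --ssl-ca=global-bundle.pem \
#         --ssl-mode=VERIFY_IDENTITY
```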

Why Not Other Options?:

Option B (IPsec tunnels): While IPsec tunnels encrypt data in transit, they are more complex to manage and require additional configuration and maintenance, leading to higher operational overhead.

Option C (Third-party application-level encryption): Implementing application-level encryption adds complexity, requires code changes, and increases operational overhead.

Option D (VPN for encryption): A VPN solution for encrypting data in transit is unnecessary and adds additional complexity without providing any benefit over SSL/TLS, which is simpler to implement and manage.

AWS Reference:

Amazon RDS Encryption - Information on how to configure and use encryption for Amazon RDS.

AWS Certificate Manager (ACM) - Details on using ACM to manage SSL/TLS certificates for securing data in transit.

A startup company is hosting a website for its customers on an Amazon EC2 instance. The website consists of a stateless Python application and a MySQL database. The website serves only a small amount of traffic. The company is concerned about the reliability of the instance and needs to migrate to a highly available architecture. The company cannot modify the application code.

Which combination of actions should a solutions architect take to achieve high availability for the website? (Select TWO.)

A.
Provision an internet gateway in each Availability Zone in use.
B.
Migrate the database to an Amazon RDS for MySQL Multi-AZ DB instance.
C.
Migrate the database to Amazon DynamoDB, and enable DynamoDB auto scaling.
D.
Use AWS DataSync to synchronize the database data across multiple EC2 instances.
E.
Create an Application Load Balancer to distribute traffic to an Auto Scaling group of EC2 instances that are distributed across two Availability Zones.
Suggested answer: B, E

Explanation:

To achieve high availability for the website, two key actions should be taken:

Amazon RDS for MySQL Multi-AZ: By migrating the database to an RDS for MySQL Multi-AZ deployment, the database becomes highly available. Multi-AZ provides automatic failover from the primary database to a standby replica in another Availability Zone, ensuring database availability even in the case of an AZ failure.

Application Load Balancer and Auto Scaling: Deploying an Application Load Balancer (ALB) in front of the EC2 instances ensures that traffic is evenly distributed across the instances. Configuring an Auto Scaling group to run EC2 instances across multiple Availability Zones ensures that the application remains available even if one instance or one AZ becomes unavailable. This setup enhances fault tolerance and improves reliability.
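
A minimal boto3 sketch of the two actions. The DB identifier, launch template, subnets, and target group ARN are hypothetical placeholders for resources created beforehand.

```python
import boto3

# Enable Multi-AZ on the existing RDS for MySQL instance; RDS
# provisions a synchronous standby in another Availability Zone.
rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="website-db",
    MultiAZ=True,
    ApplyImmediately=True,
)

# Create an Auto Scaling group that spreads instances across subnets
# in two AZs and registers them with the ALB's target group.
autoscaling = boto3.client("autoscaling")
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="website-asg",
    LaunchTemplate={"LaunchTemplateName": "website-lt", "Version": "$Latest"},
    MinSize=2,
    MaxSize=4,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",  # two AZs
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "targetgroup/website/abc123"
    ],
)
```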

Why Not Other Options?:

Option A (Internet Gateway per AZ): Internet Gateways are region-wide resources and do not need to be provisioned per Availability Zone. This option does not contribute to high availability.

Option C (DynamoDB + Auto Scaling): DynamoDB would require changes to the application code to switch from MySQL, which is not possible per the question's constraints.

Option D (DataSync): AWS DataSync is used for data transfer and synchronization, not for achieving high availability for a database.

AWS Reference:

Amazon RDS Multi-AZ Deployments - Explanation of how Multi-AZ deployments work in Amazon RDS.

Application Load Balancing - Details on how to configure and use ALB for distributing traffic across multiple instances.

A company runs a Node.js function on a server in its on-premises data center. The data center stores data in a PostgreSQL database. The company stores the credentials in a connection string in an environment variable on the server. The company wants to migrate its application to AWS and to replace the Node.js application server with AWS Lambda. The company also wants to migrate to Amazon RDS for PostgreSQL and to ensure that the database credentials are securely managed.

Which solution will meet these requirements with the LEAST operational overhead?

A.
Store the database credentials as a parameter in AWS Systems Manager Parameter Store. Configure Parameter Store to automatically rotate the secrets every 30 days. Update the Lambda function to retrieve the credentials from the parameter.
B.
Store the database credentials as a secret in AWS Secrets Manager. Configure Secrets Manager to automatically rotate the credentials every 30 days. Update the Lambda function to retrieve the credentials from the secret.
C.
Store the database credentials as an encrypted Lambda environment variable. Write a custom Lambda function to rotate the credentials. Schedule the Lambda function to run every 30 days.
D.
Store the database credentials as a key in AWS Key Management Service (AWS KMS). Configure automatic rotation for the key. Update the Lambda function to retrieve the credentials from the KMS key.
Suggested answer: B

Explanation:

AWS Secrets Manager is designed specifically to securely store and manage sensitive information such as database credentials. It integrates seamlessly with AWS services like Lambda and RDS, and it provides automatic credential rotation with minimal operational overhead.

AWS Secrets Manager: By storing the database credentials in Secrets Manager, you ensure that the credentials are securely stored, encrypted, and managed. Secrets Manager provides a built-in mechanism to automatically rotate credentials at regular intervals (e.g., every 30 days), which helps in maintaining security best practices without requiring additional manual intervention.

Lambda Integration: The Lambda function can be easily configured to retrieve the credentials from Secrets Manager using the AWS SDK, ensuring that the credentials are accessed securely at runtime.
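
A minimal sketch of the Lambda handler side. The secret name is a hypothetical placeholder, and the secret is assumed to use the key layout Secrets Manager generates for RDS credentials (username, password, host, port).

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

def handler(event, context):
    # Fetch the current (possibly just-rotated) credentials at runtime
    # instead of baking them into environment variables.
    secret = secrets.get_secret_value(SecretId="prod/app/rds-postgresql")
    creds = json.loads(secret["SecretString"])

    # creds["username"], creds["password"], creds["host"], creds["port"]
    # can now be passed to the PostgreSQL driver of choice.
    return {"statusCode": 200}
```

In practice the secret value is usually cached between invocations to avoid an API call on every request.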

Why Not Other Options?:

Option A (Parameter Store with Rotation): While Parameter Store can store parameters securely, Secrets Manager is more tailored for secrets management and automatic rotation, offering more features and less operational overhead.

Option C (Encrypted Lambda environment variable): Storing credentials directly in Lambda environment variables, even when encrypted, requires custom code to manage rotation, which increases operational complexity.

Option D (KMS with automatic rotation): KMS is for managing encryption keys, not for storing and rotating secrets like database credentials. This option would require more custom implementation to manage credentials securely.

AWS Reference:

AWS Secrets Manager - Detailed documentation on how to store, manage, and rotate secrets using AWS Secrets Manager.

Using Secrets Manager with AWS Lambda - Guidance on integrating Secrets Manager with Lambda for secure credential management.

A company has migrated several applications to AWS in the past 3 months. The company wants to know the breakdown of costs for each of these applications. The company wants to receive a regular report that includes this information.

Which solution will meet these requirements MOST cost-effectively?

A.
Use AWS Budgets to download data for the past 3 months into a .csv file. Look up the desired information.
B.
Load AWS Cost and Usage Reports into an Amazon RDS DB instance. Run SQL queries to get the desired information.
C.
Tag all the AWS resources with a key for cost and a value of the application's name. Activate cost allocation tags. Use Cost Explorer to get the desired information.
D.
Tag all the AWS resources with a key for cost and a value of the application's name. Use the AWS Billing and Cost Management console to download bills for the past 3 months. Look up the desired information.
Suggested answer: C

Explanation:

This solution is the most cost-effective and efficient way to break down costs per application.

Tagging Resources: By tagging all AWS resources with a specific key (e.g., 'cost') and a value representing the application's name, you can easily identify and categorize costs associated with each application. This tagging strategy allows for granular tracking of costs within AWS.

Activating Cost Allocation Tags: Once tags are applied to resources, you need to activate cost allocation tags in the AWS Billing and Cost Management console. This ensures that the costs associated with each tag are included in your billing reports and can be used for cost analysis.

AWS Cost Explorer: Cost Explorer is a powerful tool that allows you to visualize, understand, and manage your AWS costs and usage over time. You can filter and group your cost data by the tags you've applied to resources, enabling you to easily see the cost breakdown for each application. Cost Explorer also supports generating regular reports, which can be scheduled and emailed to stakeholders.
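
Once the tags are active, the same breakdown is available programmatically through the Cost Explorer API. A minimal boto3 sketch, assuming the tag key 'cost' from the answer and example dates:

```python
import boto3

ce = boto3.client("ce")

# Break down three months of cost by the application tag.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-04-01"},  # example dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "cost"}],
)

for period in response["ResultsByTime"]:
    for group in period["Groups"]:
        app = group["Keys"][0]  # formatted as "cost$<application name>"
        amount = group["Metrics"]["UnblendedCost"]["Amount"]
        print(period["TimePeriod"]["Start"], app, amount)
```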

Why Not Other Options?:

Option A (AWS Budgets): AWS Budgets is more focused on setting cost and usage thresholds and monitoring them, rather than providing detailed cost breakdowns by application.

Option B (Load Cost and Usage Reports into RDS): This approach is less cost-effective and involves more operational overhead, as it requires setting up and maintaining an RDS instance and running SQL queries.

Option D (AWS Billing and Cost Management Console): While you can download bills, this method is more manual and less dynamic compared to using Cost Explorer with activated tags.

AWS Reference:

AWS Tagging Strategies - Overview of how to use tagging to organize and track AWS resources.

AWS Cost Explorer - Details on how to use Cost Explorer to analyze costs.
