Amazon SAA-C03 Practice Test - Questions Answers, Page 77

A company uses Amazon RDS with default backup settings for its database tier. The company needs to make a daily backup of the database to meet regulatory requirements. The company must retain the backups for 30 days.

Which solution will meet these requirements with the LEAST operational overhead?

A.
Write an AWS Lambda function to create an RDS snapshot every day.
B.
Modify the RDS database to have a retention period of 30 days for automated backups.
C.
Use AWS Systems Manager Maintenance Windows to modify the RDS backup retention period.
D.
Create a manual snapshot every day by using the AWS CLI. Modify the RDS backup retention period.
Suggested answer: B

Explanation:

Current Backup Settings: By default, Amazon RDS creates automated backups with a retention period of 7 days.

Regulatory Requirements: The requirement is to retain daily backups for 30 days.

Adjusting Retention Period: You can modify the RDS instance settings to increase the automated backup retention period to 30 days.

Operational Overhead: This solution is the simplest as it leverages existing automated backups and requires minimal intervention.

Implementation: The change can be made via the AWS Management Console, AWS CLI, or AWS SDKs.
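For illustration, a minimal boto3 sketch of the CLI/SDK route; the instance identifier is a hypothetical placeholder:

import boto3

# Raise the automated backup retention period to 30 days.
rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="prod-db",  # hypothetical identifier
    BackupRetentionPeriod=30,        # keep daily automated backups for 30 days
    ApplyImmediately=True,           # apply now instead of the next maintenance window
)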

Reference

Amazon RDS Backups: Amazon RDS Documentation

A company is migrating a document management application to AWS. The application runs on Linux servers. The company will migrate the application to Amazon EC2 instances in an Auto Scaling group. The company stores 7 TiB of documents in a shared storage file system. An external relational database tracks the documents.

Documents are stored once and can be retrieved multiple times for reference at any time. The company cannot modify the application during the migration. The storage solution must be highly available and must support scaling over time.

Which solution will meet these requirements MOST cost-effectively?

A.
Deploy an EC2 instance with enhanced networking as a shared NFS storage system. Export the NFS share. Mount the NFS share on the EC2 instances in the Auto Scaling group.
B.
Create an Amazon S3 bucket that uses the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Mount the S3 bucket on the EC2 instances in the Auto Scaling group.
C.
Deploy an SFTP server endpoint by using AWS Transfer for SFTP and an Amazon S3 bucket. Configure the EC2 instances in the Auto Scaling group to connect to the SFTP server.
D.
Create an Amazon Elastic File System (Amazon EFS) file system with mount points in multiple Availability Zones. Use the EFS Standard-Infrequent Access (Standard-IA) storage class. Mount the NFS share on the EC2 instances in the Auto Scaling group.
Suggested answer: D

Explanation:

Requirement Analysis: The company needs highly available, scalable storage for a document management application without modifying the application during migration.

EFS Overview: Amazon EFS provides scalable file storage that can be mounted concurrently on multiple EC2 instances across different Availability Zones.

EFS Standard-IA: Using the Standard-IA storage class helps reduce costs for infrequently accessed data while maintaining high availability and scalability.

Implementation:

Create an EFS file system.

Configure mount targets in multiple Availability Zones to ensure high availability.

Mount the EFS file system on EC2 instances in the Auto Scaling group.
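A minimal boto3 sketch of these steps; the creation token, subnet IDs, and security group ID are hypothetical placeholders:

import boto3

efs = boto3.client("efs")

# Create the file system (the token makes the call idempotent).
fs = efs.create_file_system(CreationToken="docs-efs")
fs_id = fs["FileSystemId"]

# Transition cold files to the lower-cost Standard-IA class automatically.
efs.put_lifecycle_configuration(
    FileSystemId=fs_id,
    LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
)

# One mount target per Availability Zone for high availability.
# (In practice, poll describe_file_systems until the file system
# reaches the "available" state before creating mount targets.)
for subnet_id in ["subnet-aaa", "subnet-bbb"]:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-efs"],
    )

Each EC2 instance in the Auto Scaling group then mounts the file system over NFS, for example with the amazon-efs-utils mount helper.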

Conclusion: This solution meets the high availability, scalability, and cost-effectiveness requirements without needing application modifications.

Reference

Amazon EFS: Amazon EFS Documentation

EFS Storage Classes: Amazon EFS Storage Classes

A company uses an Amazon Aurora PostgreSQL provisioned cluster with its application. The application's peak traffic occurs several times a day for periods of 30 minutes to several hours.

The database capacity is provisioned to handle peak traffic from the application, but the database has wasted capacity during non-peak hours. The company wants to reduce the database costs.

Which solution will meet these requirements with the LEAST operational effort?

A.
Set up an Amazon CloudWatch alarm to monitor database utilization. Scale up or scale down the database capacity based on the amount of traffic.
B.
Migrate the database to Amazon EC2 instances in an Auto Scaling group. Increase or decrease the number of instances based on the amount of traffic.
C.
Migrate the database to an Amazon Aurora Serverless DB cluster to scale up or scale down the capacity based on the amount of traffic.
D.
Schedule an AWS Lambda function to provision the required database capacity at the start of each day. Schedule another Lambda function to reduce the capacity at the end of each day.
Suggested answer: C

Explanation:

Requirement Analysis: The database experiences peak traffic multiple times a day but has wasted capacity during non-peak hours. The goal is to reduce costs with minimal operational effort.

Aurora Serverless Overview: Aurora Serverless automatically adjusts database capacity based on current demand, scaling up during peak times and scaling down during non-peak times.

Cost Efficiency: Aurora Serverless charges only for the capacity used, which is more cost-effective than provisioning for peak traffic.

Operational Efficiency: Aurora Serverless eliminates the need for manual scaling or scheduling Lambda functions for capacity management.

Implementation: Migrate the database from the provisioned Aurora PostgreSQL cluster to an Aurora Serverless cluster.
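One possible migration path, sketched with boto3 under the assumption of Aurora Serverless v2; the cluster and instance identifiers and the capacity range are hypothetical:

import boto3

rds = boto3.client("rds")

# Add a Serverless v2 scaling range to the existing cluster...
rds.modify_db_cluster(
    DBClusterIdentifier="app-aurora-cluster",
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 16},
    ApplyImmediately=True,
)

# ...then switch the writer instance to the serverless instance class.
rds.modify_db_instance(
    DBInstanceIdentifier="app-aurora-instance-1",
    DBInstanceClass="db.serverless",
    ApplyImmediately=True,
)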

Reference

Amazon Aurora Serverless: Aurora Serverless Documentation

A solutions architect is designing an asynchronous application to process credit card data validation requests for a bank. The application must be secure and be able to process each request at least once.

Which solution will meet these requirements MOST cost-effectively?

A.
Use AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS) standard queues as the event source. Use AWS Key Management Service (SSE-KMS) for encryption. Add the kms:Decrypt permission for the Lambda execution role.
B.
Use AWS Lambda event source mapping. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues as the event source. Use SQS managed encryption keys (SSE-SQS) for encryption. Add the encryption key invocation permission for the Lambda function.
C.
Use the AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS) FIFO queues as the event source. Use AWS KMS keys (SSE-KMS). Add the kms:Decrypt permission for the Lambda execution role.
D.
Use the AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS) standard queues as the event source. Use AWS KMS keys (SSE-KMS) for encryption. Add the encryption key invocation permission for the Lambda function.
Suggested answer: B

Explanation:

Requirement Analysis: The application must process each credit card data validation request at least once, securely and cost-effectively.

SQS FIFO Queues: Ensure that each message is processed exactly once and in the order it was sent, which more than satisfies the at-least-once requirement.

AWS Lambda: Using Lambda for event-driven processing ensures scalability and cost-efficiency.

SSE-SQS: Provides encryption at rest using SQS-managed keys at no additional charge, avoiding the AWS KMS API costs that SSE-KMS incurs.

Implementation:

Set up SQS FIFO queues as the event source for Lambda.

Enable SSE-SQS for encryption.

Ensure the Lambda execution role has the necessary permissions to use the encryption keys.
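A minimal boto3 sketch of this setup; the queue name, function name, and batch size are hypothetical placeholders:

import boto3

sqs = boto3.client("sqs")
lam = boto3.client("lambda")

# FIFO queue with SQS-managed encryption (SSE-SQS) enabled.
queue = sqs.create_queue(
    QueueName="card-validation.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
        "SqsManagedSseEnabled": "true",
    },
)
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue["QueueUrl"], AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Map the queue to the Lambda function as an event source.
lam.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="validate-card",
    BatchSize=10,
)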

Conclusion: This combination meets the security and at-least-once processing requirements and is the most cost-effective of the options.

Reference

Amazon SQS: Amazon SQS Documentation

AWS Lambda with SQS: Using AWS Lambda with Amazon SQS

A company runs a self-managed Microsoft SQL Server on Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS). Daily snapshots are taken of the EBS volumes.

Recently, all the company's EBS snapshots were accidentally deleted while running a snapshot cleaning script that deletes all expired EBS snapshots. A solutions architect needs to update the architecture to prevent data loss without retaining EBS snapshots indefinitely.

Which solution will meet these requirements with the LEAST development effort?

A.
Change the IAM policy of the user to deny EBS snapshot deletion.
B.
Copy the EBS snapshots to another AWS Region after completing the snapshots daily.
C.
Create a 7-day EBS snapshot retention rule in Recycle Bin and apply the rule for all snapshots.
D.
Copy EBS snapshots to Amazon S3 Standard-Infrequent Access (S3 Standard-IA).
Suggested answer: C

Explanation:

Requirement Analysis: The goal is to prevent accidental deletion of EBS snapshots while avoiding indefinite retention.

Recycle Bin for EBS Snapshots: AWS Recycle Bin allows for retention rules that prevent immediate deletion of snapshots, providing a safety net against accidental deletions.

Retention Rule: A 7-day retention rule ensures snapshots are not permanently deleted immediately, giving time to recover from accidental deletions.

Implementation:

Enable Recycle Bin in your AWS account.

Create a retention rule that specifies a 7-day period for EBS snapshots.

Apply this rule to all EBS snapshots.
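A minimal boto3 sketch of such a rule (Recycle Bin is the "rbin" service in the SDK; the description is a hypothetical placeholder):

import boto3

rbin = boto3.client("rbin")

# Region-wide rule: deleted EBS snapshots are retained for 7 days
# and can be restored from Recycle Bin during that window.
rbin.create_rule(
    ResourceType="EBS_SNAPSHOT",
    RetentionPeriod={
        "RetentionPeriodValue": 7,
        "RetentionPeriodUnit": "DAYS",
    },
    Description="Safety net against accidental snapshot deletion",
    # Omitting ResourceTags makes the rule apply to all snapshots.
)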

Conclusion: This solution provides an automated way to prevent data loss from accidental deletions with minimal development effort.

Reference

AWS Recycle Bin: AWS Recycle Bin Documentation

A video game company is deploying a new gaming application to its global users. The company requires a solution that will provide near real-time reviews and rankings of the players.

A solutions architect must design a solution to provide fast access to the data. The solution must also ensure the data persists on disks in the event that the company restarts the application.

Which solution will meet these requirements with the LEAST operational overhead?

A.
Configure an Amazon CloudFront distribution with an Amazon S3 bucket as the origin. Store the player data in the S3 bucket.
B.
Create Amazon EC2 instances in multiple AWS Regions. Store the player data on the EC2 instances. Configure Amazon Route 53 with geolocation records to direct users to the closest EC2 instance.
C.
Deploy an Amazon ElastiCache for Redis cluster. Store the player data in the ElastiCache cluster.
D.
Deploy an Amazon ElastiCache for Memcached cluster. Store the player data in the ElastiCache cluster.
Suggested answer: C

Explanation:

Requirement Analysis: The application needs near real-time access to data, persistence, and minimal operational overhead.

ElastiCache for Redis: Provides in-memory data storage with persistence, supporting fast access and durability.

Operational Overhead: Managed service reduces the burden of setup, maintenance, and scaling.

Implementation:

Deploy an ElastiCache for Redis cluster.

Configure Redis to persist data to disk by enabling automatic backups (RDB snapshots); an append-only file (AOF) is also available in some ElastiCache configurations.
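A minimal boto3 sketch of this deployment; the identifiers, node type, and retention value are hypothetical placeholders:

import boto3

ec = boto3.client("elasticache")

# Redis replication group with automatic failover and daily RDB
# snapshots retained for 7 days, so data can be restored after restarts.
ec.create_replication_group(
    ReplicationGroupId="player-rankings",
    ReplicationGroupDescription="Near real-time player reviews and rankings",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumCacheClusters=2,              # primary plus one replica
    AutomaticFailoverEnabled=True,
    SnapshotRetentionLimit=7,        # daily snapshots kept for 7 days
)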

Conclusion: ElastiCache for Redis meets the requirements for fast access, data persistence, and low operational overhead.

Reference

Amazon ElastiCache: ElastiCache for Redis Documentation

A company hosts its core network services, including directory services and DNS, in its on-premises data center. The data center is connected to the AWS Cloud using AWS Direct Connect (DX). Additional AWS accounts are planned that will require quick, cost-effective, and consistent access to these network services.

What should a solutions architect implement to meet these requirements with the LEAST amount of operational overhead?

A.
Create a DX connection in each new account. Route the network traffic to the on-premises servers.
B.
Configure VPC endpoints in the DX VPC for all required services. Route the network traffic to the on-premises servers.
C.
Create a VPN connection between each new account and the DX VPC. Route the network traffic to the on-premises servers.
D.
Configure AWS Transit Gateway between the accounts. Assign DX to the transit gateway and route network traffic to the on-premises servers.
Suggested answer: D

Explanation:

Requirement Analysis: Need quick, cost-effective, and consistent access to on-premises network services from multiple AWS accounts.

AWS Transit Gateway: Centralizes and simplifies network management by connecting VPCs and on-premises networks.

Direct Connect Integration: Assigning DX to the transit gateway ensures consistent and high-performance connectivity.

Operational Overhead: Minimal because Transit Gateway simplifies routing and management.

Implementation:

Set up AWS Transit Gateway.

Connect new AWS accounts to the Transit Gateway.

Route traffic through Transit Gateway to on-premises servers via Direct Connect.
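A minimal boto3 sketch of the hub setup; the VPC and subnet IDs are hypothetical, and sharing the transit gateway with the other accounts (for example, through AWS RAM) is omitted:

import boto3

ec2 = boto3.client("ec2")

# Create the transit gateway that acts as the central hub.
tgw = ec2.create_transit_gateway(
    Description="Hub for shared on-premises network services",
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach a VPC once the transit gateway reaches the "available" state.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123",
    SubnetIds=["subnet-aaa", "subnet-bbb"],
)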

Conclusion: This solution provides a scalable, cost-effective, and low-overhead method to meet connectivity requirements.

Reference

AWS Transit Gateway: AWS Transit Gateway Documentation

A company needs to design a hybrid network architecture. The company's workloads are currently stored in the AWS Cloud and in on-premises data centers. The workloads require single-digit latencies to communicate. The company uses an AWS Transit Gateway to connect multiple VPCs.

Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)

A.
Establish an AWS Site-to-Site VPN connection to each VPC.
B.
Associate an AWS Direct Connect gateway with the transit gateway that is attached to the VPCs.
C.
Establish an AWS Site-to-Site VPN connection to an AWS Direct Connect gateway.
D.
Establish an AWS Direct Connect connection. Create a transit virtual interface (VIF) to a Direct Connect gateway.
E.
Associate AWS Site-to-Site VPN connections with the transit gateway that is attached to the VPCs
Suggested answer: B, D

Explanation:

AWS Direct Connect: Provides a dedicated network connection from your on-premises data center to AWS, ensuring low latency and consistent network performance.

Direct Connect Gateway Association:

Direct Connect Gateway: Acts as a global network transit hub to connect VPCs across different AWS regions.

Association with Transit Gateway: Enables communication between on-premises data centers and multiple VPCs connected to the transit gateway.

Transit Virtual Interface (VIF):

Create Transit VIF: To connect Direct Connect with a transit gateway.

Setup Steps:

Establish a Direct Connect connection.

Create a transit VIF to the Direct Connect gateway.

Associate the Direct Connect gateway with the transit gateway attached to the VPCs.
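A minimal boto3 sketch of these steps, assuming an existing Direct Connect connection; the connection ID, transit gateway ID, VLAN, and ASN are hypothetical placeholders:

import boto3

dx = boto3.client("directconnect")

# Direct Connect gateway that will front the transit gateway.
dxgw = dx.create_direct_connect_gateway(
    directConnectGatewayName="hybrid-dxgw",
)
dxgw_id = dxgw["directConnectGateway"]["directConnectGatewayId"]

# Transit VIF over the existing Direct Connect connection.
dx.create_transit_virtual_interface(
    connectionId="dxcon-0123",
    newTransitVirtualInterface={
        "virtualInterfaceName": "transit-vif",
        "vlan": 100,
        "asn": 65000,
        "directConnectGatewayId": dxgw_id,
    },
)

# Associate the Direct Connect gateway with the transit gateway.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dxgw_id,
    gatewayId="tgw-0123",
)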

Cost Efficiency: This combination avoids the recurring costs and potential performance variability of VPN connections, providing a robust, low-latency hybrid network solution.

Reference

AWS Direct Connect

Transit Gateway and Direct Connect Gateway

A company is running a highly sensitive application on Amazon EC2 backed by an Amazon RDS database. Compliance regulations mandate that all personally identifiable information (PII) be encrypted at rest.

Which solution should a solutions architect recommend to meet this requirement with the LEAST amount of changes to the infrastructure?

A.
Deploy AWS Certificate Manager to generate certificates. Use the certificates to encrypt the database volume.
B.
Deploy AWS CloudHSM, generate encryption keys, and use the keys to encrypt database volumes.
C.
Configure SSL encryption using AWS Key Management Service (AWS KMS) keys to encrypt database volumes.
D.
Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes.
Suggested answer: D

Explanation:

EBS Encryption:

Default EBS Encryption: Can be enabled for new EBS volumes.

Use of AWS KMS: Specify AWS KMS keys to handle encryption and decryption of data transparently.

Amazon RDS Encryption:

RDS Encryption: Encrypts the underlying storage for RDS instances using AWS KMS.

Configuration: Enable encryption when creating the RDS instance. An existing unencrypted instance cannot be encrypted in place; instead, copy a snapshot with encryption enabled and restore from the encrypted copy.
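A minimal boto3 sketch of both pieces; the snapshot and instance identifiers are hypothetical placeholders, and the snapshot copy must complete before the restore:

import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# Encrypt all new EBS volumes in this Region by default.
ec2.enable_ebs_encryption_by_default()

# Existing unencrypted RDS instance: copy a snapshot with a KMS key,
# then restore a new, encrypted instance from the copy.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="app-db-snap",
    TargetDBSnapshotIdentifier="app-db-snap-encrypted",
    KmsKeyId="alias/aws/rds",
)
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="app-db-encrypted",
    DBSnapshotIdentifier="app-db-snap-encrypted",
)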

Least Amount of Changes:

Both EBS and RDS support seamless encryption with AWS KMS, requiring minimal changes to the existing infrastructure.

Enables compliance with regulatory requirements without modifying the application.

Operational Efficiency: Using AWS KMS for both EBS and RDS ensures a consistent, managed approach to encryption, simplifying key management and enhancing security.

Reference

Amazon EBS Encryption

Amazon RDS Encryption

AWS Key Management Service

A global ecommerce company runs its critical workloads on AWS. The workloads use an Amazon RDS for PostgreSQL DB instance that is configured for a Multi-AZ deployment.

Customers have reported application timeouts when the company undergoes database failovers. The company needs a resilient solution to reduce failover time.

Which solution will meet these requirements?

A.
Create an Amazon RDS Proxy. Assign the proxy to the DB instance.
B.
Create a read replica for the DB instance. Move the read traffic to the read replica.
C.
Enable Performance Insights. Monitor the CPU load to identify the timeouts.
D.
Take regular automatic snapshots. Copy the automatic snapshots to multiple AWS Regions.
Suggested answer: A

Explanation:

Amazon RDS Proxy: RDS Proxy is a fully managed, highly available database proxy that makes applications more resilient to database failures by pooling and sharing connections, and it can automatically handle database failovers.

Reduced Failover Time: By using RDS Proxy, the connection management between the application and the database is improved, reducing failover times significantly. RDS Proxy maintains connections in a connection pool and reduces the time required to re-establish connections during a failover.

Configuration:

Create an RDS Proxy instance.

Configure the proxy to connect to the RDS for PostgreSQL DB instance.

Modify the application configuration to use the RDS Proxy endpoint instead of the direct database endpoint.
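A minimal boto3 sketch of these steps; the proxy name, secret ARN, IAM role, subnets, and instance identifier are hypothetical placeholders:

import boto3

rds = boto3.client("rds")

# Proxy in front of the PostgreSQL instance; credentials come from
# an AWS Secrets Manager secret.
rds.create_db_proxy(
    DBProxyName="app-proxy",
    EngineFamily="POSTGRESQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:app-db-creds",
    }],
    RoleArn="arn:aws:iam::123456789012:role/rds-proxy-role",
    VpcSubnetIds=["subnet-aaa", "subnet-bbb"],
)

# Register the DB instance as the proxy's target, then point the
# application at the proxy endpoint instead of the database endpoint.
rds.register_db_proxy_targets(
    DBProxyName="app-proxy",
    DBInstanceIdentifiers=["app-postgres"],
)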

Operational Benefits: This solution provides high availability and reduces application timeouts during failovers with minimal changes to the application code.

Reference

Amazon RDS Proxy

Setting Up RDS Proxy
