Amazon SAA-C03 Practice Test - Questions Answers, Page 74
A company recently migrated its web application to the AWS Cloud. The company uses an Amazon EC2 instance to run multiple processes to host the application. The processes include an Apache web server that serves static content. The Apache web server makes requests to a PHP application that uses a local Redis server for user sessions.

The company wants to redesign the architecture to be highly available and to use AWS managed solutions. Which solution will meet these requirements?

A. Use AWS Elastic Beanstalk to host the static content and the PHP application. Configure Elastic Beanstalk to deploy its EC2 instance into a public subnet. Assign a public IP address.

B. Use AWS Lambda to host the static content and the PHP application. Use an Amazon API Gateway REST API to proxy requests to the Lambda function. Set the API Gateway CORS configuration to respond to the domain name. Configure Amazon ElastiCache for Redis to handle session information.

C. Keep the backend code on the EC2 instance. Create an Amazon ElastiCache for Redis cluster that has Multi-AZ enabled. Configure the ElastiCache for Redis cluster in cluster mode. Copy the frontend resources to Amazon S3. Configure the backend code to reference the EC2 instance.

D. Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is configured to host the static content. Configure an Application Load Balancer that targets an Amazon Elastic Container Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application. Configure the PHP application to use an Amazon ElastiCache for Redis cluster that runs in multiple Availability Zones.
Suggested answer: D

Explanation:

Understanding the Requirement: The company needs to redesign the architecture to be highly available and use AWS managed solutions for hosting a web application with static content, PHP application, and Redis for user sessions.

Analysis of Options:

AWS Elastic Beanstalk: Simplifies deployment, but a single EC2 instance in a public subnet is not highly available and exposes the application directly to the internet.

AWS Lambda and API Gateway: Not well suited to hosting a stateful PHP application and serving static content; this adds complexity without significant benefit.

EC2 instance with ElastiCache and S3: Provides some high availability for sessions and static content, but the backend still runs on a single EC2 instance that must be managed, which increases operational overhead.

CloudFront with S3, ALB, ECS with Fargate, and ElastiCache: This solution leverages fully managed AWS services for each component, ensuring high availability and scalability.

Best Solution:

CloudFront with S3, ALB, ECS with Fargate, and ElastiCache: This combination of services meets the requirements for a highly available and managed solution, ensuring optimal performance and minimal operational overhead.

Amazon CloudFront

Amazon S3

Amazon ECS with Fargate

Amazon ElastiCache for Redis
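
To make the managed session store in option D concrete, here is a minimal boto3 sketch that creates a Multi-AZ ElastiCache for Redis replication group; the subnet group name, security group ID, and node type are hypothetical placeholders rather than values from the question.

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# Multi-AZ Redis replication group for PHP session storage (option D).
# "app-cache-subnets" and the security group ID are hypothetical placeholders.
response = elasticache.create_replication_group(
    ReplicationGroupId="php-sessions",
    ReplicationGroupDescription="Session store for the PHP application",
    Engine="redis",
    CacheNodeType="cache.t3.micro",
    NumCacheClusters=2,                # one primary plus one replica in another AZ
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
    CacheSubnetGroupName="app-cache-subnets",
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
print(response["ReplicationGroup"]["Status"])
```

The PHP application would then point its session handler at the replication group's primary endpoint instead of the local Redis server.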

A company has an application that customers use to upload images to an Amazon S3 bucket. Each night, the company launches an Amazon EC2 Spot Fleet that processes all the images that the company received that day. The processing for each image takes 2 minutes and requires 512 MB of memory.

A solutions architect needs to change the application to process the images when the images are uploaded.

Which change will meet these requirements MOST cost-effectively?

A. Use S3 Event Notifications to write a message with image details to an Amazon Simple Queue Service (Amazon SQS) queue. Configure an AWS Lambda function to read the messages from the queue and to process the images.

B. Use S3 Event Notifications to write a message with image details to an Amazon Simple Queue Service (Amazon SQS) queue. Configure an EC2 Reserved Instance to read the messages from the queue and to process the images.

C. Use S3 Event Notifications to publish a message with image details to an Amazon Simple Notification Service (Amazon SNS) topic. Configure a container instance in Amazon Elastic Container Service (Amazon ECS) to subscribe to the topic and to process the images.

D. Use S3 Event Notifications to publish a message with image details to an Amazon Simple Notification Service (Amazon SNS) topic. Configure a subscriber to the topic to process the images.
Suggested answer: A

Explanation:

Understanding the Requirement: The company needs to process images as they are uploaded to S3 in a cost-effective manner, currently using an EC2 Spot Fleet for nightly processing.

Analysis of Options:

S3 Event Notifications to SQS and Lambda: This setup allows for event-driven processing with Lambda, which scales automatically based on the number of messages in the queue. It is cost-effective as Lambda charges are based on the compute time used.

S3 Event Notifications to SQS and EC2 Reserved Instance: A Reserved Instance runs and is billed continuously, even when there are no images to process, and managing the instance adds operational overhead, making this option less cost-effective.

S3 Event Notifications to SNS and ECS: More complex and potentially less cost-effective compared to using Lambda for simple processing tasks.

S3 Event Notifications to SNS: Requires additional configuration and management to process messages.

Best Solution:

S3 Event Notifications to SQS and Lambda: This option is the most cost-effective and scalable, leveraging AWS managed services with minimal operational overhead.

Amazon S3 Event Notifications

Amazon SQS

AWS Lambda
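
As a rough illustration of the wiring in option A, the sketch below configures the S3 event notification to an existing SQS queue and shows the shape of a Lambda handler for SQS-delivered S3 events; the bucket name, queue ARN, and process_image helper are hypothetical, and the queue policy that allows S3 to send messages is assumed to be in place.

```python
import json
import boto3

s3 = boto3.client("s3")

# Send a notification to the SQS queue for every object created in the bucket.
# "image-uploads" and the queue ARN are hypothetical placeholders.
s3.put_bucket_notification_configuration(
    Bucket="image-uploads",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:111122223333:image-queue",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)

# Lambda handler invoked through an SQS event source mapping.
def handler(event, context):
    for record in event["Records"]:
        s3_event = json.loads(record["body"])          # S3 notification payload
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            # process_image(bucket, key)  # hypothetical helper; 2 min / 512 MB
            # fits comfortably within Lambda's execution and memory limits
```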

A company's software development team needs an Amazon RDS Multi-AZ cluster. The RDS cluster will serve as a backend for a desktop client that is deployed on premises. The desktop client requires direct connectivity to the RDS cluster.

The company must give the development team the ability to connect to the cluster by using the client when the team is in the office.

Which solution provides the required connectivity MOST securely?

A. Create a VPC and two public subnets. Create the RDS cluster in the public subnets. Use AWS Site-to-Site VPN with a customer gateway in the company's office.

B. Create a VPC and two private subnets. Create the RDS cluster in the private subnets. Use AWS Site-to-Site VPN with a customer gateway in the company's office.

C. Create a VPC and two private subnets. Create the RDS cluster in the private subnets. Use RDS security groups to allow the company's office IP ranges to access the cluster.

D. Create a VPC and two public subnets. Create the RDS cluster in the public subnets. Create a cluster user for each developer. Use RDS security groups to allow the users to access the cluster.
Suggested answer: B

Explanation:

Requirement Analysis: Need secure, direct connectivity from an on-premises client to an RDS cluster, accessible only when in the office.

VPC with Private Subnets: Ensures the RDS cluster is not publicly accessible, enhancing security.

Site-to-Site VPN: Provides a secure, encrypted connection between the on-premises office and the AWS VPC.

Implementation:

Create a VPC with two private subnets.

Launch the RDS cluster in the private subnets.

Set up a Site-to-Site VPN connection with a customer gateway in the office.

Conclusion: This setup ensures secure and direct connectivity with minimal exposure, meeting the requirement for secure access from the office.

Reference

AWS Site-to-Site VPN: AWS Site-to-Site VPN Documentation

Amazon RDS: Amazon RDS Documentation
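
A minimal boto3 sketch of the Site-to-Site VPN piece of option B; the office public IP, BGP ASN, VPC ID, and on-premises CIDR are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2")

# Customer gateway represents the on-premises VPN device (placeholder IP/ASN).
cgw = ec2.create_customer_gateway(
    Type="ipsec.1", PublicIp="203.0.113.10", BgpAsn=65000
)["CustomerGateway"]

# Virtual private gateway attached to the VPC that holds the private subnets.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId="vpc-0abc1234")

# Site-to-Site VPN connection between the two gateways, using static routing.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Options={"StaticRoutesOnly": True},
)["VpnConnection"]

# Route the office network (placeholder CIDR) over the VPN connection.
ec2.create_vpn_connection_route(
    VpnConnectionId=vpn["VpnConnectionId"], DestinationCidrBlock="10.10.0.0/16"
)
```

The private-subnet route tables (or route propagation on the virtual private gateway) would still need routes back to the office CIDR, and the RDS security group would allow the database port only from that range.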

A social media company wants to store its database of user profiles, relationships, and interactions in the AWS Cloud. The company needs an application to monitor any changes in the database. The application needs to analyze the relationships between the data entities and to provide recommendations to users.

Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon Neptune to store the information. Use Amazon Kinesis Data Streams to process changes in the database.

B. Use Amazon Neptune to store the information. Use Neptune Streams to process changes in the database.

C. Use Amazon Quantum Ledger Database (Amazon QLDB) to store the information. Use Amazon Kinesis Data Streams to process changes in the database.

D. Use Amazon Quantum Ledger Database (Amazon QLDB) to store the information. Use Neptune Streams to process changes in the database.
Suggested answer: B

Explanation:

Amazon Neptune: Neptune is a fully managed graph database service that is optimized for storing and querying highly connected data. It supports both property graph and RDF graph models, making it suitable for applications that need to analyze relationships between data entities.

Neptune Streams: Neptune Streams captures changes to the graph and streams these changes to other AWS services. This is useful for applications that need to monitor and respond to changes in real-time, such as providing recommendations based on user interactions and relationships.

Least Operational Overhead: Using Neptune Streams directly with Amazon Neptune ensures that the solution is tightly integrated, reducing the need for additional components and minimizing operational overhead. This integration simplifies the architecture by eliminating the need for a separate service like Kinesis for change processing.

Amazon Neptune Documentation

Neptune Streams Documentation
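
A rough sketch of reading the change feed with Neptune Streams over the cluster's HTTP endpoint; the cluster endpoint is a placeholder, the path and query parameters follow the Neptune Streams documentation as I understand it, and IAM (SigV4) request signing is omitted for brevity.

```python
import requests

# Placeholder Neptune cluster endpoint; Neptune listens on port 8182 by default.
NEPTUNE_ENDPOINT = "https://my-neptune-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com:8182"

def read_changes(commit_num=1, op_num=1, limit=100):
    """Fetch a batch of change records from the Neptune property-graph stream."""
    response = requests.get(
        f"{NEPTUNE_ENDPOINT}/propertygraph/stream",
        params={
            "iteratorType": "AT_SEQUENCE_NUMBER",
            "commitNum": commit_num,
            "opNum": op_num,
            "limit": limit,
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

batch = read_changes()
for record in batch.get("records", []):
    # Each record describes one change (add/remove of a vertex, edge, or property),
    # which a recommendation component could consume to react to user interactions.
    print(record["op"], record["data"])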

A company uses an Amazon S3 bucket as its data lake storage platform. The S3 bucket contains a massive amount of data that is accessed randomly by multiple teams and hundreds of applications. The company wants to reduce the S3 storage costs and provide immediate availability for frequently accessed objects.

What is the MOST operationally efficient solution that meets these requirements?

A. Create an S3 Lifecycle rule to transition objects to the S3 Intelligent-Tiering storage class.

B. Store objects in Amazon S3 Glacier. Use S3 Select to provide applications with access to the data.

C. Use data from S3 storage class analysis to create S3 Lifecycle rules to automatically transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class.

D. Transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create an AWS Lambda function to transition objects to the S3 Standard storage class when they are accessed by an application.
Suggested answer: A

Explanation:

Amazon S3 Intelligent-Tiering: This storage class is designed to optimize costs by automatically moving objects between access tiers as access patterns change. It provides cost savings without performance impact or operational overhead, and frequently accessed objects remain immediately available.

S3 Lifecycle Rules: By creating an S3 Lifecycle rule, the company can automatically transition objects to the Intelligent-Tiering storage class. This eliminates the need for manual intervention and ensures that objects are moved to the most cost-effective storage tier based on their access patterns.

Operational Efficiency: Intelligent-Tiering requires no additional management and delivers immediate availability for frequently accessed objects. This makes it the most operationally efficient solution for the given requirements.

Amazon S3 Intelligent-Tiering

S3 Lifecycle Policies
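
A minimal boto3 sketch of the Lifecycle rule in option A; the bucket name is a hypothetical placeholder.

```python
import boto3

s3 = boto3.client("s3")

# Lifecycle rule that moves every object to S3 Intelligent-Tiering.
# "data-lake-bucket" is a hypothetical bucket name.
s3.put_bucket_lifecycle_configuration(
    Bucket="data-lake-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "move-to-intelligent-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},          # apply to all objects
                "Transitions": [
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ]
    },
)
```

New objects could also be uploaded directly with the INTELLIGENT_TIERING storage class, but the Lifecycle rule handles the existing data in the bucket.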

A company needs to optimize its Amazon S3 storage costs for an application that generates many files that cannot be recreated. Each file is approximately 5 MB and is stored in Amazon S3 Standard storage.

The company must store the files for 4 years before the files can be deleted. The files must be immediately accessible. The files are frequently accessed in the first 30 days of object creation, but they are rarely accessed after the first 30 days.

Which solution will meet these requirements MOST cost-effectively?

A. Create an S3 Lifecycle policy to move the files to S3 Glacier Instant Retrieval 30 days after object creation. Delete the files 4 years after object creation.

B. Create an S3 Lifecycle policy to move the files to S3 One Zone-Infrequent Access (S3 One Zone-IA) 30 days after object creation. Delete the files 4 years after object creation.

C. Create an S3 Lifecycle policy to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days after object creation. Delete the files 4 years after object creation.

D. Create an S3 Lifecycle policy to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days after object creation. Move the files to S3 Glacier Flexible Retrieval 4 years after object creation.
Suggested answer: C

Explanation:

Amazon S3 Standard-IA: This storage class is designed for data that is accessed less frequently but requires rapid access when needed. It offers lower storage costs compared to S3 Standard while still providing high availability and durability.

Access Patterns: Since the files are frequently accessed in the first 30 days and rarely accessed afterward, transitioning them to S3 Standard-IA after 30 days aligns with their access patterns and reduces storage costs significantly. Because the files cannot be recreated, S3 One Zone-IA is not appropriate: it stores data in only one Availability Zone and offers lower resilience.

Lifecycle Policy: Implementing a lifecycle policy to transition the files to S3 Standard-IA ensures automatic management of the data lifecycle, moving files to a lower-cost storage class without manual intervention. Deleting the files after 4 years further optimizes costs by removing data that is no longer needed.

Amazon S3 Storage Classes

S3 Lifecycle Configuration
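
A minimal boto3 sketch of the Lifecycle policy in option C, transitioning to S3 Standard-IA after 30 days and expiring objects after roughly 4 years; the bucket name is a hypothetical placeholder.

```python
import boto3

s3 = boto3.client("s3")

# Transition to Standard-IA after 30 days, expire after ~4 years (4 * 365 days).
# "generated-files-bucket" is a hypothetical bucket name.
s3.put_bucket_lifecycle_configuration(
    Bucket="generated-files-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "ia-then-delete",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"}
                ],
                "Expiration": {"Days": 1460},
            }
        ]
    },
)
```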

A company runs an AWS Lambda function in private subnets in a VPC. The subnets have a default route to the internet through an Amazon EC2 NAT instance. The Lambda function processes input data and saves its output as an object to Amazon S3.

Intermittently, the Lambda function times out while trying to upload the object because of saturated traffic on the NAT instance's network. The company wants to access Amazon S3 without traversing the internet.

Which solution will meet these requirements?

A. Replace the EC2 NAT instance with an AWS managed NAT gateway.

B. Increase the size of the EC2 NAT instance in the VPC to a network-optimized instance type.

C. Provision a gateway endpoint for Amazon S3 in the VPC. Update the route tables of the subnets accordingly.

D. Provision a transit gateway. Place transit gateway attachments in the private subnets where the Lambda function is running.
Suggested answer: C

Explanation:

Gateway Endpoint for Amazon S3: A VPC endpoint for Amazon S3 allows you to privately connect your VPC to Amazon S3 without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.

Provisioning the Endpoint:

Navigate to the VPC Dashboard.

Select 'Endpoints' and create a new endpoint.

Choose the service name for S3 (com.amazonaws.region.s3).

Select the appropriate VPC and subnets.

Adjust the route tables of the subnets to include the new endpoint.

Update Route Tables: Modify the route tables of the subnets to direct traffic destined for S3 to the newly created endpoint. This ensures that traffic to S3 does not go through the NAT instance, avoiding the saturated network and eliminating timeouts.

Operational Efficiency: This solution minimizes operational overhead by removing dependency on the NAT instance and avoiding internet traffic, leading to more stable and secure S3 interactions.

VPC Endpoints for Amazon S3

Creating a Gateway Endpoint
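
A minimal boto3 sketch of the gateway endpoint in option C; the VPC ID, Region, and route table IDs are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3; VPC and route table IDs are hypothetical placeholders.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0private1", "rtb-0private2"],   # private-subnet route tables
)["VpcEndpoint"]

# S3-bound traffic from the associated subnets now stays on the AWS network;
# the NAT instance is no longer in the path for these requests.
print(endpoint["VpcEndpointId"], endpoint["State"])
```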

A solutions architect is creating an application. The application will run on Amazon EC2 instances in private subnets across multiple Availability Zones in a VPC. The EC2 instances will frequently access large files that contain confidential information. These files are stored in Amazon S3 buckets for processing. The solutions architect must optimize the network architecture to minimize data transfer costs.

What should the solutions architect do to meet these requirements?

A. Create a gateway endpoint for Amazon S3 in the VPC. In the route tables for the private subnets, add an entry for the gateway endpoint.

B. Create a single NAT gateway in a public subnet. In the route tables for the private subnets, add a default route that points to the NAT gateway.

C. Create an AWS PrivateLink interface endpoint for Amazon S3 in the VPC. In the route tables for the private subnets, add an entry for the interface endpoint.

D. Create one NAT gateway for each Availability Zone in public subnets. In each of the route tables for the private subnets, add a default route that points to the NAT gateway in the same Availability Zone.
Suggested answer: A

Explanation:

Understanding the Requirement: The application running on EC2 instances in private subnets needs frequent access to large confidential files stored in S3, minimizing data transfer costs.

Analysis of Options:

Gateway Endpoint for S3: Provides a secure, scalable, and cost-effective way for instances in private subnets to access S3 without using the internet or NAT gateways, thus minimizing data transfer costs.

Single NAT Gateway: Incurs additional costs for data transfer through the NAT gateway, which is not cost-effective.

PrivateLink Interface Endpoint for S3: Also provides private connectivity, but interface endpoints incur hourly and data processing charges, whereas gateway endpoints for S3 have no additional cost; interface endpoints are also reached through elastic network interfaces and DNS rather than route table entries.

Multiple NAT Gateways: Increases costs significantly and adds complexity without offering the cost benefits of a gateway endpoint.

Best Solution:

Gateway Endpoint for S3: This solution provides the required access with the least data transfer costs and minimal complexity.

VPC Endpoints for Amazon S3

Gateway Endpoints
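
Building on the same API, here is a sketch that adds an endpoint policy so the gateway endpoint allows access only to the bucket that holds the confidential files; the VPC ID, route table IDs, and bucket name are hypothetical placeholders.

```python
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Endpoint policy limiting the gateway endpoint to one bucket (hypothetical name).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::confidential-files",
                "arn:aws:s3:::confidential-files/*",
            ],
        }
    ],
}

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0private1", "rtb-0private2"],
    PolicyDocument=json.dumps(policy),
)
```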

A company hosts an application on Amazon EC2 On-Demand Instances in an Auto Scaling group. Application peak hours occur at the same time each day. Application users report slow application performance at the start of peak hours. The application performs normally 2-3 hours after peak hours begin. The company wants to ensure that the application works properly at the start of peak hours.

Which solution will meet these requirements?

A. Configure an Application Load Balancer to distribute traffic properly to the instances.

B. Configure a dynamic scaling policy for the Auto Scaling group to launch new instances based on memory utilization.

C. Configure a dynamic scaling policy for the Auto Scaling group to launch new instances based on CPU utilization.

D. Configure a scheduled scaling policy for the Auto Scaling group to launch new instances before peak hours.
Suggested answer: D

Explanation:

Understanding the Requirement: The application experiences slow performance at the start of peak hours, but normalizes after a few hours. The goal is to ensure proper performance at the beginning of peak hours.

Analysis of Options:

Application Load Balancer: Ensures proper traffic distribution but does not address the need to have sufficient instances running at the start of peak hours.

Dynamic Scaling Policy Based on Memory or CPU Utilization: While dynamic scaling reacts to usage metrics, it may not preemptively scale in anticipation of peak hours, leading to delays as new instances are launched and become available.

Scheduled Scaling Policy: This allows the Auto Scaling group to launch instances ahead of time, ensuring that enough instances are available and ready to handle the increased load right at the start of peak hours.

Best Solution:

Scheduled Scaling Policy: This approach ensures that new instances are launched and ready before peak hours begin, addressing the slow performance issue at the start of peak periods.

Scheduled Scaling for Amazon EC2 Auto Scaling
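
A minimal boto3 sketch of the scheduled scaling policy in option D; the group name, recurrence times, and capacities are hypothetical placeholders chosen for illustration.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out 30 minutes before the daily peak and scale back in afterwards.
# Group name, times, and capacities are hypothetical placeholders.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-app-asg",
    ScheduledActionName="pre-peak-scale-out",
    Recurrence="30 7 * * *",          # 07:30 UTC every day, ahead of an 08:00 peak
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)

autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-app-asg",
    ScheduledActionName="post-peak-scale-in",
    Recurrence="0 18 * * *",          # 18:00 UTC, after peak hours end
    MinSize=2,
    MaxSize=12,
    DesiredCapacity=2,
)
```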

A company has a web application in the AWS Cloud and wants to collect transaction data in real time. The company wants to prevent data duplication and does not want to manage infrastructure. The company wants to perform additional processing on the data after the data is collected.

Which solution will meet these requirements?

A. Configure an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Configure an AWS Lambda function with an event source mapping for the FIFO queue to process the data.

B. Configure an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Use an AWS Batch job to remove duplicate data from the queue. Configure an AWS Lambda function to process the data.

C. Use Amazon Kinesis Data Streams to send the incoming transaction data to an AWS Batch job that removes duplicate data. Launch an Amazon EC2 instance that runs a custom script to process the data.

D. Set up an AWS Step Functions state machine to send incoming transaction data to an AWS Lambda function to remove duplicate data. Launch an Amazon EC2 instance that runs a custom script to process the data.
Suggested answer: A

Explanation:

Understanding the Requirement: The company needs to collect transaction data in real time, avoid data duplication, and perform additional processing without managing infrastructure.

Analysis of Options:

SQS FIFO Queue with Lambda: FIFO queues preserve message order and prevent duplication through message deduplication IDs or content-based deduplication. Lambda handles processing without the need to manage servers.

SQS FIFO Queue with AWS Batch: While this ensures no duplicates, it introduces additional complexity and management overhead with AWS Batch.

Kinesis Data Streams with AWS Batch and EC2: Involves more components and infrastructure management, which conflicts with the requirement to avoid managing infrastructure.

Step Functions with Lambda and EC2: Involves setting up multiple services and still requires managing EC2 instances, increasing complexity.

Best Solution:

SQS FIFO Queue with Lambda: This combination ensures real-time data processing, prevents duplication, and minimizes infrastructure management, meeting all requirements efficiently.

Amazon SQS FIFO Queues

AWS Lambda and SQS Integration
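
A minimal boto3 sketch of option A: a FIFO queue with content-based deduplication plus an event source mapping that lets Lambda poll the queue; the queue and function names are hypothetical placeholders.

```python
import boto3

sqs = boto3.client("sqs")
lambda_client = boto3.client("lambda")

# FIFO queue with content-based deduplication to drop duplicate transactions.
queue = sqs.create_queue(
    QueueName="transactions.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
    },
)
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue["QueueUrl"], AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Event source mapping so Lambda polls the queue and processes each batch.
# "process-transactions" is a hypothetical function name.
lambda_client.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="process-transactions",
    BatchSize=10,
)
```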
