Amazon SAA-C03 Practice Test - Questions Answers, Page 53

A social media company runs its application on Amazon EC2 instances behind an Application Load Balancer (ALB). The ALB is the origin for an Amazon CloudFront distribution. The application has more than a billion images stored in an Amazon S3 bucket and processes thousands of images each second. The company wants to resize the images dynamically and serve appropriate formats to clients.

Which solution will meet these requirements with the LEAST operational overhead?

A. Install an external image management library on an EC2 instance. Use the image management library to process the images.
B. Create a CloudFront origin request policy. Use the policy to automatically resize images and to serve the appropriate format based on the User-Agent HTTP header in the request.
C. Use a Lambda@Edge function with an external image management library. Associate the Lambda@Edge function with the CloudFront behaviors that serve the images.
D. Create a CloudFront response headers policy. Use the policy to automatically resize images and to serve the appropriate format based on the User-Agent HTTP header in the request.
Suggested answer: C

Explanation:

To resize images dynamically and serve appropriate formats to clients, a Lambda@Edge function with an external image management library can be used. Lambda@Edge allows running custom code at the edge locations of CloudFront, which can process the images on the fly and optimize them for different devices and browsers. An external image management library can provide various image manipulation and optimization features.
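
A minimal sketch of this pattern in Python follows (the bucket name, the width query parameter, and the choice of Pillow as the image library are illustrative assumptions, not part of the question). It runs as a CloudFront origin-response trigger: when a resize is requested, it fetches the original object from S3, resizes it, and returns the resized image in a modern format.

    import base64
    import io

    import boto3
    from PIL import Image  # the "external image management library" (assumption)

    s3 = boto3.client("s3")
    BUCKET = "example-images-bucket"  # hypothetical image bucket

    def handler(event, context):
        cf = event["Records"][0]["cf"]
        request, response = cf["request"], cf["response"]

        # e.g. /photos/cat.jpg?width=300 asks for a 300 px wide variant.
        query = dict(kv.split("=", 1) for kv in request["querystring"].split("&") if "=" in kv)
        width = int(query.get("width", 0))
        if not width or response["status"] != "200":
            return response  # pass the origin response through unchanged

        key = request["uri"].lstrip("/")
        original = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()

        image = Image.open(io.BytesIO(original))
        image.thumbnail((width, image.height))  # resize, keeping aspect ratio
        buffer = io.BytesIO()
        image.save(buffer, format="WEBP")

        # Replace the origin response body with the resized image.
        response["body"] = base64.b64encode(buffer.getvalue()).decode()
        response["bodyEncoding"] = "base64"
        response["headers"]["content-type"] = [
            {"key": "Content-Type", "value": "image/webp"}
        ]
        return response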

Reference:

Lambda@Edge

Resizing Images with Amazon CloudFront & Lambda@Edge

A company hosts an application on Amazon EC2 instances that run in a single Availability Zone. The application is accessible by using the transport layer of the Open Systems Interconnection (OSI) model. The company needs the application architecture to have high availability. Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)

A. Configure new EC2 instances in a different Availability Zone. Use Amazon Route 53 to route traffic to all instances.
B. Configure a Network Load Balancer in front of the EC2 instances.
C. Configure a Network Load Balancer for TCP traffic to the instances. Configure an Application Load Balancer for HTTP and HTTPS traffic to the instances.
D. Create an Auto Scaling group for the EC2 instances. Configure the Auto Scaling group to use multiple Availability Zones. Configure the Auto Scaling group to run application health checks on the instances.
E. Create an Amazon CloudWatch alarm. Configure the alarm to restart EC2 instances that transition to a stopped state.
Suggested answer: A, D

Explanation:

To achieve high availability most cost-effectively, the application should run in more than one Availability Zone. Launching additional EC2 instances in a second Availability Zone and using Amazon Route 53 to route traffic to all instances removes the single-AZ point of failure, and an Auto Scaling group that spans multiple Availability Zones can replace failed instances automatically and run application health checks. This combination distributes traffic and recovers from instance or Availability Zone failures while avoiding the recurring cost of a load balancer.
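
A sketch of both steps with boto3 follows (the group name, launch template, subnet IDs, hosted zone, and IP address are hypothetical placeholders): an Auto Scaling group spanning two Availability Zones, plus one Route 53 multivalue answer record; creating one such record per instance lets Route 53 spread traffic across all of them.

    import boto3

    autoscaling = boto3.client("autoscaling")
    route53 = boto3.client("route53")

    # Option D: an Auto Scaling group that spans subnets in two
    # Availability Zones and replaces instances that fail health checks.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="app-asg",                          # assumption
        LaunchTemplate={"LaunchTemplateName": "app-lt", "Version": "$Latest"},
        MinSize=2,
        MaxSize=4,
        VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",         # two AZs
        HealthCheckType="EC2",
        HealthCheckGracePeriod=120,
    )

    # Option A: a multivalue answer record for one instance.
    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "instance-1",
                "MultiValueAnswer": True,
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]},
    )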

Reference:

Auto Scaling Groups

What Is a Network Load Balancer?

A company is preparing a new data platform that will ingest real-time streaming data from multiple sources. The company needs to transform the data before writing the data to Amazon S3. The company needs the ability to use SQL to query the transformed data.

Which solutions will meet these requirements? (Choose two.)

A. Use Amazon Kinesis Data Streams to stream the data. Use Amazon Kinesis Data Analytics to transform the data. Use Amazon Kinesis Data Firehose to write the data to Amazon S3. Use Amazon Athena to query the transformed data from Amazon S3.
B. Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to stream the data. Use AWS Glue to transform the data and to write the data to Amazon S3. Use Amazon Athena to query the transformed data from Amazon S3.
C. Use AWS Database Migration Service (AWS DMS) to ingest the data. Use Amazon EMR to transform the data and to write the data to Amazon S3. Use Amazon Athena to query the transformed data from Amazon S3.
D. Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to stream the data. Use Amazon Kinesis Data Analytics to transform the data and to write the data to Amazon S3. Use the Amazon RDS query editor to query the transformed data from Amazon S3.
E. Use Amazon Kinesis Data Streams to stream the data. Use AWS Glue to transform the data. Use Amazon Kinesis Data Firehose to write the data to Amazon S3. Use the Amazon RDS query editor to query the transformed data from Amazon S3.
Suggested answer: A, B

Explanation:

To ingest, transform, and query real-time streaming data from multiple sources, Amazon Kinesis and Amazon MSK are suitable solutions. Amazon Kinesis Data Streams can stream the data from various sources and integrate with other AWS services. Amazon Kinesis Data Analytics can transform the data using SQL or Apache Flink. Amazon Kinesis Data Firehose can write the data to Amazon S3 or other destinations. Amazon Athena can query the transformed data from Amazon S3 using standard SQL. Amazon MSK can stream the data using Apache Kafka, which is a popular open-source platform for streaming data. AWS Glue can transform the data using Apache Spark or Python scripts and write the data to Amazon S3 or other destinations. Amazon Athena can also query the transformed data from Amazon S3 using standard SQL.
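
A brief sketch of the two ends of the option A pipeline (the stream, database, table, and bucket names are hypothetical): a producer writes raw events to Kinesis Data Streams, and once Kinesis Data Firehose has delivered the transformed records to Amazon S3, Athena queries them with standard SQL.

    import json

    import boto3

    kinesis = boto3.client("kinesis")
    athena = boto3.client("athena")

    # Producer side: stream a raw event into Kinesis Data Streams.
    kinesis.put_record(
        StreamName="ingest-stream",                      # assumption
        Data=json.dumps({"sensor_id": "s-1", "value": 42}).encode(),
        PartitionKey="s-1",
    )

    # Consumer side: query the transformed data that Firehose wrote to S3.
    athena.start_query_execution(
        QueryString="SELECT sensor_id, avg(value) FROM events GROUP BY sensor_id",
        QueryExecutionContext={"Database": "analytics"},  # assumption
        ResultConfiguration={"OutputLocation": "s3://query-results-bucket/"},
    )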

Reference:

What Is Amazon Kinesis Data Streams?

What Is Amazon Kinesis Data Analytics?

What Is Amazon Kinesis Data Firehose?

What Is Amazon Athena?

What Is Amazon MSK?

What Is AWS Glue?

A company has two VPCs that are located in the us-west-2 Region within the same AWS account. The company needs to allow network traffic between these VPCs. Approximately 500 GB of data transfer will occur between the VPCs each month.

What is the MOST cost-effective solution to connect these VPCs?

A. Implement AWS Transit Gateway to connect the VPCs. Update the route tables of each VPC to use the transit gateway for inter-VPC communication.
B. Implement an AWS Site-to-Site VPN tunnel between the VPCs. Update the route tables of each VPC to use the VPN tunnel for inter-VPC communication.
C. Set up a VPC peering connection between the VPCs. Update the route tables of each VPC to use the VPC peering connection for inter-VPC communication.
D. Set up a 1 Gbps AWS Direct Connect connection between the VPCs. Update the route tables of each VPC to use the Direct Connect connection for inter-VPC communication.
Suggested answer: C

Explanation:

To connect two VPCs in the same Region within the same AWS account, VPC peering is the most cost-effective solution. VPC peering allows direct network traffic between the VPCs without requiring a gateway, VPN connection, or AWS Transit Gateway. A peering connection has no hourly charge, and per-GB data transfer over a peering connection costs less than sending the same traffic through a transit gateway or VPN, so it is the cheapest option for roughly 500 GB per month.
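
A sketch of option C with boto3 follows (the VPC IDs, route table ID, and CIDR block are hypothetical): create and accept the peering connection, then add a route in each VPC's route table that points at it.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")

    peering = ec2.create_vpc_peering_connection(
        VpcId="vpc-aaa111", PeerVpcId="vpc-bbb222"
    )["VpcPeeringConnection"]
    pcx_id = peering["VpcPeeringConnectionId"]

    # Same account and Region, so acceptance can be automated too.
    ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

    # One route per side; repeat with the other table and CIDR.
    ec2.create_route(
        RouteTableId="rtb-aaa111",            # route table of VPC A
        DestinationCidrBlock="10.1.0.0/16",   # CIDR of VPC B
        VpcPeeringConnectionId=pcx_id,
    )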

Reference:

What Is VPC Peering?

VPC Peering Pricing

A company wants to use an event-driven programming model with AWS Lambda. The company wants to reduce startup latency for Lambda functions that run on Java 11. The company does not have strict latency requirements for the applications. The company wants to reduce cold starts and outlier latencies when a function scales up.

Which solution will meet these requirements MOST cost-effectively?

A. Configure Lambda provisioned concurrency.
B. Increase the timeout of the Lambda functions.
C. Increase the memory of the Lambda functions.
D. Configure Lambda SnapStart.
Suggested answer: D

Explanation:

To reduce startup latency for Lambda functions that run on Java 11, Lambda SnapStart is the most cost-effective solution. When SnapStart is enabled, Lambda initializes the function once when a version is published, takes an encrypted snapshot of the initialized execution environment, and resumes new environments from that cached snapshot instead of initializing from scratch. This reduces cold starts and outlier latencies when the function scales up. SnapStart for Java is available at no additional cost, whereas provisioned concurrency (option A) incurs charges for the configured concurrency.
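
Enabling SnapStart is a small configuration change. A boto3 sketch follows (the function name is hypothetical); because SnapStart applies only to published versions, a new version is published after the update.

    import boto3

    lam = boto3.client("lambda")

    lam.update_function_configuration(
        FunctionName="orders-handler",                  # assumption
        SnapStart={"ApplyOn": "PublishedVersions"},
    )
    lam.get_waiter("function_updated_v2").wait(FunctionName="orders-handler")
    version = lam.publish_version(FunctionName="orders-handler")["Version"]
    # Point the alias or event source at this version to get
    # snapshot-based starts.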

Reference:

Lambda SnapStart for Java 11 Functions

Lambda SnapStart FAQs

A company has created a multi-tier application for its ecommerce website. The website uses an Application Load Balancer that resides in the public subnets, a web tier in the public subnets, and a MySQL cluster hosted on Amazon EC2 instances in the private subnets. The MySQL database needs to retrieve product catalog and pricing information that is hosted on the internet by a third-party provider. A solutions architect must devise a strategy that maximizes security without increasing operational overhead.

What should the solutions architect do to meet these requirements?

A. Deploy a NAT instance in the VPC. Route all the internet-based traffic through the NAT instance.
B. Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all internet-bound traffic to the NAT gateway.
C. Configure an internet gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the internet gateway.
D. Configure a virtual private gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the virtual private gateway.
Suggested answer: B

Explanation:

To allow the MySQL database in the private subnets to access the internet without exposing it to inbound connections, a NAT gateway is a suitable solution. A NAT gateway enables instances in a private subnet to connect to the internet or other AWS services but prevents the internet from initiating a connection with those instances. A NAT gateway resides in a public subnet and can handle a high throughput of traffic with low latency. It is also a managed service, so unlike the NAT instance in option A, it adds no patching, scaling, or failover work and therefore no extra operational overhead.
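
A sketch of option B with boto3 follows (the subnet and route table IDs are hypothetical): allocate an Elastic IP, create the NAT gateway in a public subnet, and add a default route from the private subnet's route table.

    import boto3

    ec2 = boto3.client("ec2")

    eip = ec2.allocate_address(Domain="vpc")
    nat = ec2.create_nat_gateway(
        SubnetId="subnet-public-1",           # a public subnet
        AllocationId=eip["AllocationId"],
    )["NatGateway"]

    ec2.get_waiter("nat_gateway_available").wait(
        NatGatewayIds=[nat["NatGatewayId"]]
    )

    # Internet-bound traffic from the private subnet goes through the
    # NAT gateway; inbound connections from the internet are never accepted.
    ec2.create_route(
        RouteTableId="rtb-private-1",         # private subnet route table
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat["NatGatewayId"],
    )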

Reference:

NAT Gateways

NAT Gateway Pricing

A solutions architect is designing a workload that will store hourly energy consumption by business tenants in a building. The sensors will feed a database through HTTP requests that will add up usage for each tenant. The solutions architect must use managed services when possible. The workload will receive more features in the future as the solutions architect adds independent components.

Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process the data, and store the data in an Amazon DynamoDB table.
B. Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2 instances to receive and process the data from the sensors. Use an Amazon S3 bucket to store the processed data.
C. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process the data, and store the data in a Microsoft SQL Server Express database on an Amazon EC2 instance.
D. Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2 instances to receive and process the data from the sensors. Use an Amazon Elastic File System (Amazon EFS) shared file system to store the processed data.
Suggested answer: A

Explanation:

To receive the sensor data over HTTP with the least operational overhead, Amazon API Gateway, AWS Lambda, and Amazon DynamoDB are all fully managed services. Amazon API Gateway can receive the HTTP requests from the sensors and invoke AWS Lambda functions to process the data. AWS Lambda runs code without provisioning or managing servers and scales automatically with the incoming requests. Amazon DynamoDB stores the data in a fast and flexible NoSQL database that handles any amount of data with consistent performance. Because each piece is an independent, event-driven component, new features can be added later without redesigning the workload.
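
A sketch of the Lambda handler behind API Gateway follows (the table name, key schema, and field names are hypothetical): each sensor POST atomically adds its reading to the tenant's hourly total in DynamoDB, so no component needs to coordinate with another.

    import json
    from decimal import Decimal

    import boto3

    table = boto3.resource("dynamodb").Table("tenant-energy-usage")  # assumption

    def handler(event, context):
        reading = json.loads(event["body"])
        table.update_item(
            Key={
                "tenant_id": reading["tenant_id"],
                "hour": reading["hour"],  # e.g. "2024-05-01T13"
            },
            # ADD is atomic, so concurrent sensor posts never lose an update.
            UpdateExpression="ADD kwh :v",
            ExpressionAttributeValues={":v": Decimal(str(reading["kwh"]))},
        )
        return {"statusCode": 200, "body": json.dumps({"ok": True})}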

Reference:

What Is Amazon API Gateway?

What Is AWS Lambda?

What Is Amazon DynamoDB?

A company has a financial application that produces reports. The reports average 50 KB in size and are stored in Amazon S3. The reports are frequently accessed during the first week after production and must be stored for several years. The reports must be retrievable within 6 hours.

Which solution meets these requirements MOST cost-effectively?

A. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier after 7 days.
B. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days.
C. Use S3 Intelligent-Tiering. Configure S3 Intelligent-Tiering to transition the reports to S3 Standard-Infrequent Access (S3 Standard-IA) and S3 Glacier.
D. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier Deep Archive after 7 days.
Suggested answer: A

Explanation:

To store reports that are frequently accessed during the first week and must be retained for several years, S3 Standard combined with S3 Glacier is the most cost-effective choice. S3 Standard offers high durability, availability, and performance for the frequently accessed first week. An S3 Lifecycle rule can transition the reports to S3 Glacier after 7 days, which reduces storage costs for the long retention period. Standard retrievals from S3 Glacier typically complete in 3-5 hours, meeting the 6-hour requirement, whereas S3 Glacier Deep Archive (option D) has a standard retrieval time of about 12 hours and S3 Standard-IA (option B) costs more for multi-year storage.
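
A sketch of the lifecycle rule with boto3 follows (the bucket name and prefix are hypothetical): reports stay in S3 Standard for the first week, then transition to S3 Glacier.

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="financial-reports-bucket",        # assumption
        LifecycleConfiguration={
            "Rules": [{
                "ID": "archive-reports-after-7-days",
                "Filter": {"Prefix": "reports/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
            }]
        },
    )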

Reference:

Storage Classes

Object Lifecycle Management

Retrieving Archived Objects from Amazon S3 Glacier

A company has data collection sensors at different locations. The data collection sensors stream a high volume of data to the company. The company wants to design a platform on AWS to ingest and process high-volume streaming data. The solution must be scalable and support data collection in near real time. The company must store the data in Amazon S3 for future reporting.

Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon Kinesis Data Firehose to deliver streaming data to Amazon S3.
B. Use AWS Glue to deliver streaming data to Amazon S3.
C. Use AWS Lambda to deliver streaming data and store the data to Amazon S3.
D. Use AWS Database Migration Service (AWS DMS) to deliver streaming data to Amazon S3.
Suggested answer: A

Explanation:

To ingest and process high-volume streaming data with the least operational overhead, Amazon Kinesis Data Firehose is a suitable solution. Amazon Kinesis Data Firehose can capture, transform, and deliver streaming data to Amazon S3 or other destinations. Amazon Kinesis Data Firehose can scale automatically to match the throughput of the data and handle any amount of data. Amazon Kinesis Data Firehose is also a fully managed service that does not require any servers to provision or manage.
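
A sketch of the producer side follows (the delivery stream name is hypothetical): a sensor, or a thin gateway in front of it, pushes records to the Firehose delivery stream, and Firehose handles buffering, batching, and delivery to S3 with no servers to manage.

    import json

    import boto3

    firehose = boto3.client("firehose")

    firehose.put_record(
        DeliveryStreamName="sensor-data-to-s3",   # assumption
        Record={"Data": (json.dumps({"sensor": "s-7", "reading": 21.4}) + "\n").encode()},
    )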

Reference:

What Is Amazon Kinesis Data Firehose?

Amazon Kinesis Data Firehose Pricing

A solutions architect is designing the storage architecture for a new web application used for storing and viewing engineering drawings. All application components will be deployed on the AWS infrastructure.

The application design must support caching to minimize the amount of time that users wait for the engineering drawings to load. The application must be able to store petabytes of data.

Which combination of storage and caching should the solutions architect use?

A. Amazon S3 with Amazon CloudFront
B. Amazon S3 Glacier with Amazon ElastiCache
C. Amazon Elastic Block Store (Amazon EBS) volumes with Amazon CloudFront
D. AWS Storage Gateway with Amazon ElastiCache
Suggested answer: A

Explanation:

To store and view engineering drawings with caching support, Amazon S3 and Amazon CloudFront are suitable solutions. Amazon S3 can store any amount of data with high durability, availability, and performance. Amazon CloudFront can distribute the engineering drawings to edge locations closer to the users, which can reduce the latency and improve the user experience. Amazon CloudFront can also cache the engineering drawings at the edge locations, which can minimize the amount of time that users wait for the drawings to load.
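
One caching detail worth noting: CloudFront honors the Cache-Control header on S3 objects. The sketch below (the bucket, key, file, and content type are hypothetical) uploads a drawing with a Cache-Control header so that edge locations keep it cached for repeat viewers.

    import boto3

    s3 = boto3.client("s3")

    with open("bridge-span.dwg", "rb") as f:
        s3.put_object(
            Bucket="engineering-drawings-bucket",  # assumption
            Key="drawings/bridge-span.dwg",
            Body=f,
            CacheControl="public, max-age=86400",  # cache at the edge for a day
            ContentType="application/acad",
        )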

Reference:

What Is Amazon S3?

What Is Amazon CloudFront?
