
Amazon SAA-C03 Practice Test - Questions Answers, Page 87

A company wants to publish a private website for its on-premises employees. The website consists of several HTML pages and image files. The website must be available only through HTTPS and must be available only to on-premises employees. A solutions architect plans to store the website files in an Amazon S3 bucket.

Which solution will meet these requirements?

A. Create an S3 bucket policy to deny access when the source IP address is not the public IP address of the on-premises environment. Set up an Amazon Route 53 alias record to point to the S3 bucket. Provide the alias record to the on-premises employees to grant the employees access to the website.

B. Create an S3 access point to provide website access. Attach an access point policy to deny access when the source IP address is not the public IP address of the on-premises environment. Provide the S3 access point alias to the on-premises employees to grant the employees access to the website.

C. Create an Amazon CloudFront distribution that includes an origin access control (OAC) that is configured for the S3 bucket. Use AWS Certificate Manager for SSL. Use AWS WAF with an IP set rule that allows access for the on-premises IP address. Set up an Amazon Route 53 alias record to point to the CloudFront distribution.

D. Create an Amazon CloudFront distribution that includes an origin access control (OAC) that is configured for the S3 bucket. Create a CloudFront signed URL for the objects in the bucket. Set up an Amazon Route 53 alias record to point to the CloudFront distribution. Provide the signed URL to the on-premises employees to grant the employees access to the website.

Suggested answer: C

Explanation:

This solution uses CloudFront to serve the website securely over HTTPS using AWS Certificate Manager (ACM) for SSL certificates. Origin Access Control (OAC) ensures that only CloudFront can access the S3 bucket directly. AWS WAF with an IP set rule restricts access to the website, allowing only the on-premises IP address. Route 53 is used to create an alias record pointing to the CloudFront distribution. This setup ensures secure, private access to the website with low administrative overhead.

Options A and B: Amazon S3 static website endpoints do not support HTTPS, and bucket policies or access points alone do not provide the same level of protection as CloudFront with WAF.

Option D: Signed URLs are more suitable for temporary, expiring access rather than a permanent solution for on-premises employees.
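For illustration, a minimal boto3 sketch of the bucket policy that OAC relies on, so that only the CloudFront distribution can read the website files (the bucket name, account ID, and distribution ID are hypothetical):

```python
import json

import boto3

s3 = boto3.client("s3")

# Allow only the CloudFront service principal, scoped to this specific
# distribution, to read objects; all other access is implicitly denied.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontServicePrincipalReadOnly",
        "Effect": "Allow",
        "Principal": {"Service": "cloudfront.amazonaws.com"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::internal-website-bucket/*",
        "Condition": {
            "StringEquals": {
                "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
            }
        },
    }],
}
s3.put_bucket_policy(Bucket="internal-website-bucket", Policy=json.dumps(policy))
```

The AWS WAF web ACL attached to the distribution then allows only the on-premises IP range, while CloudFront and ACM provide HTTPS.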

AWS Reference:

Amazon CloudFront with Origin Access Control

A company needs a solution to enforce data encryption at rest on Amazon EC2 instances. The solution must automatically identify noncompliant resources and enforce compliance policies on findings.

Which solution will meet these requirements with the LEAST administrative overhead?

A. Use an IAM policy that allows users to create only encrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Config and AWS Systems Manager to automate the detection and remediation of unencrypted EBS volumes.

B. Use AWS Key Management Service (AWS KMS) to manage access to encrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Lambda and Amazon EventBridge to automate the detection and remediation of unencrypted EBS volumes.

C. Use Amazon Macie to detect unencrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Systems Manager Automation rules to automatically encrypt existing and new EBS volumes.

D. Use Amazon Inspector to detect unencrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Systems Manager Automation rules to automatically encrypt existing and new EBS volumes.

Suggested answer: A

Explanation:

The best solution to enforce encryption at rest for Amazon EBS volumes is to use an IAM policy to restrict the creation of unencrypted volumes. To automatically identify and remediate unencrypted volumes, you can use AWS Config rules, which continuously monitor the compliance of resources, and AWS Systems Manager to automate the remediation by encrypting existing unencrypted volumes. This setup requires minimal administrative overhead while ensuring compliance.

Option B (KMS): AWS KMS manages encryption keys; it does not detect or remediate unencrypted volumes, which AWS Config and Systems Manager handle directly.

Option C (Macie): Macie discovers and classifies sensitive data in Amazon S3; it does not evaluate EBS volumes.

Option D (Inspector): Inspector scans for software vulnerabilities and unintended network exposure, not encryption compliance.
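As a sketch of the detection half, the AWS managed Config rule for EBS encryption can be enabled with boto3 (the rule name below is a hypothetical choice; ENCRYPTED_VOLUMES is the managed rule's source identifier):

```python
import boto3

config = boto3.client("config")

# Enable the AWS managed rule that marks unencrypted EBS volumes as
# noncompliant; a Systems Manager Automation runbook can then be attached
# as the remediation action for NON_COMPLIANT findings.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-volumes-must-be-encrypted",
        "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
    }
)
```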

AWS Reference:

AWS Config Rules

AWS Systems Manager

A company deploys its applications on Amazon Elastic Kubernetes Service (Amazon EKS) behind an Application Load Balancer in an AWS Region. The application needs to store data in a PostgreSQL database engine. The company wants the data in the database to be highly available. The company also needs increased capacity for read workloads.

Which solution will meet these requirements with the MOST operational efficiency?

A. Create an Amazon DynamoDB database table configured with global tables.

B. Create an Amazon RDS database with Multi-AZ deployments.

C. Create an Amazon RDS database with a Multi-AZ DB cluster deployment.

D. Create an Amazon RDS database configured with cross-Region read replicas.

Suggested answer: C

Explanation:

An Amazon RDS Multi-AZ DB cluster deployment provides high availability by maintaining one writer DB instance and two readable standby DB instances in separate Availability Zones, with automatic failover if the writer's AZ fails. Because the standbys are readable through the cluster's reader endpoint, the deployment also adds capacity for read workloads. This meets both requirements with the most operational efficiency and minimal manual intervention.

Option A (DynamoDB): DynamoDB is not suitable for a relational database workload, which requires a PostgreSQL engine.

Option B (RDS with Multi-AZ): While this provides high availability, it doesn't offer read scaling capabilities.

Option D (Cross-Region Read Replicas): This adds complexity and is not necessary if the requirement is high availability within a single region.
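A minimal boto3 sketch of creating such a cluster (the identifier, instance class, and storage figures are hypothetical):

```python
import boto3

rds = boto3.client("rds")

# A Multi-AZ DB cluster: one writer plus two readable standbys spread
# across separate Availability Zones. Read traffic goes through the
# cluster's reader endpoint.
rds.create_db_cluster(
    DBClusterIdentifier="app-postgres-cluster",
    Engine="postgres",
    MasterUsername="postgres",
    ManageMasterUserPassword=True,  # RDS stores the password in Secrets Manager
    DBClusterInstanceClass="db.m6gd.large",
    StorageType="io1",
    Iops=3000,
    AllocatedStorage=400,
)
```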

AWS Reference:

Amazon RDS Multi-AZ DB Cluster

A manufacturing company runs an order processing application in its VPC. The company wants to securely send messages from the application to an external Salesforce system that uses Open Authorization (OAuth).

A solutions architect needs to integrate the company's order processing application with the external Salesforce system.

Which solution will meet these requirements?

A. Create an Amazon Simple Notification Service (Amazon SNS) topic in a fanout configuration that pushes data to an HTTPS endpoint. Configure the order processing application to publish messages to the SNS topic.

B. Create an Amazon Simple Notification Service (Amazon SNS) topic in a fanout configuration that pushes data to an Amazon Data Firehose delivery stream that has an HTTP destination. Configure the order processing application to publish messages to the SNS topic.

C. Create an Amazon EventBridge rule and configure an Amazon EventBridge API destination partner. Configure the order processing application to publish messages to Amazon EventBridge.

D. Create an Amazon Managed Streaming for Apache Kafka (Amazon MSK) topic that has an outbound MSK Connect connector. Configure the order processing application to publish messages to the MSK topic.

Suggested answer: C

Explanation:

Amazon EventBridge API destinations send events from AWS to external HTTP APIs, including APIs secured with OAuth, such as Salesforce. The associated EventBridge connection stores the OAuth client credentials, and EventBridge obtains and refreshes the access token automatically, providing a secure and scalable way to forward messages from the order processing application to Salesforce.

Options A and B (SNS): SNS HTTPS deliveries and Firehose HTTP destinations cannot authenticate to an external API by using OAuth.

Option D (MSK): Amazon MSK is a Kafka-based streaming solution, which is overkill for simple message forwarding to Salesforce.
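A minimal boto3 sketch of the EventBridge side (the endpoint URLs, names, and credentials are hypothetical placeholders):

```python
import boto3

events = boto3.client("events")

# The connection stores the OAuth client-credentials configuration;
# EventBridge obtains and refreshes the token automatically.
conn = events.create_connection(
    Name="salesforce-oauth",
    AuthorizationType="OAUTH_CLIENT_CREDENTIALS",
    AuthParameters={
        "OAuthParameters": {
            "AuthorizationEndpoint": "https://login.salesforce.com/services/oauth2/token",
            "HttpMethod": "POST",
            "ClientParameters": {
                "ClientID": "example-client-id",
                "ClientSecret": "example-client-secret",
            },
        }
    },
)

# The API destination targets the external Salesforce endpoint through
# the connection; an EventBridge rule then routes order events to it.
events.create_api_destination(
    Name="salesforce-orders",
    ConnectionArn=conn["ConnectionArn"],
    InvocationEndpoint="https://example.my.salesforce.com/services/data/v59.0/sobjects/Order",
    HttpMethod="POST",
)
```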

AWS Reference:

Amazon EventBridge API Destinations

A company uses an Amazon EC2 Auto Scaling group to host an API. The EC2 instances are in a target group that is associated with an Application Load Balancer (ALB). The company stores data in an Amazon Aurora PostgreSQL database.

The API has a weekly maintenance window. The company must ensure that the API returns a static maintenance response during the weekly maintenance window.

Which solution will meet this requirement with the LEAST operational overhead?

A. Create a table in Aurora PostgreSQL that has fields to contain keys and values. Create a key for a maintenance flag. Set the flag when the maintenance window starts. Configure the API to query the table for the maintenance flag and to return a maintenance response if the flag is set. Reset the flag when the maintenance window is finished.

B. Create an Amazon Simple Queue Service (Amazon SQS) queue. Subscribe the EC2 instances to the queue. Publish a message to the queue when the maintenance window starts. Configure the API to return a maintenance message if the instances receive a maintenance start message from the queue. Publish another message to the queue when the maintenance window is finished to restore normal operation.

C. Create a listener rule on the ALB to return a maintenance response when the path on a request matches a wildcard. Set the rule priority to one. Perform the maintenance. When the maintenance window is finished, delete the listener rule.

D. Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the EC2 instances to the topic. Publish a message to the topic when the maintenance window starts. Configure the API to return a maintenance response if the instances receive the maintenance start message from the topic. Publish another message to the topic when the maintenance window finishes to restore normal operation.

Suggested answer: C

Explanation:

Creating a listener rule on the Application Load Balancer (ALB) is the most straightforward solution with the least operational overhead. A priority-1 rule with a wildcard path condition matches every incoming request and uses a fixed-response action to return a static maintenance response, and the rule can simply be deleted once maintenance is complete.

Option A (Aurora table flag): This adds unnecessary complexity for a temporary maintenance response.

Option B and D (SQS or SNS): These options introduce more components than needed for a simple maintenance message.
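A minimal boto3 sketch of the maintenance rule (the listener ARN is hypothetical):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Priority-1 rule with a wildcard path condition: every request gets a
# static maintenance response instead of being forwarded to the targets.
rule = elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/api-alb/50dc6c495c0c9188/f2f7dc8efc522ab2",
    Priority=1,
    Conditions=[{"Field": "path-pattern", "Values": ["*"]}],
    Actions=[{
        "Type": "fixed-response",
        "FixedResponseConfig": {
            "StatusCode": "503",
            "ContentType": "application/json",
            "MessageBody": '{"message": "The API is under maintenance."}',
        },
    }],
)

# When the maintenance window ends, delete the rule to restore routing:
# elbv2.delete_rule(RuleArn=rule["Rules"][0]["RuleArn"])
```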

AWS Reference:

ALB Listener Rules

An online education platform experiences lag and buffering during peak usage hours, when thousands of students access video lessons concurrently. A solutions architect needs to improve the performance of the education platform.

The platform needs to handle unpredictable traffic surges without losing responsiveness. The platform must provide smooth video playback performance at all times. The platform must create multiple copies of each video lesson and store the copies in various bitrates to serve users who have different internet speeds. The smallest video size is 7 GB.

Which solution will meet these requirements MOST cost-effectively?

A. Use Amazon ElastiCache to cache videos in all the required bitrates. Use AWS Lambda functions to process the videos and to convert the videos to the required bitrates.

B. Create an Auto Scaling group that includes Amazon EC2 instances that are sized to meet peak loads. Use the Auto Scaling group to serve videos. Use the Auto Scaling group to convert the videos to the required bitrates.

C. Store a copy of every video in every required bitrate in an Amazon S3 bucket. Use a single Amazon EC2 instance to serve the videos.

D. Use Amazon Kinesis Video Streams to store and serve the videos. Use AWS Lambda functions to process the videos and to convert the videos to the required bitrates.

Suggested answer: C

Explanation:

The most cost-effective way to serve video content at different bitrates is to store a copy of each video in every required bitrate in Amazon S3, which provides scalable, durable, low-cost storage for large media files. A single Amazon EC2 instance then serves the videos, keeping compute costs to a minimum while S3 keeps storage costs low.

Option A (ElastiCache): Caching large video files in memory would be prohibitively expensive and unnecessary.

Option B (Auto Scaling group): Using Auto Scaling groups to serve video is less cost-effective compared to leveraging S3 for static storage.

Option D (Kinesis Video Streams): Kinesis Video Streams is designed for real-time video streaming and is not suitable for storing and serving pre-recorded videos.
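As a sketch, the bitrate renditions can be uploaded to S3 with boto3's transfer manager, which switches to multipart uploads for large files automatically (the bucket name, key layout, and bitrate labels are hypothetical):

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# With the smallest rendition at 7 GB, every upload exceeds the multipart
# threshold, so parts are uploaded in parallel and retried individually.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # 64 MiB
    multipart_chunksize=64 * 1024 * 1024,
)

for bitrate in ("480p", "720p", "1080p"):
    s3.upload_file(
        Filename=f"lesson-042-{bitrate}.mp4",
        Bucket="education-video-renditions",
        Key=f"lessons/042/{bitrate}.mp4",
        Config=config,
    )
```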

AWS Reference:

Amazon S3 for Media Storage

A company has Amazon EC2 instances in multiple AWS Regions. The instances all store and retrieve confidential data from the same Amazon S3 bucket. The company wants to improve the security of its current architecture.

The company wants to ensure that only the Amazon EC2 instances within its VPC can access the S3 bucket. The company must block all other access to the bucket.

Which solution will meet this requirement?

A. Use IAM policies to restrict access to the S3 bucket.

B. Use server-side encryption (SSE) to encrypt data in the S3 bucket at rest. Store the encryption key on the EC2 instances.

C. Create a VPC endpoint for Amazon S3. Configure an S3 bucket policy to allow connections only from the endpoint.

D. Use AWS Key Management Service (AWS KMS) with customer-managed keys to encrypt the data before sending the data to the S3 bucket.

Suggested answer: C

Explanation:

Creating a gateway VPC endpoint for Amazon S3 and configuring a bucket policy that allows connections only from that endpoint (by using the aws:SourceVpce condition key) ensures that only EC2 instances within the VPC can access the S3 bucket. This restricts access at the network level, with no public internet path to the data.

Option A (IAM policies): IAM identity policies control which principals can call S3; on their own they do not pin access to the VPC network path.

Options B and D (Encryption): Encryption secures data at rest but does not restrict network access to the bucket.
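A minimal sketch of the bucket policy, applied with boto3 (the bucket name and endpoint ID are hypothetical; add one endpoint ID per VPC that needs access):

```python
import json

import boto3

s3 = boto3.client("s3")

# Deny every request that does not arrive through the gateway endpoint,
# which blocks all access from outside the VPC.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyRequestsOutsideVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::confidential-data-bucket",
            "arn:aws:s3:::confidential-data-bucket/*",
        ],
        "Condition": {
            "StringNotEquals": {"aws:SourceVpce": "vpce-0abc123example"}
        },
    }],
}
s3.put_bucket_policy(Bucket="confidential-data-bucket", Policy=json.dumps(policy))
```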

AWS Reference:

Amazon S3 VPC Endpoints

A company recently launched a new product that is highly available in one AWS Region. The product consists of an application that runs on Amazon Elastic Container Service (Amazon ECS), a public Application Load Balancer (ALB), and an Amazon DynamoDB table. The company wants a solution that will make the application highly available across Regions.

Which combination of steps will meet these requirements? (Select THREE.)

A. In a different Region, deploy the application to a new ECS cluster that is accessible through a new ALB.

B. Create an Amazon Route 53 failover record.

C. Modify the DynamoDB table to create a DynamoDB global table.

D. In the same Region, deploy the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that is accessible through a new ALB.

E. Modify the DynamoDB table to create global secondary indexes (GSIs).

F. Create an AWS PrivateLink endpoint for the application.

Suggested answer: A, B, C

Explanation:

To make the application highly available across regions:

Deploy the application in a different region using a new ECS cluster and ALB to ensure regional redundancy.

Use Route 53 failover routing to automatically direct traffic to the healthy region in case of failure.

Use DynamoDB Global Tables to ensure the database is replicated and available across multiple regions, supporting read and write operations in each region.

Option D (EKS cluster in the same region): This does not provide regional redundancy.

Option E (Global Secondary Indexes): GSIs improve query performance but do not provide multi-region availability.

Option F (PrivateLink): PrivateLink is for secure communication, not for cross-region high availability.
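A minimal boto3 sketch of the Route 53 failover pair (the hosted zone ID, domain, and ALB DNS names are hypothetical; the AliasTarget zone IDs are the ALBs' canonical hosted zone IDs for their Regions):

```python
import boto3

r53 = boto3.client("route53")

def failover_record(role, alb_dns, alb_zone_id):
    """Build a PRIMARY or SECONDARY failover alias record for an ALB."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": f"app-{role.lower()}",
            "Failover": role,
            "AliasTarget": {
                "HostedZoneId": alb_zone_id,
                "DNSName": alb_dns,
                "EvaluateTargetHealth": True,  # fail over when the ALB is unhealthy
            },
        },
    }

r53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE12345",
    ChangeBatch={"Changes": [
        failover_record("PRIMARY", "primary-alb.us-east-1.elb.amazonaws.com", "Z35SXDOTRQ7X7K"),
        failover_record("SECONDARY", "standby-alb.us-west-2.elb.amazonaws.com", "Z1H1FL5HABSF5"),
    ]},
)
```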

AWS Reference:

DynamoDB Global Tables

Amazon ECS with ALB

A company wants to restrict access to the content of its web application. The company needs to protect the content by using authorization techniques that are available on AWS. The company also wants to implement a serverless architecture for authorization and authentication that has low login latency.

The solution must integrate with the web application and serve web content globally. The application currently has a small user base, but the company expects the application's user base to increase.

Which solution will meet these requirements?

A. Configure Amazon Cognito for authentication. Implement Lambda@Edge for authorization. Configure Amazon CloudFront to serve the web application globally.

B. Configure AWS Directory Service for Microsoft Active Directory for authentication. Implement AWS Lambda for authorization. Use an Application Load Balancer to serve the web application globally.

C. Configure Amazon Cognito for authentication. Implement AWS Lambda for authorization. Use Amazon S3 Transfer Acceleration to serve the web application globally.

D. Configure AWS Directory Service for Microsoft Active Directory for authentication. Implement Lambda@Edge for authorization. Use AWS Elastic Beanstalk to serve the web application globally.

Suggested answer: A

Explanation:

Amazon Cognito provides scalable, serverless authentication, and Lambda@Edge is used for authorization, providing low-latency access control at the edge. Amazon CloudFront serves the web application globally with reduced latency and ensures secure access for users around the world. This solution minimizes operational overhead while providing scalability and security.

Option B (Directory Service): Directory Service is more suitable for enterprise use cases involving Active Directory, not for web-based applications.

Option C (S3 Transfer Acceleration): S3 Transfer Acceleration helps with file transfers but does not provide authorization features.

Option D (Elastic Beanstalk): Elastic Beanstalk adds unnecessary overhead when CloudFront can handle global delivery efficiently.
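A minimal sketch of a Lambda@Edge viewer-request handler (a real deployment must verify the Cognito JWT signature against the user pool's JWKS; the cookie name and sign-in URL are hypothetical):

```python
def lambda_handler(event, context):
    """Gate CloudFront requests on the presence of a Cognito token cookie."""
    request = event["Records"][0]["cf"]["request"]
    cookies = request["headers"].get("cookie", [])

    # Pass the request through only if an access-token cookie is present.
    if any("CognitoAccessToken=" in cookie["value"] for cookie in cookies):
        return request

    # Otherwise redirect the viewer to the Cognito hosted sign-in page.
    return {
        "status": "302",
        "statusDescription": "Found",
        "headers": {
            "location": [
                {"key": "Location", "value": "https://auth.example.com/login"}
            ]
        },
    }
```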

AWS Reference:

Amazon Cognito

Lambda@Edge

A company runs a payment processing system in the AWS Cloud. Sometimes when a payment fails because of insufficient funds or technical issues, users attempt to resubmit the payment. Sometimes payment resubmissions generate multiple payment messages for the same payment ID.

A solutions architect needs to ensure that the payment processing system receives payment messages that have the same payment ID sequentially, according to when the messages were generated. The processing system must process the messages in the order in which the messages are received. The solution must retain all payment messages for 10 days for analytics.

Which solutions will meet these requirements? (Select TWO.)

A. Write the payment messages to an Amazon DynamoDB table that uses the payment ID as the partition key.

B. Write the payment messages to an Amazon Kinesis data stream that uses the payment ID as the partition key.

C. Write the payment messages to an Amazon ElastiCache for Memcached cluster that uses the payment ID as the key.

D. Write the payment messages to an Amazon Simple Queue Service (Amazon SQS) queue. Set the message attribute to use the payment ID.

E. Write the payment messages to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Set the message group to use the payment ID.

Suggested answer: B, E

Explanation:

Both Amazon Kinesis Data Streams and SQS FIFO queues preserve ordering for messages that share a key: records with the same partition key go to the same Kinesis shard in order, and messages with the same message group ID are delivered in order from a FIFO queue. Both services can also retain messages long enough to satisfy the 10-day analytics requirement: Kinesis retention is configurable up to 365 days, and SQS retention can be set up to 14 days.

Option A (DynamoDB): DynamoDB does not guarantee message ordering for real-time processing.

Option C (ElastiCache): ElastiCache is for caching, not suitable for sequential message processing.

Option D (Standard SQS queue): A standard SQS queue does not guarantee ordering of messages.
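Minimal producer sketches for both options (the stream name, queue URL, and payment payload are hypothetical):

```python
import json

import boto3

payment = {"payment_id": "pay-1234", "amount": "49.99"}

# Kinesis: records that share a partition key land on the same shard,
# which preserves their relative order.
kinesis = boto3.client("kinesis")
kinesis.put_record(
    StreamName="payment-messages",
    PartitionKey=payment["payment_id"],
    Data=json.dumps(payment).encode(),
)

# SQS FIFO: messages that share a message group ID are delivered in
# order; FIFO queue names must end in ".fifo".
sqs = boto3.client("sqs")
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/payments.fifo",
    MessageBody=json.dumps(payment),
    MessageGroupId=payment["payment_id"],
    MessageDeduplicationId="pay-1234-attempt-1",  # or enable content-based deduplication
)
```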

AWS Reference:

Amazon Kinesis

Amazon SQS FIFO Queues
