Amazon SAA-C03 Practice Test - Questions Answers, Page 75

A company uses AWS to host its public ecommerce website. The website uses an AWS Global Accelerator accelerator for traffic from the internet. The Global Accelerator accelerator forwards the traffic to an Application Load Balancer (ALB) that is the entry point for an Auto Scaling group.

The company recently identified a DDoS attack on the website. The company needs a solution to mitigate future attacks.

Which solution will meet these requirements with the LEAST implementation effort?

A. Configure an AWS WAF web ACL for the Global Accelerator accelerator to block traffic by using rate-based rules.
B. Configure an AWS Lambda function to read the ALB metrics to block attacks by updating a VPC network ACL.
C. Configure an AWS WAF web ACL on the ALB to block traffic by using rate-based rules.
D. Configure an Amazon CloudFront distribution in front of the Global Accelerator accelerator.
Suggested answer: C

Explanation:

Understanding the Requirement: The company needs to mitigate DDoS attacks on its website, which uses AWS Global Accelerator to route traffic to an Application Load Balancer (ALB).

Analysis of Options:

AWS WAF on Global Accelerator: Not viable. AWS WAF web ACLs cannot be associated with a Global Accelerator accelerator; they attach to resources such as CloudFront distributions, Application Load Balancers, and Amazon API Gateway APIs.

Lambda Function and VPC Network ACL: Requires custom implementation and ongoing management, increasing complexity and effort.

AWS WAF on ALB: A web ACL with a rate-based rule attached to the existing ALB blocks IP addresses that exceed a request threshold, requires no change to the traffic flow, and involves the least implementation effort.

CloudFront Distribution in front of Global Accelerator: Adds unnecessary complexity and changes the current traffic flow setup.

Best Solution:

AWS WAF on the ALB: Attaching a web ACL with rate-based rules to the ALB provides the required DDoS mitigation with the least implementation effort while keeping the existing Global Accelerator architecture intact.

AWS WAF

Using AWS WAF with AWS Global Accelerator
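For illustration, the rate-based rule can be provisioned with a few boto3 calls. This is a minimal sketch, not the exam's reference implementation; the web ACL name, the 2,000-request limit, and the ALB ARN are placeholder assumptions.

import boto3

# Regional scope is required for ALB associations; names, the
# 2,000-requests-per-5-minutes limit, and the ALB ARN are placeholders.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="ddos-mitigation-acl",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "rate-limit-per-ip",
            "Priority": 0,
            # Block any single IP that exceeds the limit in a 5-minute window.
            "Statement": {
                "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "RateLimitPerIP",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "DdosMitigationAcl",
    },
)

# Attach the web ACL to the existing ALB (placeholder ARN).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "loadbalancer/app/my-alb/50dc6c495c0c9188",
)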

A company runs an application on Amazon EC2 instances in a private subnet. The application needs to store and retrieve data in Amazon S3 buckets. According to regulatory requirements, the data must not travel across the public internet.

What should a solutions architect do to meet these requirements MOST cost-effectively?

A. Deploy a NAT gateway to access the S3 buckets.
B. Deploy AWS Storage Gateway to access the S3 buckets.
C. Deploy an S3 interface endpoint to access the S3 buckets.
D. Deploy an S3 gateway endpoint to access the S3 buckets.
Suggested answer: D

Explanation:

Understanding the Requirement: The application running in a private subnet needs to store and retrieve data from S3 without data traveling over the public internet.

Analysis of Options:

NAT Gateway: Allows private subnets to access the internet but incurs additional costs and still routes traffic through the public internet.

AWS Storage Gateway: Provides hybrid cloud storage solutions but is not the most cost-effective for direct S3 access from within the VPC.

S3 Interface Endpoint: Provides private access to S3 but is generally used for specific use cases where more granular control is required, which might be overkill and more expensive.

S3 Gateway Endpoint: Provides private, cost-effective access to S3 from within the VPC without routing traffic through the public internet.

Best Solution:

S3 Gateway Endpoint: This option meets the requirements for secure, private access to S3 from a private subnet most cost-effectively.

Amazon VPC Endpoints

Gateway Endpoints
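A gateway endpoint is created with a single API call that adds a route for the S3 prefix list to the chosen route tables. A minimal boto3 sketch follows; the VPC, route table, and Region values are placeholder assumptions. Gateway endpoints for S3 incur no hourly or data-processing charges, which is why this is the most cost-effective option.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The gateway endpoint adds an S3 route to the private subnet's route
# table, so instances reach S3 without traversing the internet.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",                     # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0abc1234"],           # private subnet's route table
)
print(response["VpcEndpoint"]["VpcEndpointId"])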

A development team uses multiple AWS accounts for its development, staging, and production environments. Team members have been launching large Amazon EC2 instances that are underutilized. A solutions architect must prevent large instances from being launched in all accounts.

How can the solutions architect meet this requirement with the LEAST operational overhead?

A. Update the IAM policies to deny the launch of large EC2 instances. Apply the policies to all users.
B. Define a resource in AWS Resource Access Manager that prevents the launch of large EC2 instances.
C. Create an IAM role in each account that denies the launch of large EC2 instances. Grant the developers IAM group access to the role.
D. Create an organization in AWS Organizations in the management account with the default policy. Create a service control policy (SCP) that denies the launch of large EC2 instances, and apply it to the AWS accounts.
Suggested answer: D

Explanation:

Understanding the Requirement: The development team needs to prevent the launch of large EC2 instances across multiple AWS accounts used for development, staging, and production environments.

Analysis of Options:

IAM Policies: Would need to be applied individually to each user in every account, leading to significant operational overhead.

AWS Resource Access Manager: Used for sharing resources, not for enforcing restrictions on resource creation.

IAM Role in Each Account: Requires creating and managing roles in each account, leading to higher operational overhead compared to using a centralized approach.

Service Control Policy (SCP) with AWS Organizations: Provides a centralized way to enforce policies across multiple AWS accounts, ensuring that large EC2 instances cannot be launched in any account.

Best Solution:

Service Control Policy (SCP) with AWS Organizations: This solution offers the least operational overhead by allowing centralized management and enforcement of policies across all accounts, effectively preventing the launch of large EC2 instances.

AWS Organizations and SCPs
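As a rough sketch, an SCP of this kind typically denies ec2:RunInstances unless the requested type is on an allowlist. The boto3 example below creates and attaches such a policy from the management account; the allowlisted instance types and the root ID are placeholder assumptions.

import json
import boto3

org = boto3.client("organizations")

# Deny ec2:RunInstances for any type outside a small-instance allowlist;
# the allowlist itself is a placeholder assumption.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringNotEquals": {
                    "ec2:InstanceType": ["t3.micro", "t3.small", "t3.medium"]
                }
            },
        }
    ],
}

policy = org.create_policy(
    Name="DenyLargeEC2Instances",
    Description="Blocks launches of instance types outside the allowlist",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attaching at the organization root applies the SCP to every account;
# "r-example" is a placeholder root ID.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-example",
)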

A company is developing an application to support customer demands. The company wants to deploy the application on multiple Amazon EC2 Nitro-based instances within the same Availability Zone. The company also wants to give the application the ability to write to multiple block storage volumes in multiple EC2 Nitro-based instances simultaneously to achieve higher application availability.

Which solution will meet these requirements?

A. Use General Purpose SSD (gp3) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-Attach.
B. Use Throughput Optimized HDD (st1) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-Attach.
C. Use Provisioned IOPS SSD (io2) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-Attach.
D. Use General Purpose SSD (gp2) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-Attach.
Suggested answer: C

Explanation:

Understanding the Requirement: The application needs to write to multiple block storage volumes in multiple EC2 Nitro-based instances simultaneously to achieve higher availability.

Analysis of Options:

General Purpose SSD (gp3) with Multi-Attach: Not possible. EBS Multi-Attach is supported only on Provisioned IOPS SSD (io1 and io2) volumes.

Throughput Optimized HDD (st1) with Multi-Attach: Not possible. st1 volumes do not support Multi-Attach and are unsuitable for low-latency workloads.

Provisioned IOPS SSD (io2) with Multi-Attach: Supports Multi-Attach on Nitro-based instances within the same Availability Zone, providing the high performance and durability needed for simultaneous writes.

General Purpose SSD (gp2) with Multi-Attach: Not possible. gp2 volumes do not support Multi-Attach.

Best Solution:

Provisioned IOPS SSD (io2) with Multi-Attach: This solution ensures the highest performance and availability for the application by allowing multiple EC2 instances to attach to and write to the same EBS volume simultaneously.

Amazon EBS Multi-Attach

Provisioned IOPS SSD (io2)
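A minimal boto3 sketch of this setup appears below; the size, IOPS, Availability Zone, and instance IDs are placeholder assumptions. Note that Multi-Attach gives shared block access only; the application (or a cluster-aware file system) must coordinate writes to avoid corruption.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Multi-Attach is valid only for io1/io2 volumes, and every attached
# instance must be Nitro-based and in the volume's Availability Zone.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    VolumeType="io2",
    Size=100,                 # GiB, placeholder
    Iops=3000,                # placeholder provisioned IOPS
    MultiAttachEnabled=True,
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach the same volume to two Nitro-based instances (placeholder IDs).
for instance_id in ["i-0aaa1111bbb22222c", "i-0ddd3333eee44444f"]:
    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId=instance_id,
        Device="/dev/sdf",
    )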

An online photo-sharing company stores its photos in an Amazon S3 bucket that exists in the us-west-1 Region. The company needs to store a copy of all new photos in the us-east-1 Region.

Which solution will meet this requirement with the LEAST operational effort?

A. Create a second S3 bucket in us-east-1. Use S3 Cross-Region Replication to copy photos from the existing S3 bucket to the second S3 bucket.
B. Create a cross-origin resource sharing (CORS) configuration of the existing S3 bucket. Specify us-east-1 in the CORS rule's AllowedOrigin element.
C. Create a second S3 bucket in us-east-1 across multiple Availability Zones. Create an S3 Lifecycle rule to save photos into the second S3 bucket.
D. Create a second S3 bucket in us-east-1. Configure S3 event notifications on object creation and update events to invoke an AWS Lambda function to copy photos from the existing S3 bucket to the second S3 bucket.
Suggested answer: A

Explanation:

Understanding the Requirement: The company needs to store a copy of all new photos in the us-east-1 Region from an S3 bucket in the us-west-1 Region.

Analysis of Options:

Cross-Region Replication: Automatically replicates objects across regions with minimal operational effort once configured.

CORS Configuration: Used for allowing resources on a web page to be requested from another domain, not for replication.

S3 Lifecycle Rule: Manages the transition of objects between storage classes within the same bucket, not for cross-region replication.

S3 Event Notifications with Lambda: Requires additional configuration and management compared to Cross-Region Replication.

Best Solution:

S3 Cross-Region Replication: This solution provides an automated and efficient way to replicate objects to another region, meeting the requirement with the least operational effort.

Amazon S3 Cross-Region Replication
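As a sketch, replication is a one-time bucket configuration: versioning must be enabled on both buckets, and S3 assumes an IAM role to copy new objects. The bucket names and role ARN below are placeholder assumptions.

import boto3

s3 = boto3.client("s3")

# Versioning is a prerequisite on both source and destination buckets
# (clients may need to target each bucket's Region).
for bucket in ["photos-us-west-1", "photos-us-east-1"]:
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

s3.put_bucket_replication(
    Bucket="photos-us-west-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-new-photos",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # empty filter replicates all new objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::photos-us-east-1"},
            }
        ],
    },
)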

A company wants to build a logging solution for its multiple AWS accounts. The company currently stores the logs from all accounts in a centralized account. The company has created an Amazon S3 bucket in the centralized account to store the VPC flow logs and AWS CloudTrail logs. All logs must be highly available for 30 days for frequent analysis, retained for an additional 60 days for backup purposes, and deleted 90 days after creation.

Which solution will meet these requirements MOST cost-effectively?

A. Transition objects to the S3 Standard storage class 30 days after creation. Write an expiration action that directs Amazon S3 to delete objects after 90 days.
B. Transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class 30 days after creation. Move all objects to the S3 Glacier Flexible Retrieval storage class after 90 days. Write an expiration action that directs Amazon S3 to delete objects after 90 days.
C. Transition objects to the S3 Glacier Flexible Retrieval storage class 30 days after creation. Write an expiration action that directs Amazon S3 to delete objects after 90 days.
D. Transition objects to the S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class 30 days after creation. Move all objects to the S3 Glacier Flexible Retrieval storage class after 90 days. Write an expiration action that directs Amazon S3 to delete objects after 90 days.
Suggested answer: C

Explanation:

Understanding the Requirement: The company needs logs to be highly available for 30 days for frequent analysis, retained for an additional 60 days for backup, and deleted after 90 days.

Analysis of Options:

Transition to S3 Standard after 30 days: Objects already start in S3 Standard, so this transition accomplishes nothing and keeps the logs in the most expensive storage class for the backup period.

Transition to S3 Standard-IA, then Glacier Flexible Retrieval after 90 days: Moving objects to Glacier at the same time they expire is pointless, and Standard-IA costs more than Glacier Flexible Retrieval for data that is retained only as backup.

Transition to Glacier Flexible Retrieval after 30 days: Logs remain in S3 Standard (highly available and suitable for frequent analysis) for the first 30 days, then move to low-cost archive storage for the 60-day backup period before deletion at 90 days.

Transition to S3 One Zone-IA after 30 days, then Glacier Flexible Retrieval: One Zone-IA costs more than Glacier Flexible Retrieval for backup-only data, and the transition to Glacier at day 90 conflicts with deleting the objects at day 90.

Best Solution:

Transition to Glacier Flexible Retrieval after 30 days, with expiration at 90 days: This lifecycle keeps logs highly available in S3 Standard for the first 30 days, stores the 60-day backup copy in the lowest-cost archive class, and deletes the logs on schedule.

Amazon S3 Storage Classes

Managing your storage lifecycle
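The whole lifecycle fits in one rule, sketched below with boto3; the bucket name is a placeholder, and "GLACIER" is the API name for the S3 Glacier Flexible Retrieval storage class.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="central-logs-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-logs",
                "Status": "Enabled",
                "Filter": {},  # apply to every object in the bucket
                # Days 0-29: S3 Standard (highly available, frequent access).
                # Days 30-89: Glacier Flexible Retrieval (backup only).
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                # Day 90: delete.
                "Expiration": {"Days": 90},
            }
        ]
    },
)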

A company runs an application in a VPC with public and private subnets. The VPC extends across multiple Availability Zones. The application runs on Amazon EC2 instances in private subnets. The application uses an Amazon Simple Queue Service (Amazon SQS) queue.

A solutions architect needs to design a secure solution to establish a connection between the EC2 instances and the SQS queue.

Which solution will meet these requirements?

A. Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the private subnets. Add to the endpoint a security group that has an inbound access rule that allows traffic from the EC2 instances that are in the private subnets.
B. Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the public subnets. Attach to the interface endpoint a VPC endpoint policy that allows access from the EC2 instances that are in the private subnets.
C. Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the public subnets. Attach an Amazon SQS access policy to the interface VPC endpoint that allows requests from only a specified VPC endpoint.
D. Implement a gateway endpoint for Amazon SQS. Add a NAT gateway to the private subnets. Attach an IAM role to the EC2 instances that allows access to the SQS queue.
Suggested answer: A

Explanation:

Understanding the Requirement: The application running on EC2 instances in private subnets needs to securely connect to an Amazon SQS queue without exposing traffic to the public internet.

Analysis of Options:

Interface VPC Endpoint in Private Subnets: Allows private, secure connectivity to SQS without using the public internet. Configuring security groups ensures controlled access from EC2 instances.

Interface VPC Endpoint in Public Subnets: Not necessary for private EC2 instances and exposes additional security risks.

Gateway Endpoint: Gateway endpoints are not supported for SQS; they are used for services like S3 and DynamoDB.

NAT Gateway with IAM Role: Increases costs and complexity compared to using an interface VPC endpoint directly.

Best Solution:

Interface VPC Endpoint in Private Subnets: This option ensures secure, private connectivity to SQS, meeting the requirement with minimal complexity and optimal security.

VPC Endpoints

Amazon SQS and VPC Endpoints
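As a sketch, the interface endpoint places an elastic network interface in each private subnet, and private DNS makes the standard SQS endpoint name resolve to those private IPs. All IDs below are placeholders, and the security group is assumed to allow inbound HTTPS (port 443) from the EC2 instances.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234",                       # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.sqs",
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],  # private subnets
    SecurityGroupIds=["sg-0ccc3333"],           # allows 443 from the instances
    PrivateDnsEnabled=True,
)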

A company deploys Amazon EC2 instances that run in a VPC. The EC2 instances load source data into Amazon S3 buckets so that the data can be processed in the future. According to compliance laws, the data must not be transmitted over the public internet. Servers in the company's on-premises data center will consume the output from an application that runs on the EC2 instances.

Which solution will meet these requirements?

A. Deploy an interface VPC endpoint for Amazon EC2. Create an AWS Site-to-Site VPN connection between the company and the VPC.
B. Deploy a gateway VPC endpoint for Amazon S3. Set up an AWS Direct Connect connection between the on-premises network and the VPC.
C. Set up an AWS Transit Gateway connection from the VPC to the S3 buckets. Create an AWS Site-to-Site VPN connection between the company and the VPC.
D. Set up proxy EC2 instances that have routes to NAT gateways. Configure the proxy EC2 instances to fetch S3 data and feed the application instances.
Suggested answer: B

Explanation:

Understanding the Requirement: EC2 instances need to upload data to S3 without using the public internet, and on-premises servers need to consume this data.

Analysis of Options:

Interface VPC Endpoint for EC2: Not relevant for accessing S3.

Gateway VPC Endpoint for S3 and Direct Connect: Provides private connectivity from EC2 instances to S3 and from on-premises to AWS, ensuring compliance with the requirement to avoid public internet.

Transit Gateway and Site-to-Site VPN: AWS Transit Gateway interconnects VPCs and on-premises networks but does not provide a path to S3, and a Site-to-Site VPN, although encrypted, still traverses the public internet.

Proxy EC2 Instances with NAT Gateways: Increases complexity and costs compared to a direct connection using VPC endpoints and Direct Connect.

Best Solution:

Gateway VPC Endpoint for S3 and Direct Connect: This solution ensures secure, private data transfer both within AWS and between on-premises and AWS, meeting the compliance requirements effectively.

Amazon VPC Endpoints for S3

AWS Direct Connect
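The VPC side can be tightened with an endpoint policy so that only the intended S3 traffic can use the private path; the sketch below assumes a placeholder bucket name and IDs. The Direct Connect side is provisioned separately (a physical connection plus a virtual interface) and is not shown here.

import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Endpoint policy limiting the endpoint to the application's bucket
# (placeholder name); adjust actions to what the application needs.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::source-data-bucket/*",
        }
    ],
}

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0abc1234"],
    PolicyDocument=json.dumps(endpoint_policy),
)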

A company uses Amazon FSx for NetApp ONTAP in its primary AWS Region for CIFS and NFS file shares. Applications that run on Amazon EC2 instances access the file shares. The company needs a storage disaster recovery (DR) solution in a secondary Region. The data that is replicated in the secondary Region needs to be accessed by using the same protocols as the primary Region.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create an AWS Lambda function to copy the data to an Amazon S3 bucket. Replicate the S3 bucket to the secondary Region.
B. Create a backup of the FSx for ONTAP volumes by using AWS Backup. Copy the volumes to the secondary Region. Create a new FSx for ONTAP instance from the backup.
C. Create an FSx for ONTAP instance in the secondary Region. Use NetApp SnapMirror to replicate data from the primary Region to the secondary Region.
D. Create an Amazon Elastic File System (Amazon EFS) volume. Migrate the current data to the volume. Replicate the volume to the secondary Region.
Suggested answer: C

Explanation:

Understanding the Requirement: The company needs a disaster recovery solution for FSx for NetApp ONTAP in a secondary region, accessible using the same protocols (CIFS and NFS).

Analysis of Options:

Lambda Function and S3: Involves copying data to S3, which changes the access protocols and increases operational overhead.

AWS Backup: Suitable for backup and restore but not for real-time or near-real-time replication for disaster recovery.

FSx for ONTAP with SnapMirror: SnapMirror provides efficient replication between ONTAP instances, maintaining access protocols and requiring minimal operational overhead.

Amazon EFS: Does not support CIFS and requires migrating data, increasing complexity and changing access protocols.

Best Solution:

FSx for ONTAP with SnapMirror: This solution ensures seamless disaster recovery with the same access protocols and minimal operational overhead.

Amazon FSx for NetApp ONTAP

NetApp SnapMirror
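As a rough sketch, the secondary file system can be created with boto3; the capacity, throughput, and subnet IDs below are placeholder assumptions. The SnapMirror relationship itself is then configured through the ONTAP CLI or REST API (for example, snapmirror create with the source and destination paths), not through AWS APIs.

import boto3

# Secondary-Region client; all IDs and sizing values are placeholders.
fsx = boto3.client("fsx", region_name="us-east-1")

fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,  # GiB
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "ThroughputCapacity": 128,  # MBps
        "PreferredSubnetId": "subnet-0aaa1111",
    },
)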

A company's web application consists of multiple Amazon EC2 instances that run behind an Application Load Balancer in a VPC. An Amazon RDS for MySQL DB instance contains the data. The company needs the ability to automatically detect and respond to suspicious or unexpected behavior in its AWS environment. The company already has added AWS WAF to its architecture.

What should a solutions architect do next to protect against threats?

A. Use Amazon GuardDuty to perform threat detection. Configure Amazon EventBridge to filter for GuardDuty findings and to invoke an AWS Lambda function to adjust the AWS WAF rules.
B. Use AWS Firewall Manager to perform threat detection. Configure Amazon EventBridge to filter for Firewall Manager findings and to invoke an AWS Lambda function to adjust the AWS WAF web ACL.
C. Use Amazon Inspector to perform threat detection and to update the AWS WAF rules. Create a VPC network ACL to limit access to the web application.
D. Use Amazon Macie to perform threat detection and to update the AWS WAF rules. Create a VPC network ACL to limit access to the web application.
Suggested answer: A

Explanation:

Understanding the Requirement: The company needs to automatically detect and respond to suspicious or unexpected behavior in its AWS environment, beyond the existing AWS WAF setup.

Analysis of Options:

Amazon GuardDuty: Provides continuous monitoring and threat detection across AWS accounts and resources, including integration with AWS WAF for automated response.

AWS Firewall Manager: Manages firewall rules across multiple accounts but is more focused on central management than threat detection.

Amazon Inspector: Focuses on security assessments and vulnerability management rather than real-time threat detection.

Amazon Macie: Primarily used for data security and privacy, not comprehensive threat detection.

Best Solution:

Amazon GuardDuty with EventBridge and Lambda: This combination ensures continuous threat detection and automated response by updating AWS WAF rules based on GuardDuty findings.

Amazon GuardDuty

Amazon EventBridge

AWS Lambda
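A minimal sketch of the Lambda side follows, assuming a pre-created WAF IP set that a block rule in the web ACL references. The IP set identifiers are placeholders, and the finding field path matches GuardDuty network-connection findings but should be validated against real events.

import boto3

wafv2 = boto3.client("wafv2")

# Placeholder identifiers for a pre-created IP set referenced by a
# block rule in the web ACL attached to the ALB.
IP_SET_NAME = "blocked-ips"
IP_SET_ID = "a1b2c3d4-example"
SCOPE = "REGIONAL"


def handler(event, context):
    # EventBridge delivers the GuardDuty finding in event["detail"].
    finding = event["detail"]
    remote_ip = (
        finding["service"]["action"]["networkConnectionAction"]
        ["remoteIpDetails"]["ipAddressV4"]
    )

    # WAF updates are optimistic-locked: read the current addresses and
    # lock token, then write back the extended list.
    current = wafv2.get_ip_set(Name=IP_SET_NAME, Scope=SCOPE, Id=IP_SET_ID)
    addresses = set(current["IPSet"]["Addresses"])
    addresses.add(f"{remote_ip}/32")

    wafv2.update_ip_set(
        Name=IP_SET_NAME,
        Scope=SCOPE,
        Id=IP_SET_ID,
        Addresses=sorted(addresses),
        LockToken=current["LockToken"],
    )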
