Amazon SAA-C03 Practice Test - Questions Answers, Page 75
Question 741

A company uses AWS to host its public ecommerce website. The website uses an AWS Global Accelerator accelerator for traffic from the internet. The Global Accelerator accelerator forwards the traffic to an Application Load Balancer (ALB) that is the entry point for an Auto Scaling group.
The company recently identified a DDoS attack on the website. The company needs a solution to mitigate future attacks.
Which solution will meet these requirements with the LEAST implementation effort?
Explanation:
Understanding the Requirement: The company needs to mitigate DDoS attacks on its website, which uses AWS Global Accelerator to route traffic to an Application Load Balancer (ALB).
Analysis of Options:
AWS WAF on Global Accelerator: Not actually supported. AWS WAF web ACLs can be associated with CloudFront distributions, ALBs, Amazon API Gateway REST APIs, and a few other resource types, but not with Global Accelerator accelerators.
Lambda Function and VPC Network ACL: Requires custom implementation and ongoing management, increasing complexity and effort.
AWS WAF on ALB: Attaching a web ACL with a rate-based rule to the existing ALB blocks request floods from individual IP addresses and requires only a small configuration change to the current architecture.
CloudFront Distribution in front of Global Accelerator: Adds unnecessary complexity and changes the current traffic flow setup.
Best Solution:
AWS WAF on ALB: Associating a web ACL that contains a rate-based rule with the ALB provides the required DDoS mitigation with the least implementation effort while keeping the existing Global Accelerator and Auto Scaling architecture unchanged.
AWS WAF
Rate-based rule statement
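For illustration, a minimal boto3 sketch of this setup follows. The web ACL name, the ALB ARN, and the limit of 2,000 requests per 5 minutes are placeholder assumptions, not values from the question.

```python
import boto3

# WAFv2 web ACLs for ALBs use the REGIONAL scope and must be created
# in the same Region as the load balancer.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="ecommerce-ddos-acl",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "rate-limit-per-ip",
            "Priority": 0,
            # Block any source IP that exceeds 2,000 requests in a
            # rolling 5-minute window (illustrative limit).
            "Statement": {
                "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "RateLimitPerIp",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "EcommerceDdosAcl",
    },
)

# Placeholder ALB ARN; replace with the real load balancer ARN.
alb_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123"
wafv2.associate_web_acl(WebACLArn=acl["Summary"]["ARN"], ResourceArn=alb_arn)
```

A rate-based rule counts requests per source IP over a rolling 5-minute window and blocks IPs that exceed the limit until their request rate drops back below it.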
Question 742

A company runs an application on Amazon EC2 instances in a private subnet. The application needs to store and retrieve data in Amazon S3 buckets. According to regulatory requirements, the data must not travel across the public internet.
What should a solutions architect do to meet these requirements MOST cost-effectively?
Explanation:
Understanding the Requirement: The application running in a private subnet needs to store and retrieve data from S3 without data traveling over the public internet.
Analysis of Options:
NAT Gateway: Allows private subnets to access the internet but incurs additional costs and still routes traffic through the public internet.
AWS Storage Gateway: Provides hybrid cloud storage solutions but is not the most cost-effective for direct S3 access from within the VPC.
S3 Interface Endpoint: Also provides private access to S3, but interface endpoints incur hourly and per-GB data processing charges, making them more expensive than a gateway endpoint for this use case.
S3 Gateway Endpoint: Provides private, cost-effective access to S3 from within the VPC without routing traffic through the public internet.
Best Solution:
S3 Gateway Endpoint: This option meets the requirements for secure, private access to S3 from a private subnet most cost-effectively.
Amazon VPC Endpoints
Gateway Endpoints
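A minimal boto3 sketch of creating the gateway endpoint, assuming placeholder VPC and route table IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A gateway endpoint works by adding a route for the S3 prefix list
# to the selected route tables, so S3 traffic stays on the AWS network.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",                      # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",  # S3 in the VPC's Region
    RouteTableIds=["rtb-0def5678"],            # private subnet's route table
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```

Unlike interface endpoints, gateway endpoints have no hourly or per-GB charge, which is what makes this the most cost-effective option.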
Question 743

A development team uses multiple AWS accounts for its development, staging, and production environments. Team members have been launching large Amazon EC2 instances that are underutilized. A solutions architect must prevent large instances from being launched in all accounts.
How can the solutions architect meet this requirement with the LEAST operational overhead?
Explanation:
Understanding the Requirement: The development team needs to prevent the launch of large EC2 instances across multiple AWS accounts used for development, staging, and production environments.
Analysis of Options:
IAM Policies: Would need to be applied individually to each user in every account, leading to significant operational overhead.
AWS Resource Access Manager: Used for sharing resources, not for enforcing restrictions on resource creation.
IAM Role in Each Account: Requires creating and managing roles in each account, leading to higher operational overhead compared to using a centralized approach.
Service Control Policy (SCP) with AWS Organizations: Provides a centralized way to enforce policies across multiple AWS accounts, ensuring that large EC2 instances cannot be launched in any account.
Best Solution:
Service Control Policy (SCP) with AWS Organizations: This solution offers the least operational overhead by allowing centralized management and enforcement of policies across all accounts, effectively preventing the launch of large EC2 instances.
AWS Organizations and SCPs
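A hedged sketch of such an SCP, created and attached with boto3; the allowed instance types and the root ID are illustrative assumptions:

```python
import json
import boto3

orgs = boto3.client("organizations")

# Deny ec2:RunInstances for any instance type outside an allowed list.
# The ec2:InstanceType condition key is evaluated at launch time.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringNotEquals": {
                    "ec2:InstanceType": ["t3.micro", "t3.small", "t3.medium"]
                }
            },
        }
    ],
}

policy = orgs.create_policy(
    Name="DenyLargeInstances",
    Description="Prevent launching large EC2 instance types",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attaching at the organization root applies the SCP to every account;
# "r-examplerootid" is a placeholder for the real root ID.
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",
)
```

Because the SCP is attached once at the organization level, no per-account or per-user configuration is needed, which is what keeps the operational overhead low.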
Question 744

A company is developing an application to support customer demands. The company wants to deploy the application on multiple Amazon EC2 Nitro-based instances within the same Availability Zone. The company also wants to give the application the ability to write to multiple block storage volumes in multiple EC2 Nitro-based instances simultaneously to achieve higher application availability.
Which solution will meet these requirements?
Explanation:
Understanding the Requirement: The application needs to write to multiple block storage volumes in multiple EC2 Nitro-based instances simultaneously to achieve higher availability.
Analysis of Options:
General Purpose SSD (gp3): Does not support Multi-Attach; Multi-Attach is available only on Provisioned IOPS SSD (io1 and io2) volumes.
Throughput Optimized HDD (st1): Does not support Multi-Attach and is not suitable for applications that require high performance and low latency.
Provisioned IOPS SSD (io2) with Multi-Attach: Supports attaching a single volume to multiple Nitro-based instances in the same Availability Zone, and provides the high performance and durability that simultaneous writes and high availability require.
General Purpose SSD (gp2): Like gp3, does not support Multi-Attach and offers less performance flexibility.
Best Solution:
Provisioned IOPS SSD (io2) with Multi-Attach: This solution ensures the highest performance and availability for the application by allowing multiple EC2 instances to attach to and write to the same EBS volume simultaneously.
Amazon EBS Multi-Attach
Provisioned IOPS SSD (io2)
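A minimal boto3 sketch, assuming placeholder instance IDs and illustrative size and IOPS figures:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Multi-Attach must be enabled at creation time and is valid only for
# io1/io2 volumes; all attached instances must be Nitro-based and in
# the same Availability Zone as the volume.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,          # GiB (illustrative)
    VolumeType="io2",
    Iops=3000,         # illustrative provisioned IOPS
    MultiAttachEnabled=True,
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Placeholder IDs for two Nitro-based instances in us-east-1a.
for instance_id in ["i-0aaa1111", "i-0bbb2222"]:
    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId=instance_id,
        Device="/dev/sdf",
    )
```

Note that Multi-Attach does not coordinate writes: the application must use a cluster-aware file system rather than ext4 or XFS to avoid corrupting data.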
Question 745

An online photo-sharing company stores its photos in an Amazon S3 bucket that exists in the us-west-1 Region. The company needs to store a copy of all new photos in the us-east-1 Region.
Which solution will meet this requirement with the LEAST operational effort?
Explanation:
Understanding the Requirement: The company needs to store a copy of all new photos in the us-east-1 Region from an S3 bucket in the us-west-1 Region.
Analysis of Options:
Cross-Region Replication: Automatically replicates new objects to the destination Region once configured; it requires versioning to be enabled on both the source and destination buckets.
CORS Configuration: Used for allowing resources on a web page to be requested from another domain, not for replication.
S3 Lifecycle Rule: Manages the transition of objects between storage classes within the same bucket, not for cross-region replication.
S3 Event Notifications with Lambda: Requires additional configuration and management compared to Cross-Region Replication.
Best Solution:
S3 Cross-Region Replication: This solution provides an automated and efficient way to replicate objects to another region, meeting the requirement with the least operational effort.
Amazon S3 Cross-Region Replication
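A minimal boto3 sketch of the replication configuration; the bucket names and the IAM role ARN are placeholders, and versioning must already be enabled on both buckets:

```python
import boto3

s3 = boto3.client("s3")

# The role must allow S3 to read from the source bucket and replicate
# objects into the destination bucket.
s3.put_bucket_replication(
    Bucket="photos-us-west-1",  # placeholder source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-new-photos",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # empty filter = replicate the whole bucket
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::photos-us-east-1"},
            }
        ],
    },
)
```

Once the rule is active, S3 replicates new objects automatically; existing objects are not copied unless S3 Batch Replication is run separately.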
Question 746

A company wants to build a logging solution for its multiple AWS accounts. The company currently stores the logs from all accounts in a centralized account. The company has created an Amazon S3 bucket in the centralized account to store the VPC flow logs and AWS CloudTrail logs. All logs must be highly available for 30 days for frequent analysis, retained for an additional 60 days for backup purposes, and deleted 90 days after creation.
Which solution will meet these requirements MOST cost-effectively?
Explanation:
Understanding the Requirement: The company needs logs to be highly available for 30 days for frequent analysis, retained for an additional 60 days for backup, and deleted after 90 days.
Analysis of Options:
Transition to S3 Standard after 30 days: The logs already reside in S3 Standard, so this changes nothing and pays the highest storage rate for the full 90 days.
Transition to S3 Standard-IA, then Glacier Flexible Retrieval after 90 days: Standard-IA costs more per GB than One Zone-IA, and a transition to Glacier at day 90 conflicts with deleting the objects 90 days after creation.
Transition to Glacier Flexible Retrieval after 30 days: Glacier Flexible Retrieval has a 90-day minimum storage duration, so objects deleted at day 90 (after only 60 days in the class) would incur early-deletion charges.
Transition to S3 One Zone-IA after 30 days: After the 30-day analysis window the logs serve only as backup copies, so single-AZ, infrequent-access storage is sufficient; its 30-day minimum storage duration is comfortably met before the day-90 deletion.
Best Solution:
Transition to S3 One Zone-IA after 30 days: This lifecycle policy keeps the logs highly available in S3 Standard for the first 30 days, stores the 60-day backup copies at the lowest cost, and deletes the objects 90 days after creation.
Amazon S3 Storage Classes
Managing your storage lifecycle
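A minimal boto3 sketch of the lifecycle policy; the bucket name is a placeholder:

```python
import boto3

s3 = boto3.client("s3")

# Logs stay in S3 Standard for the first 30 days (frequent analysis),
# move to One Zone-IA for the 60-day backup window, and are deleted
# 90 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="central-logs-bucket",  # placeholder centralized log bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "logs-30-90",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [{"Days": 30, "StorageClass": "ONEZONE_IA"}],
                "Expiration": {"Days": 90},
            }
        ]
    },
)
```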
Question 747

A company runs an application in a VPC with public and private subnets. The VPC extends across multiple Availability Zones. The application runs on Amazon EC2 instances in private subnets. The application uses an Amazon Simple Queue Service (Amazon SQS) queue.
A solutions architect needs to design a secure solution to establish a connection between the EC2 instances and the SQS queue.
Which solution will meet these requirements?
Explanation:
Understanding the Requirement: The application running on EC2 instances in private subnets needs to securely connect to an Amazon SQS queue without exposing traffic to the public internet.
Analysis of Options:
Interface VPC Endpoint in Private Subnets: Allows private, secure connectivity to SQS without using the public internet. Configuring security groups ensures controlled access from EC2 instances.
Interface VPC Endpoint in Public Subnets: Not necessary for private EC2 instances and exposes additional security risks.
Gateway Endpoint: Gateway endpoints are not supported for SQS; they are used for services like S3 and DynamoDB.
NAT Gateway with IAM Role: Increases costs and complexity compared to using an interface VPC endpoint directly.
Best Solution:
Interface VPC Endpoint in Private Subnets: This option ensures secure, private connectivity to SQS, meeting the requirement with minimal complexity and optimal security.
VPC Endpoints
Amazon SQS and VPC Endpoints
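A minimal boto3 sketch of the interface endpoint, assuming placeholder VPC, subnet, and security group IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Private DNS makes the standard SQS endpoint hostname resolve to the
# endpoint's private IP addresses inside the VPC, so the application
# needs no code changes.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234",                       # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.sqs",
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],  # one per AZ
    SecurityGroupIds=["sg-0ccc3333"],  # placeholder endpoint security group
    PrivateDnsEnabled=True,
)
```

The endpoint's security group should allow inbound HTTPS (TCP 443) from the EC2 instances' security group, which provides the controlled access described above.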
Question 748

A company deploys Amazon EC2 instances that run in a VPC. The EC2 instances load source data into Amazon S3 buckets so that the data can be processed in the future. According to compliance laws, the data must not be transmitted over the public internet. Servers in the company's on-premises data center will consume the output from an application that runs on the EC2 instances.
Which solution will meet these requirements?
Explanation:
Understanding the Requirement: EC2 instances need to upload data to S3 without using the public internet, and on-premises servers need to consume this data.
Analysis of Options:
Interface VPC Endpoint for EC2: Not relevant for accessing S3.
Gateway VPC Endpoint for S3 and Direct Connect: Provides private connectivity from EC2 instances to S3 and from on-premises to AWS, ensuring compliance with the requirement to avoid public internet.
Transit Gateway and Site-to-Site VPN: Adds unnecessary complexity and does not provide the same level of performance as Direct Connect.
Proxy EC2 Instances with NAT Gateways: Increases complexity and costs compared to a direct connection using VPC endpoints and Direct Connect.
Best Solution:
Gateway VPC Endpoint for S3 and Direct Connect: This solution ensures secure, private data transfer both within AWS and between on-premises and AWS, meeting the compliance requirements effectively.
Amazon VPC Endpoints for S3
AWS Direct Connect
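A sketch of the in-VPC half of the solution: a gateway endpoint for S3, here with an optional endpoint policy that restricts access to a single placeholder bucket:

```python
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Optional endpoint policy locking the endpoint down to one bucket.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::source-data-bucket/*",  # placeholder
        }
    ],
}

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",                      # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0def5678"],            # placeholder route table
    PolicyDocument=json.dumps(endpoint_policy),
)
```

The on-premises half (AWS Direct Connect) is provisioned separately through a physical cross-connect and a virtual interface; it is not created by an API call like the one above.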
Question 749

A company uses Amazon FSx for NetApp ONTAP in its primary AWS Region for CIFS and NFS file shares. Applications that run on Amazon EC2 instances access the file shares. The company needs a storage disaster recovery (DR) solution in a secondary Region. The data that is replicated in the secondary Region needs to be accessed by using the same protocols as the primary Region.
Which solution will meet these requirements with the LEAST operational overhead?
Explanation:
Understanding the Requirement: The company needs a disaster recovery solution for FSx for NetApp ONTAP in a secondary region, accessible using the same protocols (CIFS and NFS).
Analysis of Options:
Lambda Function and S3: Involves copying data to S3, which changes the access protocols and increases operational overhead.
AWS Backup: Suitable for backup and restore but not for real-time or near-real-time replication for disaster recovery.
FSx for ONTAP with SnapMirror: SnapMirror provides efficient replication between ONTAP instances, maintaining access protocols and requiring minimal operational overhead.
Amazon EFS: Does not support CIFS and requires migrating data, increasing complexity and changing access protocols.
Best Solution:
FSx for ONTAP with SnapMirror: This solution ensures seamless disaster recovery with the same access protocols and minimal operational overhead.
Amazon FSx for NetApp ONTAP
NetApp SnapMirror
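For the AWS-side portion, a hedged boto3 sketch of creating the DR file system in the secondary Region; capacity, throughput, and subnet IDs are illustrative. The SnapMirror relationship itself is then configured from the ONTAP CLI or REST API, not through boto3:

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")  # secondary Region

# Create the target FSx for ONTAP file system that SnapMirror will
# replicate into; the source file system keeps serving CIFS/NFS.
fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,  # GiB (illustrative)
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],  # placeholders
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "ThroughputCapacity": 128,             # MBps (illustrative)
        "PreferredSubnetId": "subnet-0aaa1111",
    },
)
```

After replication is established, the secondary copy is a native ONTAP volume, so a failover exposes it over the same CIFS and NFS protocols the applications already use.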
Question 750

A company's web application consists of multiple Amazon EC2 instances that run behind an Application Load Balancer in a VPC. An Amazon RDS for MySQL DB instance contains the data. The company needs the ability to automatically detect and respond to suspicious or unexpected behavior in its AWS environment. The company already has added AWS WAF to its architecture.
What should a solutions architect do next to protect against threats?
Explanation:
Understanding the Requirement: The company needs to automatically detect and respond to suspicious or unexpected behavior in its AWS environment, beyond the existing AWS WAF setup.
Analysis of Options:
Amazon GuardDuty: Provides continuous monitoring and threat detection across AWS accounts and resources; its findings can drive automated responses, such as updating AWS WAF rules, through Amazon EventBridge and AWS Lambda.
AWS Firewall Manager: Manages firewall rules across multiple accounts but is more focused on central management than threat detection.
Amazon Inspector: Focuses on security assessments and vulnerability management rather than real-time threat detection.
Amazon Macie: Primarily used for data security and privacy, not comprehensive threat detection.
Best Solution:
Amazon GuardDuty with EventBridge and Lambda: This combination ensures continuous threat detection and automated response by updating AWS WAF rules based on GuardDuty findings.
Amazon GuardDuty
Amazon EventBridge
AWS Lambda
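A minimal boto3 sketch wiring GuardDuty findings to a remediation function through EventBridge; the Lambda function ARN is a placeholder, and the sketch assumes GuardDuty is not already enabled in the account:

```python
import json
import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")
events = boto3.client("events", region_name="us-east-1")

# Turn on GuardDuty (fails if a detector already exists in the Region).
guardduty.create_detector(Enable=True)

# Route every GuardDuty finding to a Lambda function that can, for
# example, add the offending IP to an AWS WAF IP set.
events.put_rule(
    Name="guardduty-findings",
    EventPattern=json.dumps(
        {"source": ["aws.guardduty"], "detail-type": ["GuardDuty Finding"]}
    ),
    State="ENABLED",
)
events.put_targets(
    Rule="guardduty-findings",
    Targets=[
        {
            "Id": "remediation-lambda",
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:waf-update",
        }
    ],
)
# The function also needs a resource-based permission (lambda
# add-permission) that allows events.amazonaws.com to invoke it.
```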