Amazon SAA-C03 Practice Test - Questions Answers, Page 73
Question 721

A company has an on-premises business application that generates hundreds of files each day. These files are stored on an SMB file share and require a low-latency connection to the application servers. A new company policy states all application-generated files must be copied to AWS. There is already a VPN connection to AWS.
The application development team does not have time to make the necessary code modifications to move the application to AWS. Which service should a solutions architect recommend to allow the application to copy files to AWS?
Explanation:
Understanding the Requirement: The company needs to copy files generated by an on-premises application to AWS without modifying the application code. The files are stored on an SMB file share and require a low-latency connection to the application servers.
Analysis of Options:
Amazon Elastic File System (EFS): EFS is designed for Linux-based workloads and does not natively support SMB file shares.
Amazon FSx for Windows File Server: FSx supports SMB file shares, but connecting the on-premises application servers to it would require changes to the application or additional infrastructure.
AWS Snowball: Suitable for large data transfers but not for continuous, low-latency file copying.
AWS Storage Gateway: Provides a hybrid cloud storage solution. Deployed as an Amazon S3 File Gateway, it exposes an SMB file share backed by Amazon S3 and keeps a local cache for low-latency access, so files are copied to AWS without any changes to the application.
Best Solution:
AWS Storage Gateway: This service meets the requirement for a low-latency, seamless file transfer solution from on-premises to AWS without modifying the application code (see the sketch after the references).
References:
AWS Storage Gateway
Amazon FSx for Windows File Server
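As an illustration, here is a minimal boto3 sketch of creating an SMB file share on an existing S3 File Gateway. The gateway ARN, IAM role, and bucket name are placeholder assumptions:

```python
import boto3

storagegateway = boto3.client("storagegateway", region_name="us-east-1")

# Create an SMB file share on an existing S3 File Gateway. The
# application keeps writing to the SMB share; the gateway caches
# files locally (low latency) and copies them to Amazon S3.
response = storagegateway.create_smb_file_share(
    ClientToken="unique-idempotency-token",  # placeholder
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE",
    Role="arn:aws:iam::111122223333:role/StorageGatewayS3Access",  # placeholder role
    LocationARN="arn:aws:s3:::application-files-bucket",  # placeholder destination bucket
    Authentication="GuestAccess",  # or ActiveDirectory for domain-joined shares
)
print(response["FileShareARN"])
```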
Question 722

A company wants to migrate an application to AWS. The company wants to increase the application's current availability. The company wants to use AWS WAF in the application's architecture.
Which solution will meet these requirements?
Explanation:
Understanding the Requirement: The company wants to migrate an application to AWS, increase its availability, and use AWS WAF in the architecture.
Analysis of Options:
Auto Scaling group with ALB and WAF: This option provides high availability by distributing instances across multiple Availability Zones. The ALB ensures even traffic distribution, and AWS WAF provides security at the application layer.
Cluster placement group with ALB and WAF: Cluster placement groups provide low-latency networking within a single AZ and therefore do not provide high availability across AZs.
Two EC2 instances with ALB and WAF: This setup provides some availability but does not scale automatically, missing the benefits of an Auto Scaling group.
Auto Scaling group with WAF directly: AWS WAF cannot be directly connected to an Auto Scaling group; it needs to be attached to an ALB, CloudFront distribution, or API Gateway.
Best Solution:
Auto Scaling group with ALB and WAF: This configuration ensures high availability, scalability, and security, meeting all the requirements effectively.
References:
Amazon EC2 Auto Scaling
Application Load Balancer
AWS WAF
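To illustrate the WAF piece, a minimal boto3 sketch that associates a regional web ACL with the ALB fronting the Auto Scaling group; both ARNs are placeholders:

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Associate a regional web ACL with the Application Load Balancer.
# WAF then inspects requests before they reach the EC2 instances.
wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:111122223333:regional/webacl/app-acl/EXAMPLE-ID",
    ResourceArn=(
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/app/app-alb/50dc6c495c0c9188"
    ),
)
```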
Question 723

A company runs a stateful production application on Amazon EC2 instances. The application requires at least two EC2 instances to always be running.
A solutions architect needs to design a highly available and fault-tolerant architecture for the application. The solutions architect creates an Auto Scaling group of EC2 instances.
Which set of additional steps should the solutions architect take to meet these requirements?
Explanation:
Understanding the Requirement: The application is stateful and requires at least two EC2 instances to be running at all times, with a highly available and fault-tolerant architecture.
Analysis of Options:
Minimum capacity of two with instances in separate AZs: Ensures high availability by distributing instances across multiple AZs, fulfilling the requirement of always having two instances running.
Minimum capacity of four: Provides redundancy but is more than what is required and increases cost without additional benefit.
Spot Instances: Not suitable for a stateful application requiring guaranteed availability, as Spot Instances can be terminated at any time.
Combination of On-Demand and Spot Instances: Mixing instance types might provide cost savings but does not ensure the required availability for a stateful application.
Best Solution:
Minimum capacity of two with instances in separate AZs: This setup ensures high availability and meets the requirement with the least cost and complexity.
References:
Amazon EC2 Auto Scaling
High Availability for Amazon EC2
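A minimal boto3 sketch of such an Auto Scaling group, assuming a hypothetical launch template named stateful-app-lt and two placeholder subnets in different AZs:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Keep at least two instances running, spread across two AZs. If an
# instance or an entire AZ fails, the group replaces capacity in the
# remaining subnets automatically.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="stateful-app-asg",
    LaunchTemplate={"LaunchTemplateName": "stateful-app-lt", "Version": "$Latest"},
    MinSize=2,
    MaxSize=4,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # subnets in different AZs
)
```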
Question 724

A company manages a data lake in an Amazon S3 bucket that numerous applications access. The S3 bucket contains a unique prefix for each application. The company wants to restrict each application to its specific prefix and to have granular control of the objects under each prefix.
Which solution will meet these requirements with the LEAST operational overhead?
Explanation:
Understanding the Requirement: The company wants to restrict each application to its specific prefix in an S3 bucket and have granular control over the objects under each prefix.
Analysis of Options:
Dedicated S3 Access Points: Provides a scalable and flexible way to manage access to S3 buckets, allowing specific policies to be attached to each access point, thereby controlling access at the prefix level.
S3 Batch Operations: Suitable for large-scale changes but involves more operational overhead and does not dynamically control future access.
Replication to new S3 buckets: Involves unnecessary duplication of data and increased storage costs, and operational overhead for managing multiple buckets.
Combination of replication and access points: Adds unnecessary complexity and overhead compared to using access points directly.
Best Solution:
Dedicated S3 Access Points: This provides the least operational overhead while meeting the requirements for prefix-level access control and granular management.
References:
Amazon S3 Access Points
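For illustration, a minimal boto3 sketch that creates an access point for one application and scopes its policy to that application's prefix. The account ID, access point name, bucket, role, and prefix are placeholder assumptions, and the bucket policy is assumed to delegate access control to access points:

```python
import json

import boto3

s3control = boto3.client("s3control", region_name="us-east-1")
account_id = "111122223333"  # placeholder account

# One access point per application, attached to the shared bucket.
s3control.create_access_point(
    AccountId=account_id,
    Name="app-a-ap",
    Bucket="company-data-lake",
)

# The access point policy restricts the application's role to its own prefix.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{account_id}:role/app-a-role"},
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": f"arn:aws:s3:us-east-1:{account_id}:accesspoint/app-a-ap/object/app-a/*",
    }],
}
s3control.put_access_point_policy(
    AccountId=account_id, Name="app-a-ap", Policy=json.dumps(policy)
)
```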
Question 725

A company has released a new version of its production application. The company's workload uses Amazon EC2, AWS Lambda, AWS Fargate, and Amazon SageMaker. The company wants to cost-optimize the workload now that usage is at a steady state. The company wants to cover the most services with the fewest Savings Plans. Which combination of Savings Plans will meet these requirements? (Select TWO.)
Explanation:
Understanding the Requirement: The company wants to cost-optimize their workload that uses EC2, Lambda, Fargate, and SageMaker, covering the most services with the fewest savings plans.
Analysis of Options:
EC2 Instance Savings Plan: Applies only to Amazon EC2 usage within a specific instance family and Region; it does not cover Lambda, Fargate, or SageMaker.
Compute Savings Plan: The most flexible plan, covering Amazon EC2, AWS Fargate, and AWS Lambda, but not SageMaker.
SageMaker Savings Plan: Covers Amazon SageMaker usage, which no other Savings Plan type covers.
Combination of plans: A single Compute Savings Plan covers three of the four services, so pairing it with a SageMaker Savings Plan covers everything with just two plans.
Best Solution:
Compute Savings Plan: Covers EC2, Lambda, and Fargate with one plan.
SageMaker Savings Plan: Covers SageMaker, completing the coverage with only two plans (see the sketch after the references).
References:
AWS Savings Plans
Compute Savings Plans
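As a sketch, the Cost Explorer API can recommend a Compute Savings Plan commitment from recent steady-state usage; the term and payment option below are illustrative choices, not requirements:

```python
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer

# Ask Cost Explorer for a Compute Savings Plan recommendation based
# on the last 30 days of usage (EC2, Fargate, and Lambda).
recommendation = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="COMPUTE_SP",  # use "SAGEMAKER_SP" for the SageMaker plan
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="THIRTY_DAYS",
)
details = recommendation["SavingsPlansPurchaseRecommendation"]
print(details.get("SavingsPlansPurchaseRecommendationDetails", []))
```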
Question 726

A company is designing an event-driven order processing system. Each order requires multiple validation steps after the order is created. An independent AWS Lambda function performs each validation step. Each validation step is independent from the other validation steps. Individual validation steps need only a subset of the order event information.
The company wants to ensure that each validation step Lambda function has access to only the information from the order event that the function requires. The components of the order processing system should be loosely coupled to accommodate future business changes.
Which solution will meet these requirements?
Explanation:
Understanding the Requirement: The order processing system requires multiple independent validation steps, each handled by separate Lambda functions, with each function accessing only the subset of order information it needs. The system should be loosely coupled to accommodate future changes.
Analysis of Options:
Amazon SQS with a new Lambda function for transformation: This involves additional complexity in creating and managing multiple SQS queues and an extra Lambda function for data transformation.
Amazon SNS with message filtering: SNS filter policies control which messages a subscriber receives, but they cannot reshape the message payload, so each validation function would still receive the full order event.
Amazon EventBridge with input transformers: EventBridge is designed for event-driven architectures, allowing for fine-grained control with input transformers that can modify and filter the event data sent to each target Lambda function, ensuring each function receives only the necessary information.
SQS with synchronous Lambda invocations: This approach adds unnecessary complexity with synchronous invocations and is not ideal for an event-driven, loosely coupled architecture.
Best Solution:
Amazon EventBridge with input transformers: This option provides the most flexible, scalable, and loosely coupled architecture, enabling each Lambda function to receive only the required subset of data.
References:
Amazon EventBridge
EventBridge Input Transformer
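A minimal boto3 sketch of attaching one validation Lambda function as a rule target with an input transformer. The rule name, function ARN, and event fields are placeholder assumptions (orderId is assumed to be a string and shippingAddress a JSON object):

```python
import boto3

events = boto3.client("events", region_name="us-east-1")

# Route order-created events to one validation function. The input
# transformer extracts only the fields this step needs, so the
# function never sees the rest of the order event.
events.put_targets(
    Rule="order-created-rule",  # placeholder rule matching order events
    Targets=[{
        "Id": "address-validation",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:validate-address",
        "InputTransformer": {
            "InputPathsMap": {
                "orderId": "$.detail.orderId",
                "address": "$.detail.shippingAddress",
            },
            # Quote <orderId> because it is a string; <address> is a JSON object.
            "InputTemplate": '{"orderId": "<orderId>", "address": <address>}',
        },
    }],
)
```

Each additional validation step gets its own target with its own transformer, so steps can be added or removed without touching the producer or the other functions.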
Question 727

A large international university has deployed all of its compute services in the AWS Cloud. These services include Amazon EC2, Amazon RDS, and Amazon DynamoDB. The university currently relies on many custom scripts to back up its infrastructure. However, the university wants to centralize management and automate data backups as much as possible by using AWS native options.
Which solution will meet these requirements?
Explanation:
Understanding the Requirement: The university wants to centralize management and automate backups for its AWS services (EC2, RDS, and DynamoDB), reducing reliance on custom scripts.
Analysis of Options:
Third-party backup software with AWS Storage Gateway: This solution introduces external dependencies and adds complexity compared to using native AWS services.
AWS Backup: Provides a centralized, fully managed service to automate and manage backups across various AWS services, including EC2, RDS, and DynamoDB.
AWS Config: Primarily used for compliance and configuration monitoring, not for backup management.
AWS Systems Manager State Manager: Useful for configuration management but not specifically designed for managing backups.
Best Solution:
AWS Backup: This service offers the necessary functionality to centralize and automate backups, providing a streamlined and integrated solution with minimal effort.
References:
AWS Backup
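For illustration, a minimal boto3 sketch that creates one backup plan and a tag-based selection so any EC2, RDS, or DynamoDB resource tagged backup=true is covered. The schedule, retention, and IAM role are placeholder assumptions:

```python
import boto3

backup = boto3.client("backup", region_name="us-east-1")

# One central plan: daily backups at 05:00 UTC, retained for 35 days.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "university-daily",
        "Rules": [{
            "RuleName": "daily",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 * * ? *)",
            "Lifecycle": {"DeleteAfterDays": 35},
        }],
    }
)

# Select resources by tag: EC2, RDS, and DynamoDB resources tagged
# backup=true are all protected by the same plan, no scripts needed.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-resources",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [{"ConditionType": "STRINGEQUALS",
                        "ConditionKey": "backup",
                        "ConditionValue": "true"}],
    },
)
```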
Question 728

A company stores several petabytes of data across multiple AWS accounts. The company uses AWS Lake Formation to manage its data lake. The company's data science team wants to securely share selective data from its accounts with the company's engineering team for analytical purposes.
Which solution will meet these requirements with the LEAST operational overhead?
Explanation:
Understanding the Requirement: The data science team needs to securely share selective data with the engineering team across multiple AWS accounts with minimal operational overhead.
Analysis of Options:
Copy data to a common account: Involves data duplication and increased storage costs, and requires managing additional permissions.
Lake Formation permissions Grant command: This method can be effective but may involve significant operational overhead if managing permissions across multiple accounts and datasets manually.
AWS Data Exchange: Designed for sharing data externally or between organizations, which adds unnecessary complexity for internal sharing within the same organization.
Lake Formation tag-based access control: Provides a scalable and efficient way to manage access permissions based on tags, allowing fine-grained control and simplified management across accounts.
Best Solution:
Lake Formation tag-based access control: This solution meets the requirements with the least operational overhead, allowing efficient management of cross-account permissions and secure data sharing.
References:
AWS Lake Formation
Tag-based access control
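A minimal boto3 sketch of tag-based access control, assuming a hypothetical team=analytics LF-tag and a placeholder engineering account ID:

```python
import boto3

lakeformation = boto3.client("lakeformation", region_name="us-east-1")

# Define an LF-tag once; tag the databases and tables to be shared.
lakeformation.create_lf_tag(TagKey="team", TagValues=["analytics"])

# Grant the engineering account SELECT on every table carrying
# team=analytics, instead of granting access table by table.
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "444455556666"},  # engineering account
    Resource={
        "LFTagPolicy": {
            "ResourceType": "TABLE",
            "Expression": [{"TagKey": "team", "TagValues": ["analytics"]}],
        }
    },
    Permissions=["SELECT"],
)
```

New datasets inherit the sharing behavior as soon as they are tagged, which is what keeps the operational overhead low.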
Question 729

A company stores sensitive data in Amazon S3. A solutions architect needs to create an encryption solution. The company needs to fully control the ability of users to create, rotate, and disable encryption keys with minimal effort for any data that must be encrypted.
Which solution will meet these requirements?
Explanation:
Understanding the Requirement: The company needs to control the creation, rotation, and disabling of encryption keys for data stored in S3 with minimal effort.
Analysis of Options:
SSE-S3: Provides server-side encryption using S3 managed keys but does not offer full control over key management.
Customer managed key with AWS KMS (SSE-KMS): Allows the company to fully control key creation, rotation, and disabling, providing a high level of security and compliance.
AWS managed key with AWS KMS (SSE-KMS): While it provides some control, it does not offer the same level of granularity as customer-managed keys.
EC2 instance encryption and re-upload: This approach is operationally intensive and does not leverage AWS managed services for efficient key management.
Best Solution:
Customer managed key with AWS KMS (SSE-KMS): This solution meets the requirement for full control over encryption keys with minimal operational overhead, leveraging AWS managed services for secure key management.
References:
AWS Key Management Service (KMS)
Amazon S3 Encryption
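As an illustration, a minimal boto3 sketch: create a customer managed key, enable automatic rotation, and set it as the bucket's default encryption. The bucket name is a placeholder:

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")

# A customer managed key: the company controls its policy and lifecycle.
key = kms.create_key(Description="S3 data encryption key")
key_id = key["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)  # automatic annual rotation

# Default bucket encryption: every new object is encrypted with the key.
s3.put_bucket_encryption(
    Bucket="sensitive-data-bucket",  # placeholder bucket
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": key_id,
            }
        }]
    },
)
# kms.disable_key(KeyId=key_id) would disable the key when required.
```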
Question 730

A company runs an application that uses Amazon RDS for PostgreSQL. The application receives traffic only on weekdays during business hours. The company wants to optimize costs and reduce operational overhead based on this usage.
Which solution will meet these requirements?
Explanation:
Understanding the Requirement: The company wants to optimize costs and reduce operational overhead for an RDS for PostgreSQL database that only needs to be active during business hours on weekdays.
Analysis of Options:
Instance Scheduler on AWS: Allows for automated start and stop schedules based on specified times, ideal for resources only needed during certain hours. This directly optimizes costs by running the database only when needed.
Turn off automatic backups and create weekly snapshots: Does not address the requirement of reducing operational overhead and optimizing runtime costs.
Custom Lambda function: This could work but adds unnecessary complexity compared to using the Instance Scheduler.
All Upfront Reserved DB Instances: While this reduces costs, it does not optimize for usage patterns that require the database only during specific hours.
Best Solution:
Instance Scheduler on AWS: This option effectively manages the database runtime based on the specified schedule, reducing costs and operational overhead.
References:
Instance Scheduler on AWS
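Once the Instance Scheduler on AWS solution is deployed and a schedule is defined, resources opt in with a tag. A minimal boto3 sketch, assuming a hypothetical business-hours schedule and the solution's default Schedule tag key:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# After deploying the Instance Scheduler solution and defining a
# "business-hours" schedule (weekdays only), opting the database in is
# just a tag; the scheduler stops and starts the instance on schedule.
rds.add_tags_to_resource(
    ResourceName="arn:aws:rds:us-east-1:111122223333:db:app-postgres",  # placeholder
    Tags=[{"Key": "Schedule", "Value": "business-hours"}],  # default tag key
)
```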