Amazon SAA-C03 Practice Test - Questions Answers, Page 73

A company has an on-premises business application that generates hundreds of files each day. These files are stored on an SMB file share and require a low-latency connection to the application servers. A new company policy states all application-generated files must be copied to AWS. There is already a VPN connection to AWS.

The application development team does not have time to make the necessary code modifications to move the application to AWS. Which service should a solutions architect recommend to allow the application to copy files to AWS?

A. Amazon Elastic File System (Amazon EFS)

B. Amazon FSx for Windows File Server

C. AWS Snowball

D. AWS Storage Gateway
Suggested answer: D

Explanation:

Understanding the Requirement: The company needs to copy files generated by an on-premises application to AWS without modifying the application code. The files are stored on an SMB file share and require a low-latency connection to the application servers.

Analysis of Options:

Amazon Elastic File System (EFS): EFS is designed for Linux-based workloads and does not natively support SMB file shares.

Amazon FSx for Windows File Server: FSx supports SMB file shares but would require changes to the application or additional infrastructure to connect on-premises systems.

AWS Snowball: Suitable for large data transfers but not for continuous, low-latency file copying.

AWS Storage Gateway: An Amazon S3 File Gateway provides an on-premises SMB file share backed by Amazon S3, giving the application servers low-latency local access while copied files are stored durably in AWS, all without any application code changes.

Best Solution:

AWS Storage Gateway: This service meets the requirement for a low-latency, seamless file transfer solution from on-premises to AWS without modifying the application code.

AWS Storage Gateway

Amazon FSx for Windows File Server
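To illustrate the Storage Gateway approach, the following boto3 sketch creates an SMB file share on an existing Amazon S3 File Gateway. The gateway ARN, IAM role, bucket, and Active Directory setup are placeholder assumptions, not values from the question.

```python
import boto3

storagegateway = boto3.client("storagegateway", region_name="us-east-1")

# Create an SMB file share on an existing file gateway (placeholder ARNs).
# Files written to this share over the on-premises SMB mount are stored
# as objects in the backing S3 bucket.
response = storagegateway.create_smb_file_share(
    ClientToken="order-files-share-001",  # idempotency token
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE",
    Role="arn:aws:iam::111122223333:role/StorageGatewayS3AccessRole",
    LocationARN="arn:aws:s3:::example-application-files-bucket",
    DefaultStorageClass="S3_STANDARD",
    Authentication="ActiveDirectory",  # assumes the gateway is joined to AD
)
print(response["FileShareARN"])
```

The application then writes to the gateway's SMB share exactly as it writes to the current file share, so no code changes are needed.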

A company wants to migrate an application to AWS. The company wants to increase the application's current availability. The company wants to use AWS WAF in the application's architecture.

Which solution will meet these requirements?

A. Create an Auto Scaling group that contains multiple Amazon EC2 instances that host the application across two Availability Zones. Configure an Application Load Balancer (ALB) and set the Auto Scaling group as the target. Connect a WAF to the ALB.

B. Create a cluster placement group that contains multiple Amazon EC2 instances that host the application. Configure an Application Load Balancer and set the EC2 instances as the targets. Connect a WAF to the placement group.

C. Create two Amazon EC2 instances that host the application across two Availability Zones. Configure the EC2 instances as the targets of an Application Load Balancer (ALB). Connect a WAF to the ALB.

D. Create an Auto Scaling group that contains multiple Amazon EC2 instances that host the application across two Availability Zones. Configure an Application Load Balancer (ALB) and set the Auto Scaling group as the target. Connect a WAF to the Auto Scaling group.
Suggested answer: A

Explanation:

Understanding the Requirement: The company wants to migrate an application to AWS, increase its availability, and use AWS WAF in the architecture.

Analysis of Options:

Auto Scaling group with ALB and WAF: This option provides high availability by distributing instances across multiple Availability Zones. The ALB ensures even traffic distribution, and AWS WAF provides security at the application layer.

Cluster placement group with ALB and WAF: Cluster placement groups pack instances close together in a single Availability Zone for low-latency networking, so they do not provide high availability across AZs; in addition, AWS WAF cannot be attached to a placement group.

Two EC2 instances with ALB and WAF: This setup provides some availability but does not scale automatically, missing the benefits of an Auto Scaling group.

Auto Scaling group with WAF directly: AWS WAF cannot be directly connected to an Auto Scaling group; it needs to be attached to an ALB, CloudFront distribution, or API Gateway.

Best Solution:

Auto Scaling group with ALB and WAF: This configuration ensures high availability, scalability, and security, meeting all the requirements effectively.

Amazon EC2 Auto Scaling

Application Load Balancer

AWS WAF
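A minimal boto3 sketch of the WAF piece of this design: the web ACL is associated with the ALB, not with the Auto Scaling group. Both ARNs are placeholders.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Associate a regional web ACL with the Application Load Balancer that fronts
# the Auto Scaling group. AWS WAF attaches to the ALB, never to the ASG itself.
wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:111122223333:regional/webacl/app-web-acl/EXAMPLE-ID",
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/app-alb/50dc6c495c0c9188",
)
```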

A company runs a stateful production application on Amazon EC2 instances. The application requires at least two EC2 instances to always be running.

A solutions architect needs to design a highly available and fault-tolerant architecture for the application. The solutions architect creates an Auto Scaling group of EC2 instances.

Which set of additional steps should the solutions architect take to meet these requirements?

A. Set the Auto Scaling group's minimum capacity to two. Deploy one On-Demand Instance in one Availability Zone and one On-Demand Instance in a second Availability Zone.

B. Set the Auto Scaling group's minimum capacity to four. Deploy two On-Demand Instances in one Availability Zone and two On-Demand Instances in a second Availability Zone.

C. Set the Auto Scaling group's minimum capacity to two. Deploy four Spot Instances in one Availability Zone.

D. Set the Auto Scaling group's minimum capacity to four. Deploy two On-Demand Instances in one Availability Zone and two Spot Instances in a second Availability Zone.
Suggested answer: A

Explanation:

Understanding the Requirement: The application is stateful and requires at least two EC2 instances to be running at all times, with a highly available and fault-tolerant architecture.

Analysis of Options:

Minimum capacity of two with instances in separate AZs: Ensures high availability by distributing instances across multiple AZs, fulfilling the requirement of always having two instances running.

Minimum capacity of four: Provides redundancy but is more than what is required and increases cost without additional benefit.

Spot Instances: Not suitable for a stateful application requiring guaranteed availability, as Spot Instances can be terminated at any time.

Combination of On-Demand and Spot Instances: Mixing instance types might provide cost savings but does not ensure the required availability for a stateful application.

Best Solution:

Minimum capacity of two with instances in separate AZs: This setup ensures high availability and meets the requirement with the least cost and complexity.

Amazon EC2 Auto Scaling

High Availability for Amazon EC2
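For reference, a boto3 sketch of the Auto Scaling group described in option A; the launch template name and the two subnet IDs (one per Availability Zone) are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Keep at least two On-Demand instances running, spread across two AZs by
# listing one subnet per Availability Zone in VPCZoneIdentifier.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="stateful-app-asg",
    LaunchTemplate={"LaunchTemplateName": "stateful-app-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=4,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0aaa1111bbbb2222c,subnet-0ddd3333eeee4444f",
)
```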

A company manages a data lake in an Amazon S3 bucket that numerous applications access. The S3 bucket contains a unique prefix for each application. The company wants to restrict each application to its specific prefix and to have granular control of the objects under each prefix.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create dedicated S3 access points and access point policies for each application.

B. Create an S3 Batch Operations job to set the ACL permissions for each object in the S3 bucket.

C. Replicate the objects in the S3 bucket to new S3 buckets for each application. Create replication rules by prefix.

D. Replicate the objects in the S3 bucket to new S3 buckets for each application. Create dedicated S3 access points for each application.
Suggested answer: A

Explanation:

Understanding the Requirement: The company wants to restrict each application to its specific prefix in an S3 bucket and have granular control over the objects under each prefix.

Analysis of Options:

Dedicated S3 Access Points: Provides a scalable and flexible way to manage access to S3 buckets, allowing specific policies to be attached to each access point, thereby controlling access at the prefix level.

S3 Batch Operations: Suitable for large-scale changes but involves more operational overhead and does not dynamically control future access.

Replication to new S3 buckets: Involves unnecessary duplication of data and increased storage costs, and operational overhead for managing multiple buckets.

Combination of replication and access points: Adds unnecessary complexity and overhead compared to using access points directly.

Best Solution:

Dedicated S3 Access Points: This provides the least operational overhead while meeting the requirements for prefix-level access control and granular management.

Amazon S3 Access Points
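A boto3 sketch of the access point approach, assuming a hypothetical application "app-a" with its own prefix and IAM role; the bucket name, account ID, role, and access point name are placeholders.

```python
import json
import boto3

s3control = boto3.client("s3control", region_name="us-east-1")
account_id = "111122223333"  # placeholder

# One access point per application on the shared data lake bucket.
s3control.create_access_point(
    AccountId=account_id,
    Name="app-a-access-point",
    Bucket="example-data-lake-bucket",
)

# The access point policy limits the application's role to its own prefix.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{account_id}:role/app-a-role"},
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": f"arn:aws:s3:us-east-1:{account_id}:accesspoint/app-a-access-point/object/app-a/*",
    }],
}
s3control.put_access_point_policy(
    AccountId=account_id,
    Name="app-a-access-point",
    Policy=json.dumps(policy),
)
```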

A company has released a new version of its production application. The company's workload uses Amazon EC2, AWS Lambda, AWS Fargate, and Amazon SageMaker. The company wants to cost-optimize the workload now that usage is at a steady state. The company wants to cover the most services with the fewest Savings Plans. Which combination of Savings Plans will meet these requirements? (Select TWO.)

A. Purchase an EC2 Instance Savings Plan for Amazon EC2 and SageMaker.

B. Purchase a Compute Savings Plan for Amazon EC2, Lambda, and SageMaker.

C. Purchase a SageMaker Savings Plan.

D. Purchase a Compute Savings Plan for Lambda, Fargate, and Amazon EC2.

E. Purchase an EC2 Instance Savings Plan for Amazon EC2 and Fargate.
Suggested answer: C, D

Explanation:

Understanding the Requirement: The company wants to cost-optimize a steady-state workload that uses EC2, Lambda, Fargate, and SageMaker, covering the most services with the fewest Savings Plans.

Analysis of Options:

EC2 Instance Savings Plan: Applies only to EC2 usage of a specific instance family in a specific Region; it does not cover SageMaker, Fargate, or Lambda.

Compute Savings Plan for EC2, Lambda, and SageMaker: Compute Savings Plans apply to EC2, Fargate, and Lambda usage, but they do not apply to SageMaker, so this option overstates the plan's coverage.

SageMaker Savings Plan: The only Savings Plan type that applies to SageMaker usage.

Compute Savings Plan for Lambda, Fargate, and EC2: A single Compute Savings Plan covers all three of these compute services.

Best Solution:

Compute Savings Plan for Lambda, Fargate, and EC2 plus a SageMaker Savings Plan: Together these two plans cover all four services, which is the broadest coverage with the fewest plans.

AWS Savings Plans

Compute Savings Plans
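As a hedged illustration of sizing such a purchase, Cost Explorer can recommend a Compute Savings Plan commitment from recent usage (a SageMaker Savings Plan recommendation uses the separate SAGEMAKER_SP type). This is a sketch only; the response field names are indicative and should be checked against the current Cost Explorer API.

```python
import boto3

ce = boto3.client("ce", region_name="us-east-1")

# Ask Cost Explorer for a Compute Savings Plan recommendation based on the
# last 30 days of steady-state usage across EC2, Fargate, and Lambda.
recommendation = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="COMPUTE_SP",       # use "SAGEMAKER_SP" for the SageMaker plan
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="THIRTY_DAYS",
)

details = recommendation.get("SavingsPlansPurchaseRecommendation", {}).get(
    "SavingsPlansPurchaseRecommendationDetails", [])
for detail in details:
    # Field names below are assumed; verify against the API response.
    print(detail.get("HourlyCommitmentToPurchase"), detail.get("EstimatedSavingsAmount"))
```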

A company is designing an event-driven order processing system. Each order requires multiple validation steps after the order is created. An independent AWS Lambda function performs each validation step. Each validation step is independent from the other validation steps. Individual validation steps need only a subset of the order event information.

The company wants to ensure that each validation step Lambda function has access to only the information from the order event that the function requires. The components of the order processing system should be loosely coupled to accommodate future business changes.

Which solution will meet these requirements?

A. Create an Amazon Simple Queue Service (Amazon SQS) queue for each validation step. Create a new Lambda function to transform the order data to the format that each validation step requires and to publish the messages to the appropriate SQS queues. Subscribe each validation step Lambda function to its corresponding SQS queue.

B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the validation step Lambda functions to the SNS topic. Use message body filtering to send only the required data to each subscribed Lambda function.

C. Create an Amazon EventBridge event bus. Create an event rule for each validation step. Configure the input transformer to send only the required data to each target validation step Lambda function.

D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Create a new Lambda function to subscribe to the SQS queue and to transform the order data to the format that each validation step requires. Use the new Lambda function to perform synchronous invocations of the validation step Lambda functions in parallel on separate threads.
Suggested answer: C

Explanation:

Understanding the Requirement: The order processing system requires multiple independent validation steps, each handled by separate Lambda functions, with each function accessing only the subset of order information it needs. The system should be loosely coupled to accommodate future changes.

Analysis of Options:

Amazon SQS with a new Lambda function for transformation: This involves additional complexity in creating and managing multiple SQS queues and an extra Lambda function for data transformation.

Amazon SNS with message filtering: SNS filter policies control which messages are delivered to each subscriber, but they do not trim the payload to a subset of fields; every subscribed Lambda function would still receive the full order event.

Amazon EventBridge with input transformers: EventBridge is designed for event-driven architectures, allowing for fine-grained control with input transformers that can modify and filter the event data sent to each target Lambda function, ensuring each function receives only the necessary information.

SQS with synchronous Lambda invocations: This approach adds unnecessary complexity with synchronous invocations and is not ideal for an event-driven, loosely coupled architecture.

Best Solution:

Amazon EventBridge with input transformers: This option provides the most flexible, scalable, and loosely coupled architecture, enabling each Lambda function to receive only the required subset of data.

Amazon EventBridge

EventBridge Input Transformer
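A boto3 sketch of one EventBridge rule with an input transformer, assuming a hypothetical "Order Created" event and a payment-validation Lambda function; the event bus name, event source, field paths, and ARNs are placeholders (the Lambda resource-based permission that allows EventBridge to invoke it is omitted).

```python
import json
import boto3

events = boto3.client("events", region_name="us-east-1")

# Rule for one validation step: match "Order Created" events on a custom bus.
events.put_rule(
    Name="validate-payment-method",
    EventBusName="order-processing-bus",
    EventPattern=json.dumps({
        "source": ["com.example.orders"],
        "detail-type": ["Order Created"],
    }),
)

# The input transformer forwards only the fields this validation step needs.
events.put_targets(
    Rule="validate-payment-method",
    EventBusName="order-processing-bus",
    Targets=[{
        "Id": "payment-validation-lambda",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:validate-payment",
        "InputTransformer": {
            "InputPathsMap": {
                "orderId": "$.detail.orderId",
                "payment": "$.detail.paymentMethod",
            },
            "InputTemplate": '{"orderId": <orderId>, "paymentMethod": <payment>}',
        },
    }],
)
```

Each additional validation step gets its own rule and transformer, so steps can be added or changed without touching the order producer or the other functions.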

A large international university has deployed all of its compute services in the AWS Cloud. These services include Amazon EC2, Amazon RDS, and Amazon DynamoDB. The university currently relies on many custom scripts to back up its infrastructure. However, the university wants to centralize management and automate data backups as much as possible by using AWS native options.

Which solution will meet these requirements?

A. Use third-party backup software with an AWS Storage Gateway tape gateway virtual tape library.

B. Use AWS Backup to configure and monitor all backups for the services in use.

C. Use AWS Config to set lifecycle management to take snapshots of all data sources on a schedule.

D. Use AWS Systems Manager State Manager to manage the configuration and monitoring of backup tasks.
Suggested answer: B

Explanation:

Understanding the Requirement: The university wants to centralize management and automate backups for its AWS services (EC2, RDS, and DynamoDB), reducing reliance on custom scripts.

Analysis of Options:

Third-party backup software with AWS Storage Gateway: This solution introduces external dependencies and adds complexity compared to using native AWS services.

AWS Backup: Provides a centralized, fully managed service to automate and manage backups across various AWS services, including EC2, RDS, and DynamoDB.

AWS Config: Primarily used for compliance and configuration monitoring, not for backup management.

AWS Systems Manager State Manager: Useful for configuration management but not specifically designed for managing backups.

Best Solution:

AWS Backup: This service offers the necessary functionality to centralize and automate backups, providing a streamlined and integrated solution with minimal effort.

AWS Backup
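A boto3 sketch of the AWS Backup setup: one daily backup plan plus a tag-based resource selection so EC2, RDS, and DynamoDB resources tagged backup=true are all covered by the same plan. The schedule, retention, vault name, tag, and role ARN are illustrative assumptions.

```python
import boto3

backup = boto3.client("backup", region_name="us-east-1")

# Daily backup plan with 35-day retention (values are illustrative).
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "university-daily-backups",
        "Rules": [{
            "RuleName": "daily-0300-utc",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 3 * * ? *)",
            "Lifecycle": {"DeleteAfterDays": 35},
        }],
    }
)

# Tag-based selection: any supported resource tagged backup=true is included,
# which replaces the per-service custom scripts with one central policy.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-resources",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [{
            "ConditionType": "STRINGEQUALS",
            "ConditionKey": "backup",
            "ConditionValue": "true",
        }],
    },
)
```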

A company stores several petabytes of data across multiple AWS accounts. The company uses AWS Lake Formation to manage its data lake. The company's data science team wants to securely share selective data from its accounts with the company's engineering team for analytical purposes.

Which solution will meet these requirements with the LEAST operational overhead?

A. Copy the required data to a common account. Create an IAM access role in that account. Grant access by specifying a permission policy that includes users from the engineering team accounts as trusted entities.

B. Use the Lake Formation permissions Grant command in each account where the data is stored to allow the required engineering team users to access the data.

C. Use AWS Data Exchange to privately publish the required data to the required engineering team accounts.

D. Use Lake Formation tag-based access control to authorize and grant cross-account permissions for the required data to the engineering team accounts.
Suggested answer: D

Explanation:

Understanding the Requirement: The data science team needs to securely share selective data with the engineering team across multiple AWS accounts with minimal operational overhead.

Analysis of Options:

Copy data to a common account: Involves data duplication and increased storage costs, and requires managing additional permissions.

Lake Formation permissions Grant command: This method can be effective but may involve significant operational overhead if managing permissions across multiple accounts and datasets manually.

AWS Data Exchange: Designed for sharing data externally or between organizations, which adds unnecessary complexity for internal sharing within the same organization.

Lake Formation tag-based access control: Provides a scalable and efficient way to manage access permissions based on tags, allowing fine-grained control and simplified management across accounts.

Best Solution:

Lake Formation tag-based access control: This solution meets the requirements with the least operational overhead, allowing efficient management of cross-account permissions and secure data sharing.

AWS Lake Formation

Tag-based access control
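A boto3 sketch of a tag-based cross-account grant, assuming a hypothetical LF-Tag team=engineering has already been created and attached to the tables that should be shared; the engineering account ID, tag key, and tag value are placeholders.

```python
import boto3

lakeformation = boto3.client("lakeformation", region_name="us-east-1")

# Grant the engineering account SELECT/DESCRIBE on every table whose LF-Tag
# "team" equals "engineering". New tables tagged the same way are covered
# automatically, which keeps ongoing administration minimal.
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "444455556666"},  # engineering account
    Resource={
        "LFTagPolicy": {
            "ResourceType": "TABLE",
            "Expression": [{"TagKey": "team", "TagValues": ["engineering"]}],
        }
    },
    Permissions=["SELECT", "DESCRIBE"],
    PermissionsWithGrantOption=["SELECT", "DESCRIBE"],
)
```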

A company stores sensitive data in Amazon S3. A solutions architect needs to create an encryption solution. The company needs to fully control the ability of users to create, rotate, and disable encryption keys with minimal effort for any data that must be encrypted.

Which solution will meet these requirements?

A. Use default server-side encryption with Amazon S3 managed encryption keys (SSE-S3) to store the sensitive data.

B. Create a customer managed key by using AWS Key Management Service (AWS KMS). Use the new key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).

C. Create an AWS managed key by using AWS Key Management Service (AWS KMS). Use the new key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).

D. Download S3 objects to an Amazon EC2 instance. Encrypt the objects by using customer managed keys. Upload the encrypted objects back into Amazon S3.
Suggested answer: B

Explanation:

Understanding the Requirement: The company needs to control the creation, rotation, and disabling of encryption keys for data stored in S3 with minimal effort.

Analysis of Options:

SSE-S3: Provides server-side encryption using S3 managed keys but does not offer full control over key management.

Customer managed key with AWS KMS (SSE-KMS): Allows the company to fully control key creation, rotation, and disabling, providing a high level of security and compliance.

AWS managed key with AWS KMS (SSE-KMS): AWS managed keys are created, rotated, and managed by AWS on the company's behalf; the company cannot disable them or control their rotation schedule, so this option does not provide full key control.

EC2 instance encryption and re-upload: This approach is operationally intensive and does not leverage AWS managed services for efficient key management.

Best Solution:

Customer managed key with AWS KMS (SSE-KMS): This solution meets the requirement for full control over encryption keys with minimal operational overhead, leveraging AWS managed services for secure key management.

AWS Key Management Service (KMS)

Amazon S3 Encryption
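A boto3 sketch of option B: create a customer managed KMS key, enable automatic rotation, and set it as the bucket's default SSE-KMS key. The bucket name is a placeholder, and the key policy and alias are omitted for brevity.

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")

# Customer managed key: the company controls creation, rotation, and disabling.
key = kms.create_key(Description="S3 sensitive-data encryption key")
key_arn = key["KeyMetadata"]["Arn"]
kms.enable_key_rotation(KeyId=key_arn)

# Make SSE-KMS with the new key the bucket's default encryption.
s3.put_bucket_encryption(
    Bucket="example-sensitive-data-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": key_arn,
            },
            "BucketKeyEnabled": True,
        }]
    },
)
```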

A company runs an application that uses Amazon RDS for PostgreSQL. The application receives traffic only on weekdays during business hours. The company wants to optimize costs and reduce operational overhead based on this usage.

Which solution will meet these requirements?

A. Use the Instance Scheduler on AWS to configure start and stop schedules.

B. Turn off automatic backups. Create weekly manual snapshots of the database.

C. Create a custom AWS Lambda function to start and stop the database based on minimum CPU utilization.

D. Purchase All Upfront Reserved DB Instances.
Suggested answer: A

Explanation:

Understanding the Requirement: The company wants to optimize costs and reduce operational overhead for an RDS for PostgreSQL database that only needs to be active during business hours on weekdays.

Analysis of Options:

Instance Scheduler on AWS: Allows for automated start and stop schedules based on specified times, ideal for resources only needed during certain hours. This directly optimizes costs by running the database only when needed.

Turn off automatic backups and create weekly snapshots: Does not address the requirement of reducing operational overhead and optimizing runtime costs.

Custom Lambda function: This could work but adds unnecessary complexity compared to using the Instance Scheduler.

All Upfront Reserved DB Instances: While this reduces costs, it does not optimize for usage patterns that require the database only during specific hours.

Best Solution:

Instance Scheduler on AWS: This option effectively manages the database runtime based on the specified schedule, reducing costs and operational overhead.

Instance Scheduler on AWS
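Instance Scheduler on AWS starts and stops resources based on a schedule tag, so once the solution is deployed, opting the database in is mostly a tagging exercise. In this sketch the tag key "Schedule" (the solution's default) and the schedule name "business-hours" are assumptions that depend on how the solution was configured; the instance ARN is a placeholder.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Tag the RDS instance so the deployed Instance Scheduler solution applies its
# weekday business-hours start/stop schedule to it. The tag key and schedule
# name must match the solution's configuration.
rds.add_tags_to_resource(
    ResourceName="arn:aws:rds:us-east-1:111122223333:db:example-postgres-db",
    Tags=[{"Key": "Schedule", "Value": "business-hours"}],
)
```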
