
Amazon SAA-C03 Practice Test - Questions Answers, Page 70


A company has an internal application that runs on Amazon EC2 instances in an Auto Scaling group. The EC2 instances are compute optimized and use Amazon Elastic Block Store (Amazon EBS) volumes.

The company wants to identify cost optimizations across the EC2 instances, the Auto Scaling group, and the EBS volumes.

Which solution will meet these requirements with the MOST operational efficiency?

A.
Create a new AWS Cost and Usage Report. Search the report for cost recommendations for the EC2 instances, the Auto Scaling group, and the EBS volumes.
B.
Create new Amazon CloudWatch billing alerts. Check the alert statuses for cost recommendations for the EC2 instances, the Auto Scaling group, and the EBS volumes.
C.
Configure AWS Compute Optimizer for cost recommendations for the EC2 instances, the Auto Scaling group, and the EBS volumes.
D.
Configure AWS Compute Optimizer for cost recommendations for the EC2 instances. Create a new AWS Cost and Usage Report. Search the report for cost recommendations for the Auto Scaling group and the EBS volumes.
Suggested answer: C

Explanation:

Requirement Analysis: The company wants to identify cost optimizations for EC2 instances, the Auto Scaling group, and EBS volumes with high operational efficiency.

AWS Compute Optimizer: This service provides actionable recommendations to help optimize your AWS resources, including EC2 instances, Auto Scaling groups, and EBS volumes.

Cost Recommendations: Compute Optimizer analyzes the utilization of resources and provides specific recommendations for rightsizing or optimizing the configurations.

Operational Efficiency: Using Compute Optimizer automates the process of identifying cost-saving opportunities, reducing the need for manual analysis.

Implementation:

Enable AWS Compute Optimizer for your AWS account.

Review the recommendations provided for EC2 instances, Auto Scaling groups, and EBS volumes.
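
For reference, a minimal boto3 sketch (assuming Compute Optimizer is already opted in for the account and the caller has the required compute-optimizer:Get* permissions) that pulls the three recommendation types mentioned above. The response field names shown here should be verified against the boto3 API reference:

    import boto3

    # Compute Optimizer must already be opted in for the account or organization.
    client = boto3.client("compute-optimizer", region_name="us-east-1")

    # EC2 instance right-sizing findings
    for rec in client.get_ec2_instance_recommendations()["instanceRecommendations"]:
        print(rec["instanceArn"], rec["finding"])

    # Auto Scaling group recommendations
    for rec in client.get_auto_scaling_group_recommendations()["autoScalingGroupRecommendations"]:
        print(rec["autoScalingGroupArn"], rec["finding"])

    # EBS volume recommendations
    for rec in client.get_ebs_volume_recommendations()["volumeRecommendations"]:
        print(rec["volumeArn"], rec["finding"])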

Conclusion: This solution provides a comprehensive, automated approach to identifying cost optimizations with minimal operational effort.

Reference

AWS Compute Optimizer: AWS Compute Optimizer Documentation

A company uses GPS trackers to document the migration patterns of thousands of sea turtles. The trackers check every 5 minutes to see if a turtle has moved more than 100 yards (91.4 meters). If a turtle has moved, its tracker sends the new coordinates to a web application running on three Amazon EC2 instances that are in multiple Availability Zones in one AWS Region.

Recently, the web application was overwhelmed while processing an unexpected volume of tracker data. Data was lost with no way to replay the events. A solutions architect must prevent this problem from happening again and needs a solution with the least operational overhead.

What should the solutions architect do to meet these requirements?

A.
Create an Amazon S3 bucket to store the data. Configure the application to scan for new data in the bucket for processing.
B.
Create an Amazon API Gateway endpoint to handle transmitted location coordinates. Use an AWS Lambda function to process each item concurrently.
C.
Create an Amazon Simple Queue Service (Amazon SQS) queue to store the incoming data. Configure the application to poll for new messages for processing.
D.
Create an Amazon DynamoDB table to store transmitted location coordinates. Configure the application to query the table for new data for processing. Use TTL to remove data that has been processed.
Suggested answer: C

Explanation:

Requirement Analysis: The application was overwhelmed with unexpected data volume, leading to data loss and the need for a replay mechanism.

Amazon SQS Overview: SQS is a fully managed message queuing service that decouples and scales microservices, distributed systems, and serverless applications.

Data Decoupling: By using an SQS queue, the application can store incoming tracker data reliably and process it asynchronously, preventing data loss.

Implementation:

Create an SQS queue.

Modify the web application to send incoming data to the SQS queue.

Configure the application instances to poll the SQS queue and process the messages.
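
A minimal boto3 sketch of the producer/consumer flow, assuming a standard queue named turtle-tracker-queue already exists (the queue name and message payload are illustrative):

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    queue_url = sqs.get_queue_url(QueueName="turtle-tracker-queue")["QueueUrl"]

    # Producer: the web tier enqueues each coordinate update instead of processing it inline.
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody='{"turtle_id": "T-1042", "lat": 28.1, "lon": -80.6}',
    )

    # Consumer: application instances long-poll the queue and delete messages only after processing.
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        # ... process the coordinates here ...
        print("processing", msg["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])

Because a message stays in the queue until it is explicitly deleted, a processing spike or failure no longer causes data loss; the backlog is simply worked off later.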

Conclusion: This solution meets the requirements with minimal operational overhead, ensuring data is not lost and can be processed at the application's own pace.

Reference

Amazon SQS: Amazon SQS Documentation

A solutions architect is creating an application that will handle batch processing of large amounts of data. The input data will be held in Amazon S3 and the output data will be stored in a different S3 bucket. For processing, the application will transfer the data over the network between multiple Amazon EC2 instances.

What should the solutions architect do to reduce the overall data transfer costs?

A.
Place all the EC2 instances in an Auto Scaling group.
B.
Place all the EC2 instances in the same AWS Region.
C.
Place all the EC2 instances in the same Availability Zone.
D.
Place all the EC2 instances in private subnets in multiple Availability Zones.
Suggested answer: C

Explanation:

Requirement Analysis: The application involves batch processing of large data transfers between EC2 instances.

Data Transfer Costs: Data transfer within the same Availability Zone (AZ) is typically free, while cross-AZ transfers incur additional costs.

Implementation:

Launch all EC2 instances within the same Availability Zone.

Ensure the instances are part of the same subnet to facilitate seamless data transfer.
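
As a sketch, pinning the fleet to a single subnet (and therefore a single AZ) can be done at launch time; the AMI and subnet IDs below are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launching all workers into one subnet keeps them in one Availability Zone,
    # so instance-to-instance traffic over private IPs avoids cross-AZ transfer charges.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",      # placeholder AMI
        InstanceType="c5.large",
        MinCount=4,
        MaxCount=4,
        SubnetId="subnet-0123456789abcdef0",  # every subnet maps to exactly one AZ
    )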

Conclusion: Placing all EC2 instances in the same AZ reduces data transfer costs significantly without affecting the application's functionality.

Reference

AWS Pricing: AWS Data Transfer Pricing

A company hosts an application in a private subnet. The company has already integrated the application with Amazon Cognito. The company uses an Amazon Cognito user pool to authenticate users.

The company needs to modify the application so the application can securely store user documents in an Amazon S3 bucket.

Which combination of steps will securely integrate Amazon S3 with the application? (Select TWO.)

A.
Create an Amazon Cognito identity pool to generate secure Amazon S3 access tokens for users when they successfully log in.
B.
Use the existing Amazon Cognito user pool to generate Amazon S3 access tokens for users when they successfully log in.
C.
Create an Amazon S3 VPC endpoint in the same VPC where the company hosts the application.
D.
Create a NAT gateway in the VPC where the company hosts the application. Assign a policy to the S3 bucket to deny any request that is not initiated from Amazon Cognito.
E.
Attach a policy to the S3 bucket that allows access only from the users' IP addresses.
Suggested answer: A, C

Explanation:

To securely integrate Amazon S3 with an application that uses Amazon Cognito for user authentication, the following two steps are essential:

Detailed Explanation:

Step 1: Create an Amazon Cognito Identity Pool (Option A)

Amazon Cognito Identity Pools allow users to obtain temporary AWS credentials to access AWS resources, such as Amazon S3, after successfully authenticating with the Cognito user pool. The identity pool bridges the gap between user authentication and AWS service access by generating temporary credentials using AWS Identity and Access Management (IAM).

Once a user logs in using the Cognito User Pool, the identity pool provides IAM roles with specific permissions that the application can use to access S3 securely. This ensures that each user has appropriate access controls while accessing the S3 bucket.

This is a secure way to ensure that users only have temporary and least-privilege access to the S3 bucket for their documents.
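
A hedged sketch of the credential exchange from the application's side, assuming an identity pool that already lists the user pool as an authentication provider (the pool IDs, provider name, and token value are placeholders):

    import boto3

    cognito_identity = boto3.client("cognito-identity", region_name="us-east-1")

    # id_token is the JWT the Cognito user pool returns after the user signs in.
    id_token = "<JWT from the user pool sign-in>"
    provider = "cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE"

    identity = cognito_identity.get_id(
        IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555",  # placeholder pool ID
        Logins={provider: id_token},
    )
    creds = cognito_identity.get_credentials_for_identity(
        IdentityId=identity["IdentityId"],
        Logins={provider: id_token},
    )["Credentials"]

    # The temporary, scoped credentials are then used for S3 calls on the user's behalf.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretKey"],
        aws_session_token=creds["SessionToken"],
    )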

Step 2: Create an Amazon S3 VPC Endpoint (Option C)

By creating an Amazon S3 VPC endpoint, the company ensures that communication between the application (which is hosted in a private subnet) and the S3 bucket occurs over the AWS private network, without the need to traverse the internet. This enhances security and prevents exposure of data to public networks.

The VPC endpoint allows the application to access the S3 bucket privately and securely within the VPC. It also ensures that traffic stays within the AWS network, reducing attack surface and improving overall security.
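
A minimal sketch of creating the gateway endpoint (the VPC ID, route table ID, and Region are placeholders):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # A gateway endpoint for S3 keeps the application's S3 traffic on the AWS network.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0123456789abcdef0"],  # route tables of the private subnets
    )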

Why the Other Options Are Incorrect:

Option B: This is incorrect because Amazon Cognito User Pools are used for user authentication, not for generating S3 access tokens. To provide S3 access, you need to use Amazon Cognito Identity Pools, which offer AWS credentials.

Option D: A NAT gateway is unnecessary in this scenario. Using a VPC endpoint for S3 access provides a more secure and cost-effective solution by keeping traffic within AWS.

Option E: Attaching a policy to restrict access based on IP addresses is not scalable or efficient. It would require managing users' dynamic IP addresses, which is not an effective security measure for this use case.

Reference

Amazon Cognito Identity Pools

Amazon VPC Endpoints for S3

A company is using AWS DataSync to migrate millions of files from an on-premises system to AWS. The files are 10 KB in size on average.

The company wants to use Amazon S3 for file storage. For the first year after the migration the files will be accessed once or twice and must be immediately available. After 1 year the files must be archived for at least 7 years.

Which solution will meet these requirements MOST cost-effectively?

A.
Use an archive tool to group the files into large objects. Use DataSync to migrate the objects. Store the objects in S3 Glacier Instant Retrieval for the first year. Use a lifecycle configuration to transition the files to S3 Glacier Deep Archive after 1 year with a retention period of 7 years.
B.
Use an archive tool to group the files into large objects. Use DataSync to copy the objects to S3 Standard-Infrequent Access (S3 Standard-IA). Use a lifecycle configuration to transition the files to S3 Glacier Instant Retrieval after 1 year with a retention period of 7 years.
C.
Configure the destination storage class for the files as S3 Glacier Instant Retrieval. Use a lifecycle policy to transition the files to S3 Glacier Flexible Retrieval after 1 year with a retention period of 7 years.
D.
Configure a DataSync task to transfer the files to S3 Standard-Infrequent Access (S3 Standard-IA). Use a lifecycle configuration to transition the files to S3 Glacier Deep Archive after 1 year with a retention period of 7 years.
Suggested answer: A
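
For illustration, a hedged sketch of the lifecycle rule described in the selected answer: objects land in S3 Glacier Instant Retrieval and transition to S3 Glacier Deep Archive after one year. The bucket name and rule ID are placeholders, and the 7-year retention would be enforced separately (for example with S3 Object Lock or an expiration rule):

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="migrated-files-archive",   # placeholder bucket name
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "deep-archive-after-1-year",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # apply to all objects
                    "Transitions": [
                        {"Days": 365, "StorageClass": "DEEP_ARCHIVE"}
                    ],
                }
            ]
        },
    )

Grouping the 10 KB files into large archive objects before migration also matters for cost: Glacier storage classes add per-object overhead and minimum storage charges, so millions of tiny objects would be far more expensive than a smaller number of large archives.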

A company sets up an organization in AWS Organizations that contains 10 AWS accounts. A solutions architect must design a solution to provide access to the accounts for several thousand employees. The company has an existing identity provider (IdP). The company wants to use the existing IdP for authentication to AWS.

Which solution will meet these requirements?

A.
Create IAM users for the employees in the required AWS accounts. Connect the IAM users to the existing IdP. Configure federated authentication for the IAM users.
B.
Set up AWS account root users with user email addresses and passwords that are synchronized from the existing IdP.
C.
Configure AWS IAM Identity Center. Connect IAM Identity Center to the existing IdP. Provision users and groups from the existing IdP.
D.
Use AWS Resource Access Manager (AWS RAM) to share access to the AWS accounts with the users in the existing IdP.
Suggested answer: C

Explanation:

AWS IAM Identity Center:

IAM Identity Center provides centralized access management for multiple AWS accounts within an organization and integrates seamlessly with existing identity providers (IdPs) through SAML 2.0 federation.

It allows users to authenticate using their existing IdP credentials and gain access to AWS resources without the need to create and manage separate IAM users in each account.

IAM Identity Center also simplifies provisioning and de-provisioning users, as it can automatically synchronize users and groups from the external IdP to AWS, ensuring secure and managed access.

Integration with Existing IdP:

The solution involves configuring IAM Identity Center to connect to the company's IdP using SAML. This setup allows employees to log in with their existing credentials, reducing the complexity of managing separate AWS credentials.

Once connected, IAM Identity Center handles authentication and authorization, granting users access to the AWS accounts based on their assigned roles and permissions.

Why the Other Options Are Incorrect:

Option A: Creating separate IAM users for each employee is not scalable or efficient. Managing thousands of IAM users across multiple AWS accounts introduces unnecessary complexity and operational overhead.

Option B: Using AWS root users with synchronized passwords is a security risk and goes against AWS best practices. Root accounts should never be used for day-to-day operations.

Option D: AWS Resource Access Manager (RAM) is used for sharing AWS resources between accounts, not for federating access for users across accounts. It doesn't provide a solution for authentication via an external IdP.

Reference

AWS IAM Identity Center

SAML 2.0 Integration with AWS IAM Identity Center

By setting up IAM Identity Center and connecting it to the existing IdP, the company can efficiently manage access for thousands of employees across multiple AWS accounts with a high degree of operational efficiency and security. Therefore, Option C is the best solution.

A company regularly uploads GB-sized files to Amazon S3. After the company uploads the files, the company uses a fleet of Amazon EC2 Spot Instances to transcode the file format. The company needs to scale throughput when the company uploads data from the on-premises data center to Amazon S3 and when the company downloads data from Amazon S3 to the EC2 instances.

Which solutions will meet these requirements? (Select TWO.)

A.
Use the S3 bucket access point instead of accessing the S3 bucket directly.
B.
Upload the files into multiple S3 buckets.
C.
Use S3 multipart uploads.
D.
Fetch multiple byte-ranges of an object in parallel.
E.
Add a random prefix to each object when uploading the files.
Suggested answer: C, D

Explanation:

Requirement Analysis: The company needs to scale throughput for uploading large files to S3 and downloading them to EC2 instances.

S3 Multipart Uploads: This method allows for the parallel upload of parts of a file, improving upload efficiency and reliability.

Parallel Fetching: Fetching multiple byte-ranges in parallel from S3 improves download performance.

Implementation:

For uploads, use the S3 multipart upload API to upload files in parallel.

For downloads, use the S3 API to request multiple byte-ranges concurrently.
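
A minimal boto3 sketch of both techniques (bucket name, key, and sizes are placeholders). The high-level transfer manager performs multipart uploads automatically once a file exceeds the configured threshold:

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")
    config = TransferConfig(
        multipart_threshold=64 * 1024 * 1024,  # switch to multipart above 64 MB
        multipart_chunksize=64 * 1024 * 1024,
        max_concurrency=10,                    # parts are uploaded in parallel
    )

    # Upload: upload_file transparently uses multipart uploads above the threshold.
    s3.upload_file("video.mov", "media-input-bucket", "video.mov", Config=config)

    # Download: fetch separate byte ranges of the same object so they can be read in parallel.
    first_half = s3.get_object(
        Bucket="media-input-bucket", Key="video.mov", Range="bytes=0-52428799"
    )["Body"].read()
    second_half = s3.get_object(
        Bucket="media-input-bucket", Key="video.mov", Range="bytes=52428800-"
    )["Body"].read()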

Conclusion: These solutions effectively scale throughput and improve the performance of both uploads and downloads.

Reference

S3 Multipart Upload: Amazon S3 Multipart Upload

Parallel Fetching: S3 Byte-Range Fetches

A company is creating a prototype of an ecommerce website on AWS. The website consists of an Application Load Balancer, an Auto Scaling group of Amazon EC2 instances for web servers, and an Amazon RDS for MySQL DB instance that runs with the Single-AZ configuration.

The website is slow to respond during searches of the product catalog. The product catalog is a group of tables in the MySQL database that the company does not update frequently. A solutions architect has determined that the CPU utilization on the DB instance is high when product catalog searches occur.

What should the solutions architect recommend to improve the performance of the website during searches of the product catalog?

A.
Migrate the product catalog to an Amazon Redshift database. Use the COPY command to load the product catalog tables.
B.
Implement an Amazon ElastiCache for Redis cluster to cache the product catalog. Use lazy loading to populate the cache.
C.
Add an additional scaling policy to the Auto Scaling group to launch additional EC2 instances when database response is slow.
D.
Turn on the Multi-AZ configuration for the DB instance. Configure the EC2 instances to throttle the product catalog queries that are sent to the database.
Suggested answer: B

Explanation:

Requirement Analysis: The product catalog search is causing high CPU utilization on the MySQL DB instance, slowing down the website.

ElastiCache Overview: Amazon ElastiCache for Redis can be used to cache frequently accessed data, reducing load on the database.

Lazy Loading: This caching strategy loads data into the cache only when it is requested, improving response times for repeated queries.

Implementation:

Set up an ElastiCache for Redis cluster.

Modify the application to check the cache before querying the database.

Use lazy loading to populate the cache on cache misses.
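
A minimal lazy-loading sketch using the redis-py client; the ElastiCache endpoint and the query_catalog_from_mysql helper are placeholders for the real cluster endpoint and the existing database query:

    import json
    import redis

    # Placeholder endpoint; use the ElastiCache cluster's endpoint in practice.
    cache = redis.Redis(host="my-catalog-cache.abc123.use1.cache.amazonaws.com", port=6379)

    def query_catalog_from_mysql(product_id):
        # Placeholder for the existing (expensive) RDS for MySQL query.
        return {"product_id": product_id, "name": "example", "price": 19.99}

    def get_product(product_id):
        key = f"catalog:{product_id}"
        cached = cache.get(key)
        if cached is not None:                           # cache hit: no database work
            return json.loads(cached)
        product = query_catalog_from_mysql(product_id)   # cache miss: query the database once
        cache.setex(key, 3600, json.dumps(product))      # lazy-load with a 1-hour TTL
        return product

Because the catalog changes infrequently, repeated searches are served from Redis and the DB instance's CPU is used only on cache misses.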

Conclusion: This approach reduces database load and improves website performance during product catalog searches.

Reference

Amazon ElastiCache: ElastiCache Documentation

Caching Strategies: ElastiCache Caching Strategies

A media company has a multi-account AWS environment in the us-east-1 Region. The company has an Amazon Simple Notification Service (Amazon SNS) topic in a production account that publishes performance metrics. The company has an AWS Lambda function in an administrator account to process and analyze log data.

The Lambda function that is in the administrator account must be invoked by messages from the SNS topic that is in the production account when significant metrics are reported.

Which combination of steps will meet these requirements? (Select TWO.)

A.
Create an IAM resource policy for the Lambda function that allows Amazon SNS to invoke the function. Implement an Amazon Simple Queue Service (Amazon SQS) queue in the administrator account to buffer messages from the SNS topic that is in the production account. Configure the SQS queue to invoke the Lambda function.
B.
Create an IAM policy for the SNS topic that allows the Lambda function to subscribe to the topic.
C.
Use an Amazon EventBridge rule in the production account to capture the SNS topic notifications. Configure the EventBridge rule to forward notifications to the Lambda function that is in the administrator account.
D.
Store performance metrics in an Amazon S3 bucket in the production account. Use Amazon Athena to analyze the metrics from the administrator account.
Suggested answer: A, B

Explanation:

Requirement Analysis: The Lambda function in the administrator account needs to process messages from an SNS topic in the production account.

IAM Policy for SNS Topic: Allows the Lambda function to subscribe and be invoked by the SNS topic.

SQS Queue for Buffering: Using an SQS queue provides reliable message delivery and buffering between SNS and Lambda, ensuring all messages are processed.

Implementation:

Create an SQS queue in the administrator account.

Set an IAM policy to allow the Lambda function to subscribe to and be invoked by the SNS topic.

Configure the SNS topic to send messages to the SQS queue.

Set up the SQS queue to trigger the Lambda function.
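
A hedged sketch of the cross-account wiring (all ARNs, account IDs, and names are placeholders): the queue policy lets the production topic deliver messages, the subscription connects the topic to the queue, and an event source mapping invokes the Lambda function. The SNS topic's own access policy must also allow the cross-account subscription.

    import json
    import boto3

    TOPIC_ARN = "arn:aws:sns:us-east-1:111111111111:performance-metrics"   # production account
    QUEUE_ARN = "arn:aws:sqs:us-east-1:222222222222:metrics-buffer"        # administrator account
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/222222222222/metrics-buffer"

    # 1) In the administrator account: allow the production topic to send to the queue.
    sqs = boto3.client("sqs", region_name="us-east-1")
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": QUEUE_ARN,
            "Condition": {"ArnEquals": {"aws:SourceArn": TOPIC_ARN}},
        }],
    }
    sqs.set_queue_attributes(QueueUrl=QUEUE_URL, Attributes={"Policy": json.dumps(policy)})

    # 2) Subscribe the queue to the topic.
    sns = boto3.client("sns", region_name="us-east-1")
    sns.subscribe(TopicArn=TOPIC_ARN, Protocol="sqs", Endpoint=QUEUE_ARN)

    # 3) Trigger the Lambda function from the queue.
    lambda_client = boto3.client("lambda", region_name="us-east-1")
    lambda_client.create_event_source_mapping(
        EventSourceArn=QUEUE_ARN,
        FunctionName="log-analyzer",   # placeholder function name
        BatchSize=10,
    )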

Conclusion: This solution ensures reliable message delivery and processing with appropriate permissions.

Reference

Amazon SNS: Amazon SNS Documentation

Amazon SQS: Amazon SQS Documentation

AWS Lambda: AWS Lambda Documentation

A company has an application that is running on Amazon EC2 instances. A solutions architect has standardized the company on a particular instance family and various instance sizes based on the current needs of the company.

The company wants to maximize cost savings for the application over the next 3 years. The company needs to be able to change the instance family and sizes in the next 6 months based on application popularity and usage.

Which solution will meet these requirements MOST cost-effectively?

A.
Compute Savings Plan
B.
EC2 Instance Savings Plan
C.
Zonal Reserved Instances
D.
Standard Reserved Instances
Suggested answer: A

Explanation:

Understanding the Requirement: The company wants to maximize cost savings for their application over the next three years, with the flexibility to change the instance family and sizes within the next six months based on application popularity and usage.

Analysis of Options:

Compute Savings Plan: This plan offers the most flexibility, allowing the company to change instance families, sizes, and regions. It applies to EC2, AWS Fargate, and AWS Lambda, offering significant cost savings with this flexibility.

EC2 Instance Savings Plan: This plan is less flexible than the Compute Savings Plan, as it only applies to EC2 instances and allows changes within a specific instance family.

Zonal Reserved Instances: These provide a discount on EC2 instances but are tied to a specific availability zone and instance type, offering the least flexibility.

Standard Reserved Instances: These offer discounts on EC2 instances but with more restrictions compared to Savings Plans, particularly when changing instance types and families.

Best Option for Flexibility and Savings:

The Compute Savings Plan is the most cost-effective solution because it allows the company to maintain flexibility while still achieving significant cost savings. This is critical for adapting to changing application demands without being locked into specific instance types or families.

Reference

AWS Savings Plans

EC2 Instance Types
