ExamGecko

Amazon SAA-C03 Practice Test - Questions Answers, Page 84

A company runs its production workload on an Amazon Aurora MySQL DB cluster that includes six Aurora Replicas. The company wants near-real-time reporting queries from one of its departments to be automatically distributed across three of the Aurora Replicas. Those three replicas have a different compute and memory specification from the rest of the DB cluster.

Which solution meets these requirements?

A. Create and use a custom endpoint for the workload.

B. Create a three-node cluster clone and use the reader endpoint.

C. Use any of the instance endpoints for the selected three nodes.

D. Use the reader endpoint to automatically distribute the read-only workload.
Suggested answer: A

Explanation:

In Amazon Aurora, a custom endpoint is a feature that allows you to create a load-balanced endpoint that directs traffic to a specific set of instances in your Aurora DB cluster. This is particularly useful when you want to route traffic to a subset of instances that have different configurations or when you want to isolate specific workloads (e.g., reporting queries) to certain instances.

Custom Endpoint: The correct solution is to create a custom endpoint that includes the three Aurora Replicas that the department wants to use for near-real-time reporting. This custom endpoint will distribute the reporting queries only across the three selected replicas with the specified compute and memory configurations, ensuring that these queries do not affect the rest of the DB cluster.

Other Options:

Option B (Create a three-node cluster clone): A clone is a separate cluster with its own resources and a point-in-time copy of the data. It does not receive ongoing replication from the source cluster, so its data would not be near-real-time, and it incurs additional cost instead of leveraging the existing replicas.

Option C (Use any of the instance endpoints): This would involve manually managing connections to individual instances, which is not scalable or automatic.

Option D (Use the reader endpoint): The reader endpoint would distribute the read queries across all replicas in the cluster, not just the selected three. This would not meet the requirement to limit the reporting queries to only three specific replicas.

AWS Reference:

Amazon Aurora Endpoints - Provides detailed information on the different types of endpoints available in Aurora, including custom endpoints.

Custom Endpoints in Amazon Aurora - Specific documentation on how to create and use custom endpoints to direct traffic to selected instances in an Aurora cluster.
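As a hedged sketch, a custom endpoint of this kind could be created with the AWS SDK for Python (boto3). The cluster and replica identifiers below are assumptions, not values from the question, and the actual API call is left commented so the example runs without AWS credentials:

```python
# Sketch only: identifiers are hypothetical. The helper builds the request
# for rds.create_db_cluster_endpoint; a READER custom endpoint load-balances
# connections across only the instances listed in StaticMembers.

def custom_endpoint_request(cluster_id, endpoint_id, members):
    return {
        "DBClusterIdentifier": cluster_id,
        "DBClusterEndpointIdentifier": endpoint_id,
        "EndpointType": "READER",
        "StaticMembers": members,
    }

request = custom_endpoint_request(
    "prod-aurora-cluster",                    # hypothetical cluster ID
    "reporting",                              # name of the custom endpoint
    ["replica-4", "replica-5", "replica-6"],  # the three reporting replicas
)
# import boto3
# boto3.client("rds").create_db_cluster_endpoint(**request)
print(request["EndpointType"])  # READER
```

The resulting endpoint gets its own DNS name; the reporting department connects to it instead of the cluster-wide reader endpoint, so queries never land on the other three replicas.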

A company recently launched a new application for its customers. The application runs on multiple Amazon EC2 instances across two Availability Zones. End users use TCP to communicate with the application.

The application must be highly available and must automatically scale as the number of users increases.

Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)

A. Add a Network Load Balancer in front of the EC2 instances.

B. Configure an Auto Scaling group for the EC2 instances.

C. Add an Application Load Balancer in front of the EC2 instances.

D. Manually add more EC2 instances for the application.

E. Add a Gateway Load Balancer in front of the EC2 instances.
Suggested answer: A, B

Explanation:

For an application requiring TCP communication and high availability:

Network Load Balancer (NLB) is the best choice for load balancing TCP traffic because it is designed for handling high-throughput, low-latency connections.

Auto Scaling group ensures that the application can automatically scale based on demand, adding or removing EC2 instances as needed, which is crucial for handling user growth.

Option C (Application Load Balancer): An ALB operates at Layer 7 and load balances HTTP/HTTPS traffic; it cannot load balance the application's raw TCP traffic.

Option D (Manual scaling): Manually adding instances does not provide the automation or scalability required.

Option E (Gateway Load Balancer): GLB is used for third-party virtual appliances, not for direct application load balancing.

AWS Reference:

Network Load Balancer

Auto Scaling Group
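The two pieces connect directly: registering the Auto Scaling group with the NLB's target group means newly launched instances start receiving TCP traffic automatically. A hedged boto3 sketch, with hypothetical names, launch template ID, ARN, and sizes:

```python
# Sketch only: all identifiers below are assumptions. The helper builds the
# request for autoscaling.create_auto_scaling_group, attaching the group to
# an NLB target group so scaled-out instances are registered automatically.

def asg_request(name, launch_template_id, target_group_arn, subnet_ids):
    return {
        "AutoScalingGroupName": name,
        "LaunchTemplate": {"LaunchTemplateId": launch_template_id,
                           "Version": "$Latest"},
        "MinSize": 2,                               # at least one instance per AZ
        "MaxSize": 10,
        "TargetGroupARNs": [target_group_arn],      # the NLB's TCP target group
        "VPCZoneIdentifier": ",".join(subnet_ids),  # subnets in the two AZs
    }

request = asg_request(
    "web-asg",
    "lt-0abc123",
    "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/tcp/abc",
    ["subnet-0aaa111", "subnet-0bbb222"],
)
# boto3.client("autoscaling").create_auto_scaling_group(**request)
```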

A company needs to set up a centralized solution to audit API calls to AWS for workloads that run on AWS services and non AWS services. The company must store logs of the audits for 7 years.

Which solution will meet these requirements with the LEAST operational overhead?

A. Set up a data lake in Amazon S3. Incorporate AWS CloudTrail logs and logs from non-AWS services into the data lake. Use CloudTrail to store the logs for 7 years.

B. Configure custom integrations for AWS CloudTrail Lake to collect and store CloudTrail events from AWS services and non-AWS services. Use CloudTrail to store the logs for 7 years.

C. Enable AWS CloudTrail for AWS services. Ingest non-AWS services into CloudTrail to store the logs for 7 years.

D. Create new Amazon CloudWatch Logs groups. Send the audit data from non-AWS services to the CloudWatch Logs groups. Enable AWS CloudTrail for workloads that run on AWS. Use CloudTrail to store the logs for 7 years.
Suggested answer: B

Explanation:

AWS CloudTrail Lake is a fully managed service that allows the collection, storage, and querying of CloudTrail events for both AWS and non-AWS services. CloudTrail Lake can be customized to collect logs from various sources, ensuring a centralized audit solution. It also supports long-term storage, so logs can be retained for 7 years, meeting the compliance requirement.

Option A (Data Lake): Setting up a data lake in S3 introduces unnecessary operational complexity compared to CloudTrail Lake.

Option C (Ingest non-AWS services into CloudTrail): CloudTrail Lake is better suited for this task with less operational overhead.

Option D (CloudWatch Logs): While CloudWatch can store logs, CloudTrail Lake is specifically designed for API auditing and storage.

AWS Reference:

AWS CloudTrail Lake
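CloudTrail Lake retention is set in days on the event data store. A hedged boto3 sketch, with a hypothetical store name, showing the 7-year retention figure:

```python
# Sketch only: the store name is hypothetical. The helper builds the request
# for cloudtrail.create_event_data_store; RetentionPeriod is in days.

RETENTION_DAYS = 7 * 365 + 2   # 2,557 days, the usual 7-year figure incl. leap days

def event_data_store_request(name):
    return {
        "Name": name,
        "RetentionPeriod": RETENTION_DAYS,
        "MultiRegionEnabled": True,   # collect events from all Regions centrally
    }

request = event_data_store_request("central-audit-store")
# boto3.client("cloudtrail").create_event_data_store(**request)
print(request["RetentionPeriod"])  # 2557
```

Non-AWS sources then push their audit events into the same store through CloudTrail Lake's custom integrations (channels), keeping everything queryable in one place.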

A company needs to migrate a MySQL database from an on-premises data center to AWS within 2 weeks. The database is 180 TB in size. The company cannot partition the database.

The company wants to minimize downtime during the migration. The company's internet connection speed is 100 Mbps.

Which solution will meet these requirements?

A. Order an AWS Snowball Edge Storage Optimized device. Use AWS Database Migration Service (AWS DMS) and the AWS Schema Conversion Tool (AWS SCT) to migrate the database to Amazon RDS for MySQL and replicate ongoing changes. Send the Snowball Edge device back to AWS to finish the migration. Continue to replicate ongoing changes.

B. Establish an AWS Site-to-Site VPN connection between the data center and AWS. Use AWS Database Migration Service (AWS DMS) and the AWS Schema Conversion Tool (AWS SCT) to migrate the database to Amazon RDS for MySQL and replicate ongoing changes.

C. Establish a 10 Gbps dedicated AWS Direct Connect connection between the data center and AWS. Use AWS DataSync to replicate the database to Amazon S3. Create a script to import the data from Amazon S3 to a new Amazon RDS for MySQL database instance.

D. Use the company's existing internet connection. Use AWS DataSync to replicate the database to Amazon S3. Create a script to import the data from Amazon S3 to a new Amazon RDS for MySQL database instance.
Suggested answer: A

Explanation:

Given the large size (180 TB) of the database and the time constraint, AWS Snowball Edge Storage Optimized is the best solution. Snowball Edge allows for the physical transfer of large datasets to AWS efficiently without relying on slow internet connections. AWS DMS and SCT can be used to perform ongoing replication of any changes made during the migration, ensuring minimal downtime.

Option B (VPN): Using a 100 Mbps internet connection would take far too long to transfer 180 TB.

Option C (Direct Connect): Establishing a 10 Gbps Direct Connect link might not be feasible within the 2-week timeframe.

Option D (DataSync over internet): With the existing internet connection, DataSync would also take too long.
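A back-of-the-envelope calculation makes the network options concrete. It assumes the 100 Mbps link is fully utilized with no protocol overhead (real throughput would be lower, so the real transfer would take even longer):

```python
SIZE_TB = 180
LINK_MBPS = 100

bits = SIZE_TB * 1e12 * 8            # 180 TB expressed in bits (decimal TB)
seconds = bits / (LINK_MBPS * 1e6)   # seconds at 100 megabits per second
days = seconds / 86_400

print(round(days))   # about 167 days -- an order of magnitude over the 2-week window
```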

AWS Reference:

AWS Snowball Edge

AWS DMS

A company has multiple Amazon RDS DB instances that run in a development AWS account. All the instances have tags to identify them as development resources. The company needs the development DB instances to run on a schedule only during business hours.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create an Amazon CloudWatch alarm to identify RDS instances that need to be stopped. Create an AWS Lambda function to start and stop the RDS instances.

B. Create an AWS Trusted Advisor report to identify RDS instances to be started and stopped. Create an AWS Lambda function to start and stop the RDS instances.

C. Create AWS Systems Manager State Manager associations to start and stop the RDS instances.

D. Create an Amazon EventBridge rule that invokes AWS Lambda functions to start and stop the RDS instances.
Suggested answer: D

Explanation:

To run RDS instances only during business hours with the least operational overhead, you can use Amazon EventBridge to schedule events that invoke AWS Lambda functions. The Lambda functions can be configured to start and stop the RDS instances based on the specified schedule (business hours). EventBridge rules allow you to define recurring events easily, and Lambda functions provide a serverless way to manage RDS instance start and stop operations, reducing administrative overhead.

Option A: While CloudWatch alarms could be used, they are more suited for monitoring, and using Lambda with EventBridge is simpler.

Option B (Trusted Advisor): Trusted Advisor is not ideal for scheduling tasks.

Option C (Systems Manager): Systems Manager could also work, but EventBridge and Lambda offer a more streamlined and lower-overhead solution.

AWS Reference:

Amazon EventBridge Scheduler

AWS Lambda
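A hedged sketch of the "stop" Lambda's core logic follows; the tag key/value, instance names, and cron schedules are assumptions. An EventBridge rule such as cron(0 8 ? * MON-FRI *) would invoke a "start" function and cron(0 18 ? * MON-FRI *) a "stop" function:

```python
# Sketch only: tag key/value are hypothetical. The selector filters the
# output of rds.describe_db_instances down to development-tagged instances;
# the boto3 calls are commented so the example runs without credentials.

def select_dev_instances(instances, tag_key="environment", tag_value="development"):
    """Pick DB instance identifiers whose tags mark them as development."""
    selected = []
    for inst in instances:
        tags = {t["Key"]: t["Value"] for t in inst.get("TagList", [])}
        if tags.get(tag_key) == tag_value:
            selected.append(inst["DBInstanceIdentifier"])
    return selected

# Inside the Lambda handler:
# rds = boto3.client("rds")
# for db_id in select_dev_instances(rds.describe_db_instances()["DBInstances"]):
#     rds.stop_db_instance(DBInstanceIdentifier=db_id)

sample = [
    {"DBInstanceIdentifier": "dev-db-1",
     "TagList": [{"Key": "environment", "Value": "development"}]},
    {"DBInstanceIdentifier": "prod-db-1",
     "TagList": [{"Key": "environment", "Value": "production"}]},
]
print(select_dev_instances(sample))  # ['dev-db-1']
```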

A solutions architect needs to host a high performance computing (HPC) workload in the AWS Cloud. The workload will run on hundreds of Amazon EC2 instances and will require parallel access to a shared file system to enable distributed processing of large datasets. Datasets will be accessed across multiple instances simultaneously. The workload requires access latency within 1 ms. After processing has completed, engineers will need access to the dataset for manual postprocessing.

Which solution will meet these requirements?

A. Use Amazon Elastic File System (Amazon EFS) as a shared file system. Access the dataset from Amazon EFS.

B. Mount an Amazon S3 bucket to serve as the shared file system. Perform postprocessing directly from the S3 bucket.

C. Use Amazon FSx for Lustre as a shared file system. Link the file system to an Amazon S3 bucket for postprocessing.

D. Configure AWS Resource Access Manager to share an Amazon S3 bucket so that it can be mounted to all instances for processing and postprocessing.
Suggested answer: C

Explanation:

Amazon FSx for Lustre is the ideal solution for high-performance computing (HPC) workloads that require parallel access to a shared file system with low latency. FSx for Lustre is designed specifically to meet the needs of such workloads, offering sub-millisecond latencies, which makes it well-suited for the 1 ms latency requirement mentioned in the question.

Here is why FSx for Lustre is the best fit:

Parallel File System: FSx for Lustre is a parallel file system that can scale across hundreds of Amazon EC2 instances, providing high throughput and low-latency access to data. It is optimized for processing large datasets in parallel, which is essential for HPC workloads.

Low Latency: FSx for Lustre is capable of providing access latencies well within 1 ms, making it ideal for performance-sensitive workloads like HPC.

Seamless Integration with Amazon S3: FSx for Lustre can be linked to an Amazon S3 bucket. This integration allows data to be imported from S3 into FSx for Lustre before the workload begins and exported back to S3 after processing. This feature is crucial for manual postprocessing because it enables engineers to access the dataset in S3 after processing.

Performance: FSx for Lustre is built for workloads that require high performance, such as machine learning, analytics, media processing, and financial simulations, which are typical for HPC environments.

In contrast:

Amazon EFS (Option A): While EFS provides shared file storage and scales across multiple EC2 instances, it does not offer the same level of performance or sub-millisecond latencies as FSx for Lustre. EFS is more suited for general-purpose workloads, not high-performance computing.

Mounting S3 as a file system (Option B and D): S3 is object storage, not a file system designed for low-latency access and parallel processing. Mounting S3 buckets directly or using AWS Resource Access Manager to share the bucket would not meet the low-latency (1 ms) or performance requirements needed for HPC workloads.

Therefore, Amazon FSx for Lustre (Option C) is the most appropriate and verified solution for this scenario.

AWS Reference:

Amazon FSx for Lustre

Best Practices for High Performance Computing (HPC)

Amazon FSx and Amazon S3 Integration
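The S3 link is configured when the file system is created. A hedged boto3 sketch follows; the capacity, subnet, and bucket are assumptions, and ImportPath/ExportPath apply to the SCRATCH and PERSISTENT_1 deployment types (PERSISTENT_2 uses data repository associations instead):

```python
# Sketch only: identifiers are hypothetical. The helper builds the request
# for fsx.create_file_system with a Lustre file system linked to S3, so the
# dataset is lazy-loaded from the bucket and results can be exported back.

def lustre_request(subnet_id, s3_bucket):
    return {
        "FileSystemType": "LUSTRE",
        "StorageCapacity": 12_000,   # GiB, sized to hold the dataset (assumption)
        "SubnetIds": [subnet_id],
        "LustreConfiguration": {
            "DeploymentType": "SCRATCH_2",          # short-lived processing data
            "ImportPath": f"s3://{s3_bucket}",      # lazy-load objects from S3
            "ExportPath": f"s3://{s3_bucket}/out",  # export results for postprocessing
        },
    }

request = lustre_request("subnet-0abc123", "hpc-dataset-bucket")
# boto3.client("fsx").create_file_system(**request)
```

After the run, exporting to the linked bucket gives the engineers their dataset in S3 for manual postprocessing without keeping the file system around.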

A company is building a new furniture inventory application. The company has deployed the application on a fleet of Amazon EC2 instances across multiple Availability Zones. The EC2 instances run behind an Application Load Balancer (ALB) in their VPC.

A solutions architect has observed that incoming traffic seems to favor one EC2 instance, resulting in latency for some requests.

What should the solutions architect do to resolve this issue?

A. Disable session affinity (sticky sessions) on the ALB.

B. Replace the ALB with a Network Load Balancer.

C. Increase the number of EC2 instances in each Availability Zone.

D. Adjust the frequency of the health checks on the ALB's target group.
Suggested answer: A

Explanation:

The issue described in the question, where incoming traffic seems to favor one EC2 instance, is often caused by session affinity (also known as sticky sessions) being enabled on the Application Load Balancer (ALB). When session affinity is enabled, the ALB routes requests from the same client to the same EC2 instance. This can cause an imbalance in traffic distribution, leading to performance bottlenecks on certain instances while others remain underutilized.

To resolve this issue, disabling session affinity ensures that the ALB distributes incoming traffic evenly across all EC2 instances, allowing better load distribution and reducing latency. The ALB will rely on its round-robin or least outstanding requests algorithm (depending on the configuration) to distribute traffic more evenly across instances.

Option B (Network Load Balancer): The NLB is designed for Layer 4 (TCP) traffic and low latency use cases, but it is not needed here as the problem is with load balancing logic at the application layer (Layer 7). The ALB is more appropriate for HTTP/HTTPS traffic.

Option C (Increase EC2 Instances): Adding more EC2 instances does not solve the root issue of uneven traffic distribution.

Option D (Health Check Frequency): Adjusting health check frequency won't address the imbalance caused by session affinity.

AWS Reference:

Application Load Balancer Sticky Sessions
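Stickiness is a target group attribute, so disabling it is a one-call change. A hedged boto3 sketch, with a hypothetical target group ARN:

```python
# Sketch only: the ARN is hypothetical. Setting stickiness.enabled to
# "false" on the target group turns off session affinity, so the ALB
# spreads requests across all healthy targets again.

def disable_stickiness_request(target_group_arn):
    return {
        "TargetGroupArn": target_group_arn,
        "Attributes": [{"Key": "stickiness.enabled", "Value": "false"}],
    }

request = disable_stickiness_request(
    "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc"
)
# boto3.client("elbv2").modify_target_group_attributes(**request)
```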

A company hosts its multi-tier, public web application in the AWS Cloud. The web application runs on Amazon EC2 instances, and its database runs on Amazon RDS. The company is anticipating a large increase in sales during an upcoming holiday weekend. A solutions architect needs to build a solution to analyze the performance of the web application with a granularity of no more than 2 minutes.

What should the solutions architect do to meet this requirement?

A. Send Amazon CloudWatch logs to Amazon Redshift. Use Amazon QuickSight to perform further analysis.

B. Enable detailed monitoring on all EC2 instances. Use Amazon CloudWatch metrics to perform further analysis.

C. Create an AWS Lambda function to fetch EC2 logs from Amazon CloudWatch Logs. Use Amazon CloudWatch metrics to perform further analysis.

D. Send EC2 logs to Amazon S3. Use Amazon Redshift to fetch logs from the S3 bucket to process raw data for further analysis with Amazon QuickSight.
Suggested answer: B

Explanation:

To analyze the performance of the web application with granularity of no more than 2 minutes, enabling detailed monitoring on EC2 instances is the best solution. By default, CloudWatch provides metrics at a 5-minute interval. Enabling detailed monitoring allows you to collect metrics at 1-minute intervals, which will give you the level of granularity you need to analyze performance during peak traffic.

Amazon CloudWatch metrics can then be used to analyze CPU utilization, memory usage, disk I/O, and network throughput, among other performance-related metrics, at the desired granularity.

Option A: Sending CloudWatch logs to Redshift for analysis is unnecessary and overcomplicated for simple performance analysis, which can be done using CloudWatch metrics alone.

Option C: Fetching EC2 logs via Lambda adds complexity, and CloudWatch metrics already provide the required data for performance analysis.

Option D: Sending logs to S3 and using Redshift for analysis is also more complex than necessary for simple performance monitoring.

AWS Reference:

Monitoring Amazon EC2 with CloudWatch

Amazon CloudWatch Detailed Monitoring
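Once detailed monitoring is on (via ec2.monitor_instances), a 1-minute period can be requested when querying metrics. A hedged sketch with a hypothetical instance ID; the API calls are commented so the example runs without credentials:

```python
from datetime import datetime, timedelta, timezone

# Sketch only: the instance ID is an assumption. With detailed monitoring
# enabled, CloudWatch returns 1-minute data points (Period=60), which is
# well within the required 2-minute granularity.

def cpu_stats_request(instance_id, minutes=60):
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "StartTime": now - timedelta(minutes=minutes),
        "EndTime": now,
        "Period": 60,                        # 1-minute granularity
        "Statistics": ["Average", "Maximum"],
    }

request = cpu_stats_request("i-0abc123")
# boto3.client("ec2").monitor_instances(InstanceIds=["i-0abc123"])
# boto3.client("cloudwatch").get_metric_statistics(**request)
```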

A company runs its production workload on Amazon EC2 instances with Amazon Elastic Block Store (Amazon EBS) volumes. A solutions architect needs to analyze the current EBS volume cost and to recommend optimizations. The recommendations need to include estimated monthly saving opportunities.

Which solution will meet these requirements?

A. Use Amazon Inspector reporting to generate EBS volume recommendations for optimization.

B. Use AWS Systems Manager reporting to determine EBS volume recommendations for optimization.

C. Use Amazon CloudWatch metrics reporting to determine EBS volume recommendations for optimization.

D. Use AWS Compute Optimizer to generate EBS volume recommendations for optimization.
Suggested answer: D

Explanation:

AWS Compute Optimizer provides detailed recommendations for optimizing Amazon EBS volumes, including insights into underutilized volumes, inefficient configurations, and potential cost savings. It analyzes usage patterns and provides recommendations based on historical data, making it the ideal tool for this use case.

Option A (Amazon Inspector): Amazon Inspector is for security assessments, not for cost optimization.

Option B (Systems Manager): Systems Manager does not specifically provide EBS optimization recommendations.

Option C (CloudWatch): CloudWatch metrics help monitor usage but do not offer optimization recommendations like Compute Optimizer.

AWS Reference:

AWS Compute Optimizer
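The estimated monthly savings the question asks for appear per recommendation option in the response of Compute Optimizer's GetEBSVolumeRecommendations API. A hedged sketch that sums them; the exact response field names are an assumption based on that API's shape, and the sample data is fabricated for illustration:

```python
# Sketch only: field names follow Compute Optimizer's EBS recommendation
# response shape (assumption); sample records below are hypothetical.
# recommendations = boto3.client("compute-optimizer") \
#     .get_ebs_volume_recommendations()["volumeRecommendations"]

def total_monthly_savings(volume_recommendations):
    """Sum estimated monthly savings across top-ranked recommendation options."""
    total = 0.0
    for rec in volume_recommendations:
        options = rec.get("volumeRecommendationOptions", [])
        if options:  # options are ranked; take the top-ranked one
            savings = options[0].get("savingsOpportunity", {})
            total += savings.get("estimatedMonthlySavings", {}).get("value", 0.0)
    return total

sample = [
    {"volumeRecommendationOptions": [
        {"savingsOpportunity": {"estimatedMonthlySavings": {"value": 12.5}}}]},
    {"volumeRecommendationOptions": [
        {"savingsOpportunity": {"estimatedMonthlySavings": {"value": 7.5}}}]},
]
print(total_monthly_savings(sample))  # 20.0
```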

A solutions architect is designing an application that helps users fill out and submit registration forms. The solutions architect plans to use a two-tier architecture that includes a web application server tier and a worker tier.

The application needs to process submitted forms quickly. The application needs to process each form exactly once. The solution must ensure that no data is lost.

Which solution will meet these requirements?

A. Use an Amazon Simple Queue Service (Amazon SQS) FIFO queue between the web application server tier and the worker tier to store and forward form data.

B. Use an Amazon API Gateway HTTP API between the web application server tier and the worker tier to store and forward form data.

C. Use an Amazon Simple Queue Service (Amazon SQS) standard queue between the web application server tier and the worker tier to store and forward form data.

D. Use an AWS Step Functions workflow. Create a synchronous workflow between the web application server tier and the worker tier that stores and forwards form data.
Suggested answer: A

Explanation:

To process each form exactly once and ensure no data is lost, an Amazon SQS FIFO (First-In-First-Out) queue is the most appropriate solution. FIFO queues preserve the order in which messages are sent and, through message deduplication, provide exactly-once processing semantics. This ensures data consistency and reliability, both of which are crucial for processing user-submitted forms without data loss.

SQS acts as a buffer between the web application server and the worker tier, ensuring that submitted forms are stored reliably and forwarded to the worker tier for processing. This also decouples the application, improving its scalability and resilience.

Option B (API Gateway): API Gateway is better suited for API management rather than acting as a message queue for form processing.

Option C (SQS Standard Queue): While SQS Standard queues offer high throughput, they do not guarantee exactly-once processing or the strict ordering needed for this use case.

Option D (Step Functions): Step Functions are useful for orchestrating workflows but add unnecessary complexity for simple message queuing and form processing.

AWS Reference:

Amazon SQS FIFO Queues
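A hedged sketch of how the web tier would enqueue a form; the queue URL and message-group name are assumptions. FIFO queue names must end in ".fifo"; MessageGroupId defines the ordering scope, and MessageDeduplicationId lets SQS drop duplicates within the 5-minute deduplication window (alternatively, content-based deduplication can be enabled on the queue itself):

```python
import hashlib
import json

# Sketch only: queue URL and group name are hypothetical. Hashing the form
# body gives a stable deduplication ID, so a retried submit of the same form
# is delivered to the worker tier only once.

def fifo_message(queue_url, form):
    body = json.dumps(form, sort_keys=True)
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "MessageGroupId": "registration-forms",
        "MessageDeduplicationId": hashlib.sha256(body.encode()).hexdigest(),
    }

msg = fifo_message(
    "https://sqs.us-east-1.amazonaws.com/111122223333/forms.fifo",
    {"name": "Ada", "email": "ada@example.com"},
)
# boto3.client("sqs").send_message(**msg)
```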

Total 886 questions