ExamGecko

Amazon SAA-C03 Practice Test - Questions Answers, Page 41

The customers of a finance company request appointments with financial advisors by sending text messages. A web application that runs on Amazon EC2 instances accepts the appointment requests. The text messages are published to an Amazon Simple Queue Service (Amazon SQS) queue through the web application. Another application that runs on EC2 instances then sends meeting invitations and meeting confirmation email messages to the customers. After successful scheduling, this application stores the meeting information in an Amazon DynamoDB database. As the company expands, customers report that their meeting invitations are taking longer to arrive.

What should a solutions architect recommend to resolve this issue?

A. Add a DynamoDB Accelerator (DAX) cluster in front of the DynamoDB database.
B. Add an Amazon API Gateway API in front of the web application that accepts the appointment requests.
C. Add an Amazon CloudFront distribution. Set the origin as the web application that accepts the appointment requests.
D. Add an Auto Scaling group for the application that sends meeting invitations. Configure the Auto Scaling group to scale based on the depth of the SQS queue.
Suggested answer: D

Explanation:

The meeting invitations are delayed because the application that consumes the SQS queue cannot keep up with the growing message volume; the web tier and DynamoDB are not the bottleneck. Placing the invitation-sending application in an Auto Scaling group that scales on the depth of the SQS queue (for example, the ApproximateNumberOfMessagesVisible metric) adds instances as appointment requests accumulate, so invitations are sent promptly as the company grows.
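The scaling signal for option D can be sketched with boto3, the AWS SDK for Python. AWS recommends scaling SQS consumers on a "backlog per instance" custom metric rather than raw queue depth; the queue URL, Auto Scaling group name, and metric namespace below are illustrative placeholders, not values from the question.

```python
def backlog_per_instance(visible_messages: int, running_instances: int) -> float:
    """Queue depth divided by worker count: the value a target tracking
    scaling policy can hold near the number of messages one instance can
    drain within the latency target."""
    return visible_messages / max(running_instances, 1)

def publish_backlog_metric(queue_url: str, asg_name: str) -> float:
    # boto3 is imported lazily so the pure helper above stays usable
    # without AWS credentials.
    import boto3

    sqs = boto3.client("sqs")
    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    visible = int(
        sqs.get_queue_attributes(
            QueueUrl=queue_url,
            AttributeNames=["ApproximateNumberOfMessagesVisible"],
        )["Attributes"]["ApproximateNumberOfMessagesVisible"]
    )
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[asg_name]
    )["AutoScalingGroups"][0]

    backlog = backlog_per_instance(visible, len(group["Instances"]))

    # Publish the custom metric for a target tracking policy to act on.
    cloudwatch.put_metric_data(
        Namespace="Custom/SQSScaling",  # placeholder namespace
        MetricData=[{"MetricName": "BacklogPerInstance", "Value": backlog}],
    )
    return backlog
```

A scheduled invocation (for example, every minute) keeps the metric fresh enough for the Auto Scaling group to react to bursts of appointment requests.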


An online retail company has more than 50 million active customers and receives more than 25,000 orders each day. The company collects purchase data for customers and stores this data in Amazon S3. Additional customer data is stored in Amazon RDS.

The company wants to make all the data available to various teams so that the teams can perform analytics. The solution must provide the ability to manage fine-grained permissions for the data and must minimize operational overhead. Which solution will meet these requirements?

A. Migrate the purchase data to write directly to Amazon RDS. Use RDS access controls to limit access.
B. Schedule an AWS Lambda function to periodically copy data from Amazon RDS to Amazon S3. Create an AWS Glue crawler. Use Amazon Athena to query the data. Use S3 policies to limit access.
C. Create a data lake by using AWS Lake Formation. Create an AWS Glue JDBC connection to Amazon RDS. Register the S3 bucket in Lake Formation. Use Lake Formation access controls to limit access.
D. Create an Amazon Redshift cluster. Schedule an AWS Lambda function to periodically copy data from Amazon S3 and Amazon RDS to Amazon Redshift. Use Amazon Redshift access controls to limit access.
Suggested answer: C

Explanation:

https://aws.amazon.com/blogs/big-data/manage-fine-grained-access-control-using-aws-lake-formation/

AWS Lake Formation builds a data lake over the existing S3 data and, through an AWS Glue JDBC connection, over the RDS data as well. Lake Formation then provides centralized, fine-grained (database-, table-, and column-level) access control for all teams, which satisfies the permissions requirement with minimal operational overhead.

A company has a three-tier environment on AWS that ingests sensor data from its users' devices. The traffic flows through a Network Load Balancer (NLB), then to Amazon EC2 instances for the web tier, and finally to EC2 instances for the application tier that makes database calls. What should a solutions architect do to improve the security of data in transit to the web tier?

A. Configure a TLS listener and add the server certificate on the NLB.
B. Configure AWS Shield Advanced and enable AWS WAF on the NLB.
C. Change the load balancer to an Application Load Balancer and attach AWS WAF to it.
D. Encrypt the Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instances using AWS Key Management Service (AWS KMS).
Suggested answer: A

Explanation:

Configuring a TLS listener with a server certificate on the Network Load Balancer encrypts the sensor data in transit between the users' devices and the web tier. AWS Shield Advanced, AWS WAF, and EBS encryption address DDoS protection, web exploits, and data at rest respectively, not data in transit.

An ecommerce company stores terabytes of customer data in the AWS Cloud. The data contains personally identifiable information (PII). The company wants to use the data in three applications. Only one of the applications needs to process the PII. The PII must be removed before the other two applications process the data. Which solution will meet these requirements with the LEAST operational overhead?

A. Store the data in an Amazon DynamoDB table. Create a proxy application layer to intercept and process the data that each application requests.
B. Store the data in an Amazon S3 bucket. Process and transform the data by using S3 Object Lambda before returning the data to the requesting application.
C. Process the data and store the transformed data in three separate Amazon S3 buckets so that each application has its own custom dataset. Point each application to its respective S3 bucket.
D. Process the data and store the transformed data in three separate Amazon DynamoDB tables so that each application has its own custom dataset. Point each application to its respective DynamoDB table.
Suggested answer: B

Explanation:

https://aws.amazon.com/blogs/aws/introducing-amazon-s3-object-lambda-use-your-code-to-process-data-as-it-is-being-retrieved-from-s3/

S3 Object Lambda is a feature of Amazon S3 that lets customers run their own code on data retrieved from S3 before it is returned to the application. With S3 Object Lambda, the data can be processed and transformed in real time, without storing multiple copies in separate S3 buckets or DynamoDB tables. In this case, code attached to S3 Object Lambda removes the PII before returning the data to the two applications that must not see it, while the one application that requires PII reads directly from the original S3 bucket. This is the simplest and most cost-effective solution because it avoids maintaining multiple copies of the same data, which would add storage costs and operational overhead.
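An S3 Object Lambda transformation function can be sketched as a Lambda handler like the one below. The PII field names are assumed for illustration (real schemas vary); the event shape (getObjectContext with a presigned input URL and a route/token pair) is what S3 Object Lambda supplies to the function.

```python
import json
import urllib.request

# Field names assumed for illustration; real PII schemas vary.
PII_FIELDS = {"name", "email", "phone", "address"}

def strip_pii(record: dict, pii_fields=frozenset(PII_FIELDS)) -> dict:
    """Drop top-level keys that carry PII before the data leaves the handler."""
    return {k: v for k, v in record.items() if k not in pii_fields}

def handler(event, context):
    # S3 Object Lambda supplies a presigned URL for the original object,
    # plus a route and token used to return the transformed bytes.
    ctx = event["getObjectContext"]
    with urllib.request.urlopen(ctx["inputS3Url"]) as resp:
        original = json.loads(resp.read())

    import boto3  # available in the AWS Lambda runtime
    boto3.client("s3").write_get_object_response(
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
        Body=json.dumps(strip_pii(original)).encode(),
    )
    return {"status_code": 200}
```

The two non-PII applications read through the Object Lambda access point and receive the redacted payload; the PII-processing application keeps reading the original bucket directly.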

A company uses Amazon API Gateway to run a private gateway with two REST APIs in the same VPC.

The BuyStock RESTful web service calls the CheckFunds RESTful web service to ensure that enough funds are available before a stock can be purchased. The company has noticed in the VPC flow logs that the BuyStock RESTful web service calls the CheckFunds RESTful web service over the internet instead of through the VPC. A solutions architect must implement a solution so that the APIs communicate through the VPC.

Which solution will meet these requirements with the FEWEST changes to the code?


A. Add an X-API-Key header in the HTTP header for authorization.
B. Use an interface endpoint.
C. Use a gateway endpoint.
D. Add an Amazon Simple Queue Service (Amazon SQS) queue between the two REST APIs.
Suggested answer: B

Explanation:

Using an interface endpoint allows the BuyStock RESTful web service to call the CheckFunds RESTful web service through the VPC without any changes to the code. An interface endpoint provisions an elastic network interface (ENI) with a private IP address in the company's VPC, and private DNS resolves the private API's hostname to that ENI, so the API calls stay inside the VPC. Gateway endpoints support only Amazon S3 and Amazon DynamoDB, and the other options do not change the network path at all.
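Creating the interface endpoint is a one-time infrastructure change, sketched here with boto3. The VPC, subnet, and security group IDs are placeholders; the service name for private API Gateway access follows the com.amazonaws.<region>.execute-api pattern.

```python
def execute_api_service_name(region: str) -> str:
    """Service name an interface endpoint targets for private API Gateway."""
    return f"com.amazonaws.{region}.execute-api"

def create_api_gateway_endpoint(region: str, vpc_id: str,
                                subnet_ids: list, sg_ids: list) -> str:
    import boto3  # AWS SDK for Python

    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId=vpc_id,                      # placeholder, e.g. "vpc-0abc..."
        ServiceName=execute_api_service_name(region),
        SubnetIds=subnet_ids,              # placeholder subnet IDs
        SecurityGroupIds=sg_ids,           # placeholder security group IDs
        # Private DNS lets the existing API hostnames resolve to the ENI,
        # which is why the calling code does not change.
        PrivateDnsEnabled=True,
    )
    return resp["VpcEndpoint"]["VpcEndpointId"]
```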

A solutions architect needs to optimize storage costs. The solutions architect must identify any Amazon S3 buckets that are no longer being accessed or are rarely accessed. Which solution will accomplish this goal with the LEAST operational overhead?

A. Analyze bucket access patterns by using the S3 Storage Lens dashboard for advanced activity metrics.
B. Analyze bucket access patterns by using the S3 dashboard in the AWS Management Console.
C. Turn on the Amazon CloudWatch BucketSizeBytes metric for buckets. Analyze bucket access patterns by using the metrics data with Amazon Athena.
D. Turn on AWS CloudTrail for S3 object monitoring. Analyze bucket access patterns by using CloudTrail logs that are integrated with Amazon CloudWatch Logs.
Suggested answer: A

Explanation:

S3 Storage Lens is a fully managed S3 storage analytics solution that provides a comprehensive view of object storage usage, activity trends, and recommendations to optimize costs. With advanced metrics enabled, the Storage Lens dashboard surfaces activity metrics (such as request counts) across all buckets, so buckets that are no longer accessed or are rarely accessed can be identified without building any log pipeline, which keeps operational overhead to a minimum.

A company has multiple AWS accounts that use consolidated billing. The company runs several active high-performance Amazon RDS for Oracle On-Demand DB instances for 90 days. The company's finance team has access to AWS Trusted Advisor in the consolidated billing account and all other AWS accounts.

The finance team needs to use the appropriate AWS account to access the Trusted Advisor check recommendations for RDS. The finance team must review the appropriate Trusted Advisor check to reduce RDS costs. Which combination of steps should the finance team take to meet these requirements? (Select TWO.)

A. Use the Trusted Advisor recommendations from the account where the RDS instances are running.
B. Use the Trusted Advisor recommendations from the consolidated billing account to see all RDS instance checks at the same time.
C. Review the Trusted Advisor check for Amazon RDS Reserved Instance Optimization.
D. Review the Trusted Advisor check for Amazon RDS Idle DB Instances.
E. Review the Trusted Advisor check for Amazon Redshift Reserved Node Optimization.
Suggested answer: B, C

Explanation:

Using the Trusted Advisor recommendations from the consolidated billing (management) account lets the finance team see the RDS checks for all linked accounts at the same time, because cost optimization checks are aggregated there. The Amazon RDS Reserved Instance Optimization check then recommends Reserved Instance purchases for the consistently running On-Demand DB instances, which is how the team can reduce RDS costs. Because the instances are active, the Idle DB Instances check would not flag them, and the Redshift check does not apply to RDS.
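Retrieving these checks programmatically can be sketched with boto3's support client (available on Business and Enterprise support plans; the Trusted Advisor API is served from us-east-1). The check names below are illustrative and should be confirmed in the console, since Trusted Advisor identifies checks by opaque IDs that must be looked up by name.

```python
def find_check_ids(checks: list, names: set) -> dict:
    """Map check name -> check ID for the checks the finance team needs."""
    return {c["name"]: c["id"] for c in checks if c["name"] in names}

def rds_cost_check_ids() -> dict:
    import boto3  # AWS SDK for Python

    # The AWS Support (Trusted Advisor) API endpoint lives in us-east-1.
    support = boto3.client("support", region_name="us-east-1")
    checks = support.describe_trusted_advisor_checks(language="en")["checks"]

    # Check names are assumptions; verify the exact strings in Trusted Advisor.
    return find_check_ids(checks, {
        "Amazon RDS Idle DB Instances",
        "Amazon Relational Database Service (RDS) Reserved Instance Optimization",
    })
```

Running this in the consolidated billing account returns the IDs needed to pull the aggregated check results with describe_trusted_advisor_check_result.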

A company is designing the network for an online multi-player game. The game uses the UDP networking protocol and will be deployed in eight AWS Regions. The network architecture needs to minimize latency and packet loss to give end users a high-quality gaming experience.

Which solution will meet these requirements?

A. Set up a transit gateway in each Region. Create inter-Region peering attachments between each transit gateway.
B. Set up AWS Global Accelerator with UDP listeners and endpoint groups in each Region.
C. Set up Amazon CloudFront with UDP turned on. Configure an origin in each Region.
D. Set up a VPC peering mesh between each Region. Turn on UDP for each VPC.
Suggested answer: B

Explanation:

AWS Global Accelerator is a networking service that improves the availability and performance of internet applications by routing user requests over the AWS global network to the nearest healthy endpoint. It supports UDP and provides faster, more reliable data transfers with lower latency and less packet loss. By setting up UDP listeners and endpoint groups in each of the eight Regions, Global Accelerator routes each player's traffic to the nearest Region for faster response times and a better gaming experience. CloudFront does not serve UDP game traffic, and transit gateway or VPC peering meshes connect VPCs to each other rather than accelerate player traffic from the internet.
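The Global Accelerator setup can be sketched with boto3. The accelerator name, port, and Region list are illustrative placeholders (the question's eight Regions are not enumerated); the Global Accelerator API itself is served from us-west-2.

```python
GAME_REGIONS = ["us-east-1", "eu-west-1", "ap-northeast-1"]  # sample subset

def endpoint_group_configs(regions: list) -> list:
    """One endpoint group per Region; Global Accelerator steers each player
    to the closest healthy group over the AWS backbone."""
    return [{"EndpointGroupRegion": r} for r in regions]

def build_udp_accelerator(regions: list, port: int = 7777) -> str:
    import boto3  # AWS SDK for Python

    # The Global Accelerator control-plane API lives in us-west-2.
    ga = boto3.client("globalaccelerator", region_name="us-west-2")
    acc = ga.create_accelerator(Name="game-accelerator")  # placeholder name
    listener = ga.create_listener(
        AcceleratorArn=acc["Accelerator"]["AcceleratorArn"],
        Protocol="UDP",
        PortRanges=[{"FromPort": port, "ToPort": port}],
    )
    for cfg in endpoint_group_configs(regions):
        ga.create_endpoint_group(
            ListenerArn=listener["Listener"]["ListenerArn"], **cfg
        )
    return acc["Accelerator"]["AcceleratorArn"]
```

Each endpoint group would then be populated with the Region's game-server NLB or EC2 endpoints.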

A company hosts a serverless application on AWS. The application uses Amazon API Gateway, AWS Lambda, and an Amazon RDS for PostgreSQL database. The company notices an increase in application errors that result from database connection timeouts during times of peak or unpredictable traffic. The company needs a solution that reduces the application failures with the least amount of change to the code. What should a solutions architect do to meet these requirements?

A. Reduce the Lambda concurrency rate.
B. Enable RDS Proxy on the RDS DB instance.
C. Resize the RDS DB instance class to accept more connections.
D. Migrate the database to Amazon DynamoDB with on-demand scaling.
Suggested answer: B

Explanation:

Using RDS Proxy, you can handle unpredictable surges in database traffic. Otherwise, these surges might cause issues due to oversubscribing connections or creating new connections at a fast rate. RDS Proxy establishes a database connection pool and reuses connections in this pool. This approach avoids the memory and CPU overhead of opening a new database connection each time. To protect the database against oversubscription, you can control the number of database connections that are created. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
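The "least amount of change to the code" point can be illustrated: once a proxy exists, the application only swaps the database hostname for the proxy endpoint. The hostnames, database name, and credentials below are placeholders, and psycopg2 stands in for whatever PostgreSQL driver the application already uses.

```python
def swap_db_host(conn_params: dict, proxy_endpoint: str) -> dict:
    """The only application change RDS Proxy requires: point the existing
    connection settings at the proxy endpoint instead of the DB instance."""
    return {**conn_params, "host": proxy_endpoint}

def lambda_handler(event, context):
    import psycopg2  # PostgreSQL driver, bundled with the function

    params = swap_db_host(
        {"host": "mydb.abc123.us-east-1.rds.amazonaws.com",  # placeholder
         "dbname": "appdb", "user": "app", "password": "***"},
        "my-proxy.proxy-abc123.us-east-1.rds.amazonaws.com",  # placeholder
    )
    # The proxy pools and reuses this connection across invocations, so
    # Lambda bursts no longer exhaust PostgreSQL's connection limit.
    conn = psycopg2.connect(**params)
    ...
```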

A hospital needs to store patient records in an Amazon S3 bucket. The hospital's compliance team must ensure that all protected health information (PHI) is encrypted in transit and at rest. The compliance team must administer the encryption key for data at rest.

Which solution will meet these requirements?

A. Create a public SSL/TLS certificate in AWS Certificate Manager (ACM). Associate the certificate with Amazon S3. Configure default encryption for each S3 bucket to use server-side encryption with AWS KMS keys (SSE-KMS). Assign the compliance team to manage the KMS keys.
B. Use the aws:SecureTransport condition on S3 bucket policies to allow only encrypted connections over HTTPS (TLS). Configure default encryption for each S3 bucket to use server-side encryption with S3 managed encryption keys (SSE-S3). Assign the compliance team to manage the SSE-S3 keys.
C. Use the aws:SecureTransport condition on S3 bucket policies to allow only encrypted connections over HTTPS (TLS). Configure default encryption for each S3 bucket to use server-side encryption with AWS KMS keys (SSE-KMS). Assign the compliance team to manage the KMS keys.
D. Use the aws:SecureTransport condition on S3 bucket policies to allow only encrypted connections over HTTPS (TLS). Use Amazon Macie to protect the sensitive data that is stored in Amazon S3. Assign the compliance team to manage Macie.
Suggested answer: C

Explanation:

Option C allows the compliance team to administer the KMS keys used for server-side encryption (SSE-KMS), giving them the required control over the key for data at rest, while the aws:SecureTransport condition in the bucket policy ensures that all connections to the S3 bucket use TLS, protecting the PHI in transit. SSE-S3 keys are managed entirely by Amazon S3 and cannot be administered by the compliance team, and Amazon Macie discovers sensitive data rather than encrypting it.
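The two controls can be sketched with boto3: a bucket policy that denies plain-HTTP requests, plus default SSE-KMS encryption with a customer-managed key. The bucket name and KMS key ARN are placeholders.

```python
import json

def secure_transport_policy(bucket: str) -> dict:
    """Bucket policy that denies any request arriving without TLS."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}",
                         f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }

def apply_phi_controls(bucket: str, kms_key_arn: str) -> None:
    import boto3  # AWS SDK for Python

    s3 = boto3.client("s3")
    # Encryption in transit: reject non-TLS connections.
    s3.put_bucket_policy(Bucket=bucket,
                         Policy=json.dumps(secure_transport_policy(bucket)))
    # Encryption at rest: default SSE-KMS with the compliance team's
    # customer-managed key (the key ARN is a placeholder).
    s3.put_bucket_encryption(
        Bucket=bucket,
        ServerSideEncryptionConfiguration={"Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": kms_key_arn,
            }
        }]},
    )
```

The compliance team's control comes from the KMS key policy on the customer-managed key, which they administer independently of the S3 configuration.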


Total 886 questions