Amazon SAA-C03 Practice Test - Questions Answers, Page 50

A company runs a website that uses a content management system (CMS) on Amazon EC2. The CMS runs on a single EC2 instance and uses an Amazon Aurora MySQL Multi-AZ DB instance for the data tier. Website images are stored on an Amazon Elastic Block Store (Amazon EBS) volume that is mounted inside the EC2 instance.

Which combination of actions should a solutions architect take to improve the performance and resilience of the website? (Select TWO.)

A.
Move the website images into an Amazon S3 bucket that is mounted on every EC2 instance.
B.
Share the website images by using an NFS share from the primary EC2 instance. Mount this share on the other EC2 instances.
C.
Move the website images onto an Amazon Elastic File System (Amazon EFS) file system that is mounted on every EC2 instance.
D.
Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to provision new instances behind an Application Load Balancer as part of an Auto Scaling group. Configure the Auto Scaling group to maintain a minimum of two instances. Configure an accelerator in AWS Global Accelerator for the website.
E.
Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to provision new instances behind an Application Load Balancer as part of an Auto Scaling group. Configure the Auto Scaling group to maintain a minimum of two instances. Configure an Amazon CloudFront distribution for the website.
Suggested answer: C, E

Explanation:

Option C moves the website images onto an Amazon EFS file system that is mounted on every EC2 instance. Amazon EFS is a scalable, fully managed file storage service that multiple EC2 instances can access concurrently, so all instances serve the images efficiently and consistently, which improves performance. In option E, the Auto Scaling group maintains a minimum of two instances behind an Application Load Balancer, which provides resilience by automatically replacing any unhealthy instances, and the Amazon CloudFront distribution further improves performance by caching content at edge locations closer to end users, reducing latency. Combined, these actions improve both the performance and the resilience of the website.
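
As a rough illustration of option C, the boto3 sketch below creates an EFS file system and one mount target per Availability Zone so that every instance in the Auto Scaling group can mount the same storage. The subnet and security group IDs are placeholders, not values from the question.

```python
import time

import boto3

efs = boto3.client("efs")

# Placeholder IDs -- substitute the private subnets and a security group
# that allows NFS (TCP 2049) from the web tier instances.
SUBNET_IDS = ["subnet-0example1", "subnet-0example2"]
SECURITY_GROUP_ID = "sg-0example"

# Create one shared, elastic file system for the website images.
fs = efs.create_file_system(
    CreationToken="cms-images",      # idempotency token
    PerformanceMode="generalPurpose",
    Encrypted=True,
    Tags=[{"Key": "Name", "Value": "cms-images"}],
)
fs_id = fs["FileSystemId"]

# Wait until the file system is available before adding mount targets.
while efs.describe_file_systems(FileSystemId=fs_id)["FileSystems"][0]["LifeCycleState"] != "available":
    time.sleep(5)

# One mount target per Availability Zone lets every instance in the
# Auto Scaling group mount the same file system.
for subnet_id in SUBNET_IDS:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=[SECURITY_GROUP_ID],
    )
```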

A company runs container applications by using Amazon Elastic Kubernetes Service (Amazon EKS) and the Kubernetes Horizontal Pod Autoscaler. The workload is not consistent throughout the day. A solutions architect notices that the number of nodes does not automatically scale out when the existing nodes have reached maximum capacity in the cluster, which causes performance issues.

Which solution will resolve this issue with the LEAST administrative overhead?

A.
Scale out the nodes by tracking the memory usage.
B.
Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.
C.
Use an AWS Lambda function to resize the EKS cluster automatically.
D.
Use an Amazon EC2 Auto Scaling group to distribute the workload.
Suggested answer: B

Explanation:

The Kubernetes Cluster Autoscaler is a component that automatically adjusts the number of nodes in your cluster when pods fail to schedule because of insufficient resources, or when nodes are underutilized and their pods can be rescheduled onto other nodes. It uses Auto Scaling groups to scale the nodes up or down according to the demand and capacity of your cluster.

By using the Kubernetes Cluster Autoscaler in your Amazon EKS cluster, you can achieve the following benefits:

You can improve the performance and availability of your container applications by ensuring that there are enough nodes to run your pods and that there are no idle nodes wasting resources.

You can reduce the administrative overhead of managing your cluster size manually or using custom scripts. The Cluster Autoscaler handles the scaling decisions and actions for you based on the metrics and events from your cluster.

You can leverage the integration of Amazon EKS and AWS Auto Scaling to optimize the cost and efficiency of your cluster. You can use features such as launch templates, mixed instances policies, and Spot Instances to customize your node configuration and save up to 90% on compute costs.
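
For illustration, Cluster Autoscaler's auto-discovery mode finds scalable node groups through two well-known tags on the Auto Scaling group. A minimal boto3 sketch, with hypothetical resource names, that applies those tags:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Placeholder names -- substitute your node group's Auto Scaling group
# and your EKS cluster name.
ASG_NAME = "eks-node-group-asg"
CLUSTER_NAME = "my-eks-cluster"

# Cluster Autoscaler's --node-group-auto-discovery mode looks for
# these tags to decide which Auto Scaling groups it may scale.
autoscaling.create_or_update_tags(
    Tags=[
        {
            "ResourceId": ASG_NAME,
            "ResourceType": "auto-scaling-group",
            "Key": "k8s.io/cluster-autoscaler/enabled",
            "Value": "true",
            "PropagateAtLaunch": False,
        },
        {
            "ResourceId": ASG_NAME,
            "ResourceType": "auto-scaling-group",
            "Key": f"k8s.io/cluster-autoscaler/{CLUSTER_NAME}",
            "Value": "owned",
            "PropagateAtLaunch": False,
        },
    ]
)
```

The autoscaler itself runs as a Deployment inside the cluster; the tags only make the node group discoverable to it.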

A company is running a microservices application on Amazon EC2 instances. The company wants to migrate the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for scalability. The company must configure the Amazon EKS control plane with endpoint private access set to true and endpoint public access set to false to maintain security compliance. The company must also put the data plane in private subnets. However, the company has received error notifications because the node cannot join the cluster.

Which solution will allow the node to join the cluster?

A.
Grant the required permission in AWS Identity and Access Management (IAM) to the AmazonEKSNodeRole IAM role.
B.
Create interface VPC endpoints to allow nodes to access the control plane.
C.
Recreate the nodes in the public subnet. Restrict security groups for the EC2 nodes.
D.
Allow outbound traffic in the security group of the nodes.
Suggested answer: B

Explanation:

Kubernetes API requests within your cluster's VPC (such as node to control plane communication) use the private VPC endpoint. https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html
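
As a sketch of option B, the interface endpoints that private nodes commonly need (per the Amazon EKS private cluster guidance) can be created with boto3. The IDs and Region below are placeholders, and the exact set of endpoints depends on what the workloads pull and call; an S3 gateway endpoint is also typically required for pulling image layers from Amazon ECR.

```python
import boto3

REGION = "us-east-1"
ec2 = boto3.client("ec2", region_name=REGION)

# Placeholder IDs -- substitute the cluster's VPC, private subnets,
# and a security group that allows HTTPS (TCP 443) from the nodes.
VPC_ID = "vpc-0example"
SUBNET_IDS = ["subnet-0example1", "subnet-0example2"]
SECURITY_GROUP_ID = "sg-0example"

# Interface endpoints commonly required when nodes run in private
# subnets with no route to the internet.
for service in ("ec2", "ecr.api", "ecr.dkr", "sts"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId=VPC_ID,
        ServiceName=f"com.amazonaws.{REGION}.{service}",
        SubnetIds=SUBNET_IDS,
        SecurityGroupIds=[SECURITY_GROUP_ID],
        PrivateDnsEnabled=True,
    )
```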

A company has a small Python application that processes JSON documents and outputs the results to an on-premises SQL database. The application runs thousands of times each day. The company wants to move the application to the AWS Cloud. The company needs a highly available solution that maximizes scalability and minimizes operational overhead.

Which solution will meet these requirements?

A.
Place the JSON documents in an Amazon S3 bucket. Run the Python code on multiple Amazon EC2 instances to process the documents. Store the results in an Amazon Aurora DB cluster.
B.
Place the JSON documents in an Amazon S3 bucket. Create an AWS Lambda function that runs the Python code to process the documents as they arrive in the S3 bucket. Store the results in an Amazon Aurora DB cluster.
C.
Place the JSON documents in an Amazon Elastic Block Store (Amazon EBS) volume. Use the EBS Multi-Attach feature to attach the volume to multiple Amazon EC2 instances. Run the Python code on the EC2 instances to process the documents. Store the results on an Amazon RDS DB instance.
D.
Place the JSON documents in an Amazon Simple Queue Service (Amazon SQS) queue as messages. Deploy the Python code as a container on an Amazon Elastic Container Service (Amazon ECS) cluster that is configured with the Amazon EC2 launch type. Use the container to process the SQS messages. Store the results on an Amazon RDS DB instance.
Suggested answer: B

Explanation:

By placing the JSON documents in an S3 bucket, the documents will be stored in a highly durable and scalable object storage service. The use of AWS Lambda allows the company to run their Python code to process the documents as they arrive in the S3 bucket without having to worry about the underlying infrastructure. This also allows for horizontal scalability, as AWS Lambda will automatically scale the number of instances of the function based on the incoming rate of requests. The results can be stored in an Amazon Aurora DB cluster, which is a fully-managed, high-performance database service that is compatible with MySQL and PostgreSQL. This will provide the necessary durability and scalability for the results of the processing.

https://aws.amazon.com/rds/aurora/
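
A minimal sketch of the Lambda side of option B, assuming the function is triggered by s3:ObjectCreated events and that a MySQL client library such as pymysql is packaged with the function or provided in a layer. The table, columns, and environment variable names are hypothetical.

```python
import json
import os

import boto3
import pymysql  # assumed to be bundled with the function or in a layer

s3 = boto3.client("s3")


def handler(event, context):
    """Triggered by s3:ObjectCreated; processes one JSON document."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    document = json.loads(body)

    # Placeholder summary logic -- replace with the real processing.
    summary = {"key": key, "field_count": len(document)}

    # Connection details come from environment variables (illustrative names).
    conn = pymysql.connect(
        host=os.environ["AURORA_HOST"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database=os.environ["DB_NAME"],
    )
    try:
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO results (s3_key, field_count) VALUES (%s, %s)",
                (summary["key"], summary["field_count"]),
            )
        conn.commit()
    finally:
        conn.close()
```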

A company needs to minimize the cost of its 1 Gbps AWS Direct Connect connection. The company's average connection utilization is less than 10%. A solutions architect must recommend a solution that will reduce the cost without compromising security.

Which solution will meet these requirements?

A.
Set up a new 1 Gbps Direct Connect connection. Share the connection with another AWS account.
B.
Set up a new 200 Mbps Direct Connect connection in the AWS Management Console.
C.
Contact an AWS Direct Connect Partner to order a 1 Gbps connection. Share the connection with another AWS account.
D.
Contact an AWS Direct Connect Partner to order a 200 Mbps hosted connection for an existing AWS account.
Suggested answer: D

Explanation:

The company needs a cheaper connection (200 Mbps). Option B is incorrect because dedicated Direct Connect connections can be ordered only at port speeds of 1, 10, or 100 Gbps. For more flexibility, a hosted connection from an AWS Direct Connect Partner supports port speeds between 50 Mbps and 10 Gbps. https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect.html
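
For illustration, once the Direct Connect Partner allocates the 200 Mbps hosted connection to the company's account, the company accepts it before it can be used. A boto3 sketch with a placeholder connection ID:

```python
import boto3

dx = boto3.client("directconnect")

# Placeholder ID -- the connection ID the Direct Connect Partner
# provides after allocating the hosted connection to your account.
CONNECTION_ID = "dxcon-example1"

# Accept the hosted connection so provisioning (and billing) can proceed.
dx.confirm_connection(connectionId=CONNECTION_ID)

# Verify the port speed and state of the accepted connection.
for conn in dx.describe_connections(connectionId=CONNECTION_ID)["connections"]:
    print(conn["connectionId"], conn["bandwidth"], conn["connectionState"])
```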

A company has migrated multiple Microsoft Windows Server workloads to Amazon EC2 instances that run in the us-west-1 Region. The company manually backs up the workloads to create an image as needed.

In the event of a natural disaster in the us-west-1 Region, the company wants to recover workloads quickly in the us-west-2 Region. The company wants no more than 24 hours of data loss on the EC2 instances. The company also wants to automate any backups of the EC2 instances.

Which solutions will meet these requirements with the LEAST administrative effort? (Select TWO.)

A.
Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup based on tags. Schedule the backup to run twice daily. Copy the image on demand.
B.
Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup based on tags. Schedule the backup to run twice daily. Configure the copy to the us-west-2 Region.
C.
Create backup vaults in us-west-1 and in us-west-2 by using AWS Backup. Create a backup plan for the EC2 instances based on tag values. Create an AWS Lambda function to run as a scheduled job to copy the backup data to us-west-2.
D.
Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the EC2 instances based on tag values. Define the destination for the copy as us-west-2. Specify the backup schedule to run twice daily.
E.
Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the EC2 instances based on tag values. Specify the backup schedule to run twice daily. Copy on demand to us-west-2.
Suggested answer: B, D

Explanation:

Option B suggests using an EC2-backed Amazon Machine Image (AMI) lifecycle policy to automate the backup process. By configuring the policy to run twice daily and specifying the copy to the us-west-2 Region, the company can ensure regular backups are created and copied to the alternate region. Option D proposes using AWS Backup, which provides a centralized backup management solution. By creating a backup vault and backup plan based on tag values, the company can automate the backup process for the EC2 instances. The backup schedule can be set to run twice daily, and the destination for the copy can be defined as the us-west-2 Region.

Both options automate the backup process and include copying the backups to the us-west-2 Region, ensuring data resilience in the event of a disaster. These solutions minimize administrative effort by leveraging automated backup and copy mechanisms provided by AWS services.
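
A sketch of option D with boto3, using hypothetical vault names, account ID, and tag key. The rule runs twice daily (meeting the 24-hour RPO with margin) and copies each recovery point to a vault in us-west-2 automatically.

```python
import boto3

backup = boto3.client("backup", region_name="us-west-1")

# Placeholder names and ARNs -- substitute your vaults and IAM role.
SOURCE_VAULT = "ec2-backup-vault"
DEST_VAULT_ARN = "arn:aws:backup:us-west-2:111122223333:backup-vault:ec2-backup-vault"
BACKUP_ROLE_ARN = "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole"

# A plan that runs at 00:00 and 12:00 UTC and copies each recovery
# point to the destination vault in us-west-2.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "ec2-twice-daily",
        "Rules": [
            {
                "RuleName": "twice-daily-with-copy",
                "TargetBackupVaultName": SOURCE_VAULT,
                "ScheduleExpression": "cron(0 0,12 * * ? *)",
                "Lifecycle": {"DeleteAfterDays": 35},
                "CopyActions": [{"DestinationBackupVaultArn": DEST_VAULT_ARN}],
            }
        ],
    }
)

# Select EC2 resources by tag value, as the scenario describes.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-ec2",
        "IamRoleArn": BACKUP_ROLE_ARN,
        "ListOfTags": [
            {
                "ConditionType": "STRINGEQUALS",
                "ConditionKey": "backup",   # hypothetical tag key
                "ConditionValue": "true",
            }
        ],
    },
)
```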

A serverless application uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. The Lambda function needs permissions to read and write to the DynamoDB table.

Which solution will give the Lambda function access to the DynamoDB table MOST securely?

A.
Create an IAM user with programmatic access to the Lambda function. Attach a policy to the user that allows read and write access to the DynamoDB table. Store the access_key_id and secret_access_key parameters as part of the Lambda environment variables. Ensure that other AWS users do not have read and write access to the Lambda function configuration.
B.
Create an IAM role that includes Lambda as a trusted service. Attach a policy to the role that allows read and write access to the DynamoDB table. Update the configuration of the Lambda function to use the new role as the execution role.
C.
Create an IAM user with programmatic access to the Lambda function. Attach a policy to the user that allows read and write access to the DynamoDB table. Store the access_key_id and secret_access_key parameters in AWS Systems Manager Parameter Store as secure string parameters. Update the Lambda function code to retrieve the secure string parameters before connecting to the DynamoDB table.
D.
Create an IAM role that includes DynamoDB as a trusted service. Attach a policy to the role that allows read and write access from the Lambda function. Update the code of the Lambda function to attach to the new role as an execution role.
Suggested answer: B

Explanation:

Option B suggests creating an IAM role that includes Lambda as a trusted service, meaning the role is specifically designed for Lambda functions. The role should have a policy attached to it that grants the required read and write access to the DynamoDB table.
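
A minimal boto3 sketch of option B, with hypothetical role, table, and account identifiers. The trust policy names the Lambda service as the principal, and an inline least-privilege policy grants access to the one table the function uses.

```python
import json

import boto3

iam = boto3.client("iam")

# Trust policy that lets the Lambda service assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

role = iam.create_role(
    RoleName="lambda-dynamodb-role",  # hypothetical name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Least-privilege inline policy scoped to the one table.
table_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:Query",
            ],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/my-table",
        }
    ],
}

iam.put_role_policy(
    RoleName="lambda-dynamodb-role",
    PolicyName="table-read-write",
    PolicyDocument=json.dumps(table_policy),
)

# The function then uses this role as its execution role.
boto3.client("lambda").update_function_configuration(
    FunctionName="my-function",  # hypothetical
    Role=role["Role"]["Arn"],
)
```

No long-lived access keys are created or stored anywhere, which is what makes this the most secure of the four options.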

A company operates a two-tier application for image processing. The application uses two Availability Zones, each with one public subnet and one private subnet. An Application Load Balancer (ALB) for the web tier uses the public subnets. Amazon EC2 instances for the application tier use the private subnets.

Users report that the application is running more slowly than expected. A security audit of the web server log files shows that the application is receiving millions of illegitimate requests from a small number of IP addresses. A solutions architect needs to resolve the immediate performance problem while the company investigates a more permanent solution.

What should the solutions architect recommend to meet this requirement?

A.
Modify the inbound security group for the web tier. Add a deny rule for the IP addresses that are consuming resources.
B.
Modify the network ACL for the web tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources.
C.
Modify the inbound security group for the application tier. Add a deny rule for the IP addresses that are consuming resources.
D.
Modify the network ACL for the application tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources.
Suggested answer: B

Explanation:

Block the requests at the first entry point, the public subnets, so the illegitimate traffic never crosses into the private subnets.

In this scenario, the security audit reveals that the application is receiving millions of illegitimate requests from a small number of IP addresses. Modifying the network ACL for the web tier subnets and adding an inbound deny rule that targets those IP addresses blocks the illegitimate traffic at the subnet level, before it reaches the web servers. Security groups (options A and C) cannot be used this way because they support only allow rules, not deny rules. Blocking the traffic at the web tier relieves the excessive load and improves the application's performance.
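
As an illustration of option B, a deny entry can be added to the web tier's network ACL with boto3. The ACL ID and source address below are placeholders; the deny rule's number must be lower than the existing allow rules so it is evaluated first.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder values -- the web tier subnets' network ACL and one
# offending source address from the audit.
NACL_ID = "acl-0example"
BAD_SOURCE = "203.0.113.10/32"

# NACL rules are evaluated in ascending rule-number order, so this
# deny must be numbered below the existing allow rules.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=90,
    Protocol="-1",   # all protocols
    RuleAction="deny",
    Egress=False,    # inbound rule
    CidrBlock=BAD_SOURCE,
)
```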

An IoT company is releasing a mattress that has sensors to collect data about a user's sleep. The sensors will send data to an Amazon S3 bucket. The sensors collect approximately 2 MB of data every night for each mattress. The company must process and summarize the data for each mattress. The results need to be available as soon as possible. Data processing will require 1 GB of memory and will finish within 30 seconds.

Which solution will meet these requirements MOST cost-effectively?

A.
Use AWS Glue with a Scala job.
B.
Use Amazon EMR with an Apache Spark script.
C.
Use AWS Lambda with a Python script.
D.
Use AWS Glue with a PySpark job.
Suggested answer: C

Explanation:

AWS Lambda charges are based on the number of invocations and the execution time of the function. The job's requirements (1 GB of memory, finishing within 30 seconds) fit comfortably within Lambda's limits, and the input is small (about 2 MB per mattress), so Lambda is the most cost-effective choice: the company pays only for actual usage, with no infrastructure to provision or maintain, and an S3 event trigger makes the results available as soon as the data arrives.
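
For illustration, sizing the function to the stated requirements is a one-call configuration change in boto3 (the function name is hypothetical):

```python
import boto3

lambda_client = boto3.client("lambda")

# Size the function for the stated workload: 1 GB of memory and a
# timeout comfortably above the ~30-second processing time.
lambda_client.update_function_configuration(
    FunctionName="summarize-sleep-data",  # hypothetical function name
    MemorySize=1024,  # MB
    Timeout=60,       # seconds
)
```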

A company wants to build a web application on AWS. Client access requests to the website are not predictable and can be idle for a long time. Only customers who have paid a subscription fee should be able to sign in and use the web application.

Which combination of steps will meet these requirements MOST cost-effectively? (Select THREE.)

A.
Create an AWS Lambda function to retrieve user information from Amazon DynamoDB. Create an Amazon API Gateway endpoint to accept RESTful APIs. Send the API calls to the Lambda function.
B.
Create an Amazon Elastic Container Service (Amazon ECS) service behind an Application Load Balancer to retrieve user information from Amazon RDS. Create an Amazon API Gateway endpoint to accept RESTful APIs. Send the API calls to the Lambda function.
C.
Create an Amazon Cognito user pool to authenticate users.
D.
Create an Amazon Cognito identity pool to authenticate users.
E.
Use AWS Amplify to serve the frontend web content with HTML, CSS, and JS. Use an integrated Amazon CloudFront configuration.
F.
Use Amazon S3 static web hosting with PHP, CSS, and JS. Use Amazon CloudFront to serve the frontend web content.
Suggested answer: A, C, E

Explanation:

Option A's Lambda function behind an Amazon API Gateway endpoint incurs no compute cost while the application is idle, option C's Amazon Cognito user pool handles sign-in for paying subscribers, and option E's AWS Amplify hosting with an integrated Amazon CloudFront configuration serves the static frontend without servers. Option F does not work because Amazon S3 static website hosting cannot run server-side PHP.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
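
A sketch of the Cognito piece (option C) in boto3, with illustrative names; the Amplify frontend would then authenticate against this pool and app client:

```python
import boto3

cognito = boto3.client("cognito-idp")

# A user pool for subscriber sign-in (names are illustrative).
pool = cognito.create_user_pool(
    PoolName="paid-subscribers",
    AutoVerifiedAttributes=["email"],
)

# An app client the web frontend uses to authenticate against the pool.
client = cognito.create_user_pool_client(
    UserPoolId=pool["UserPool"]["Id"],
    ClientName="web-app",
    GenerateSecret=False,  # public browser clients cannot keep a secret
)
```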
