Amazon SAA-C03 Practice Test - Questions Answers, Page 36

A company is developing a new mobile app. The company must implement proper traffic filtering to protect its Application Load Balancer (ALB) against common application-level attacks, such as cross-site scripting or SQL injection. The company has minimal infrastructure and operational staff. The company needs to reduce its share of the responsibility in managing, updating, and securing servers for its AWS environment. What should a solutions architect recommend to meet these requirements?

A. Configure AWS WAF rules and associate them with the ALB.
B. Deploy the application using Amazon S3 with public hosting enabled.
C. Deploy AWS Shield Advanced and add the ALB as a protected resource.
D. Create a new ALB that directs traffic to an Amazon EC2 instance running a third-party firewall, which then passes the traffic to the current ALB.

Suggested answer: A

Explanation:

A solutions architect should recommend option A, which is to configure AWS WAF rules and associate them with the ALB. This will allow the company to apply traffic filtering at the application layer, which is necessary for protecting the ALB against common application-level attacks such as cross-site scripting or SQL injection. AWS WAF is a managed service that makes it easy to protect web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. The company can easily manage and update the rules to ensure the security of its application.
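
As a minimal illustration (not part of the exam answer), the association is a single WAFv2 API call; the ARNs and Region below are placeholders, and the sketch assumes a REGIONAL-scope web ACL containing managed rule groups such as AWSManagedRulesCommonRuleSet (which covers cross-site scripting) and AWSManagedRulesSQLiRuleSet already exists:

```python
import boto3

# WAFv2 is the current AWS WAF API; ALBs require a web ACL with REGIONAL scope.
wafv2 = boto3.client("wafv2", region_name="us-east-1")  # Region is an assumption

# Placeholder ARNs -- substitute the real web ACL and ALB ARNs.
web_acl_arn = "arn:aws:wafv2:us-east-1:123456789012:regional/webacl/app-acl/EXAMPLE"
alb_arn = ("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
           "loadbalancer/app/my-alb/EXAMPLE")

# Attach the web ACL to the ALB so WAF filters traffic before it reaches the app.
wafv2.associate_web_acl(WebACLArn=web_acl_arn, ResourceArn=alb_arn)
```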

A solutions architect needs to design a system to store client case files. The files are core company assets and are important. The number of files will grow over time. The files must be simultaneously accessible from multiple application servers that run on Amazon EC2 instances. The solution must have built-in redundancy. Which solution meets these requirements?

A. Amazon Elastic File System (Amazon EFS)
B. Amazon Elastic Block Store (Amazon EBS)
C. Amazon S3 Glacier Deep Archive
D. AWS Backup

Suggested answer: A

Explanation:

Amazon EFS provides a simple, scalable, fully managed file system that can be mounted by multiple EC2 instances at the same time and has built-in redundancy. It is designed to be highly available, durable, and secure, scales to petabytes of data, handles thousands of concurrent connections, and is a cost-effective way to store and access a growing set of shared files.
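
A minimal boto3 sketch of the setup, assuming two private subnets and a security group that already allows NFS (TCP 2049); all IDs are placeholders:

```python
import boto3

efs = boto3.client("efs", region_name="us-east-1")  # Region is an assumption

# Create the file system; EFS stores data redundantly across multiple
# Availability Zones by default.
fs = efs.create_file_system(
    CreationToken="case-files-fs",    # idempotency token (placeholder)
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# One mount target per Availability Zone lets EC2 instances in each AZ
# mount the same file system over NFS.
for subnet_id in ["subnet-0aaa1111bbbb2222c", "subnet-0ddd3333eeee4444f"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )
```

Each EC2 instance can then mount the file system over NFS (for example with the amazon-efs-utils mount helper) and see the same files.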

An Amazon EC2 instance is located in a private subnet in a new VPC. This subnet does not have outbound internet access, but the EC2 instance needs the ability to download monthly security updates from an outside vendor. What should a solutions architect do to meet these requirements?

A. Create an internet gateway, and attach it to the VPC. Configure the private subnet route table to use the internet gateway as the default route.
B. Create a NAT gateway, and place it in a public subnet. Configure the private subnet route table to use the NAT gateway as the default route.
C. Create a NAT instance, and place it in the same subnet where the EC2 instance is located. Configure the private subnet route table to use the NAT instance as the default route.
D. Create an internet gateway, and attach it to the VPC. Create a NAT instance, and place it in the same subnet where the EC2 instance is located. Configure the private subnet route table to use the internet gateway as the default route.

Suggested answer: B

Explanation:

Placing a NAT gateway in a public subnet and making it the default route for the private subnet allows the EC2 instance to initiate outbound connections to the internet and download the monthly security updates while remaining in the private subnet. All outbound traffic from the private subnet is directed through the NAT gateway, so the instance stays unreachable from the internet and the security of the private subnet is preserved.
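
A hedged sketch of the steps with boto3; the subnet and route table IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # Region is an assumption

# Allocate an Elastic IP for the NAT gateway.
eip = ec2.allocate_address(Domain="vpc")

# Create the NAT gateway in a PUBLIC subnet (placeholder ID).
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111bbbb2222c",
    AllocationId=eip["AllocationId"],
)
nat_gw_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before routing through it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gw_id])

# Make the NAT gateway the default route of the PRIVATE subnet's route table.
ec2.create_route(
    RouteTableId="rtb-0ddd3333eeee4444f",  # placeholder
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gw_id,
)
```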

A company has a web server running on an Amazon EC2 instance in a public subnet with an Elastic IP address. The default security group is assigned to the EC2 instance. The default network ACL has been modified to block all traffic. A solutions architect needs to make the web server accessible from everywhere on port 443. Which combination of steps will accomplish this task? (Choose two.)

A. Create a security group with a rule to allow TCP port 443 from source 0.0.0.0/0.
B. Create a security group with a rule to allow TCP port 443 to destination 0.0.0.0/0.
C. Update the network ACL to allow TCP port 443 from source 0.0.0.0/0.
D. Update the network ACL to allow inbound/outbound TCP port 443 from source 0.0.0.0/0 and to destination 0.0.0.0/0.
E. Update the network ACL to allow inbound TCP port 443 from source 0.0.0.0/0 and outbound TCP port 32768-65535 to destination 0.0.0.0/0.

Suggested answer: A, C

Explanation:

Traffic on TCP port 443 must be permitted at both layers: the security group attached to the instance, which needs an inbound rule allowing port 443 from 0.0.0.0/0 (A), and the subnet's network ACL, which was modified to block all traffic and therefore needs an inbound allow rule for port 443 (C). With both rules in place, the web server is reachable from everywhere on port 443.
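
A minimal sketch of the two rules in boto3, with placeholder IDs; protocol "6" is TCP in the network ACL API:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # Region is an assumption

sg_id = "sg-0123456789abcdef0"     # placeholder: the new security group
nacl_id = "acl-0123456789abcdef0"  # placeholder: the modified network ACL

# (A) Security group: allow inbound HTTPS from anywhere. Security groups
# are stateful, so return traffic is allowed automatically.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# (C) Network ACL: allow inbound TCP 443 from anywhere.
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id,
    RuleNumber=100,
    Protocol="6",          # TCP
    RuleAction="allow",
    Egress=False,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)
```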

An online learning company is migrating to the AWS Cloud. The company maintains its student records in a PostgreSQL database. The company needs a solution in which its data is available and online across multiple AWS Regions at all times.

Which solution will meet these requirements with the LEAST amount of operational overhead?

A. Migrate the PostgreSQL database to a PostgreSQL cluster on Amazon EC2 instances.
B. Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance with the Multi-AZ feature turned on.
C. Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance. Create a read replica in another Region.
D. Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance. Set up DB snapshots to be copied to another Region.

Suggested answer: C

Explanation:

" online across multiple AWS Regions at all times". Currently only Read Replica supports crossregions , Multi-AZ does not support cross-region (it works only in same region) https://aws.amazon.com/about-aws/whats-new/2018/01/amazon-rds-read-replicas-now-supportmulti-az-deployments/

A solutions architect observes that a nightly batch processing job is automatically scaled up for 1 hour before the desired Amazon EC2 capacity is reached. The peak capacity is the same every night, and the batch jobs always start at 1 AM. The solutions architect needs to find a cost-effective solution that will allow for the desired EC2 capacity to be reached quickly and allow the Auto Scaling group to scale down after the batch jobs are complete. What should the solutions architect do to meet these requirements?

A. Increase the minimum capacity for the Auto Scaling group.
B. Increase the maximum capacity for the Auto Scaling group.
C. Configure scheduled scaling to scale up to the desired compute level.
D. Change the scaling policy to add more EC2 instances during each scaling operation.

Suggested answer: C

Explanation:

By configuring scheduled scaling, the solutions architect can set the Auto Scaling group to automatically scale up to the desired compute level at a specific time (shortly before 1 AM, when the batch jobs start) and then automatically scale down after the jobs are complete. This allows the desired EC2 capacity to be reached quickly while avoiding the cost of idle instances outside the batch window.
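
A sketch of the two scheduled actions in boto3; the group name, capacities, and times are assumptions (recurrence expressions are cron syntax, evaluated in UTC by default):

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # assumption

# Scale out shortly before the 1 AM batch window starts.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="batch-asg",          # placeholder
    ScheduledActionName="scale-up-for-batch",
    Recurrence="45 0 * * *",                   # 00:45 daily (assumed lead time)
    MinSize=10, MaxSize=10, DesiredCapacity=10,  # placeholder peak capacity
)

# Scale back in after the jobs complete (assumed ~3 AM here).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="batch-asg",
    ScheduledActionName="scale-down-after-batch",
    Recurrence="0 3 * * *",
    MinSize=0, MaxSize=10, DesiredCapacity=0,
)
```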

A company’s security team requests that network traffic be captured in VPC Flow Logs. The logs will be frequently accessed for 90 days and then accessed intermittently. What should a solutions architect do to meet these requirements when configuring the logs?

A. Use Amazon CloudWatch as the target. Set the CloudWatch log group with an expiration of 90 days.
B. Use Amazon Kinesis as the target. Configure the Kinesis stream to always retain the logs for 90 days.
C. Use AWS CloudTrail as the target. Configure CloudTrail to save to an Amazon S3 bucket, and enable S3 Intelligent-Tiering.
D. Use Amazon S3 as the target. Enable an S3 Lifecycle policy to transition the logs to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.

Suggested answer: D

Explanation:

VPC Flow Logs can publish directly to Amazon S3; the logs do not need to be routed through CloudTrail or CloudWatch. An S3 Lifecycle policy can then transition the logs to S3 Standard-IA after the 90-day period of frequent access. The table of supported log destinations is at https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-S3
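
A sketch of both steps with boto3, assuming the S3 bucket already exists; IDs and names are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # Region is an assumption
s3 = boto3.client("s3")

bucket = "example-flow-logs-bucket"  # placeholder; must already exist

# Publish VPC Flow Logs directly to S3 (no CloudWatch or CloudTrail hop).
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],  # placeholder VPC ID
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="s3",
    LogDestination=f"arn:aws:s3:::{bucket}",
)

# Transition the logs to S3 Standard-IA once they are 90 days old.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "flow-logs-to-ia",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 90, "StorageClass": "STANDARD_IA"}],
        }],
    },
)
```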

A company has an API that receives real-time data from a fleet of monitoring devices. The API stores this data in an Amazon RDS DB instance for later analysis. The amount of data that the monitoring devices send to the API fluctuates. During periods of heavy traffic, the API often returns timeout errors.

After an inspection of the logs, the company determines that the database is not capable of processing the volume of write traffic that comes from the API. A solutions architect must minimize the number of connections to the database and must ensure that data is not lost during periods of heavy traffic.

Which solution will meet these requirements?

A. Increase the size of the DB instance to an instance type that has more available memory.
B. Modify the DB instance to be a Multi-AZ DB instance. Configure the application to write to all active RDS DB instances.
C. Modify the API to write incoming data to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda function that Amazon SQS invokes to write data from the queue to the database.
D. Modify the API to write incoming data to an Amazon Simple Notification Service (Amazon SNS) topic. Use an AWS Lambda function that Amazon SNS invokes to write data from the topic to the database.

Suggested answer: C

Explanation:

Using Amazon SQS will help minimize the number of connections to the database, as the API will write data to a queue instead of directly to the database. Additionally, using an AWS Lambda function that Amazon SQS invokes to write data from the queue to the database will help ensure that data is not lost during periods of heavy traffic, as the queue will serve as a buffer between the API and the database.
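
As a hedged illustration of the consumer side, a Lambda handler receives SQS messages in batches when an event source mapping is configured; the database-write helper below is a hypothetical placeholder:

```python
import json

def handler(event, context):
    """Lambda handler invoked by an SQS event source mapping.

    Each invocation receives a batch of messages, so one database
    connection serves many writes and the connection count stays low.
    """
    records = [json.loads(record["body"]) for record in event["Records"]]
    _write_batch_to_database(records)

def _write_batch_to_database(records):
    # Hypothetical placeholder for the RDS insert logic (e.g., a batched
    # INSERT via a PostgreSQL/MySQL client library); omitted here.
    pass
```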

A company has a popular gaming platform running on AWS. The application is sensitive to latency because latency can impact the user experience and introduce unfair advantages to some players. The application is deployed in every AWS Region. It runs on Amazon EC2 instances that are part of Auto Scaling groups configured behind Application Load Balancers (ALBs). A solutions architect needs to implement a mechanism to monitor the health of the application and redirect traffic to healthy endpoints.

Which solution meets these requirements?

A. Configure an accelerator in AWS Global Accelerator. Add a listener for the port that the application listens on, and attach it to a Regional endpoint in each Region. Add the ALB as the endpoint.
B. Create an Amazon CloudFront distribution and specify the ALB as the origin server. Configure the cache behavior to use origin cache headers. Use AWS Lambda functions to optimize the traffic.
C. Create an Amazon CloudFront distribution and specify Amazon S3 as the origin server. Configure the cache behavior to use origin cache headers. Use AWS Lambda functions to optimize the traffic.
D. Configure an Amazon DynamoDB database to serve as the data store for the application. Create a DynamoDB Accelerator (DAX) cluster to act as the in-memory cache for DynamoDB hosting the application data.

Suggested answer: A

Explanation:

AWS Global Accelerator directs traffic to the optimal healthy endpoint based on health checks and routes each client to the closest healthy endpoint based on geographic location. By configuring an accelerator with a listener for the application's port and an endpoint group in each Region that contains that Region's ALB, traffic is automatically redirected away from unhealthy endpoints, reducing latency and improving the user experience.
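
A sketch of this configuration in boto3; the ALB ARN and Region are placeholders, and one endpoint group would be created per Region:

```python
import boto3

# The Global Accelerator API is served only from the us-west-2 endpoint,
# even though the accelerator itself is a global resource.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(Name="gaming-accelerator", IpAddressType="IPV4")

# Listener for the port the application listens on (443 is an assumption).
listener = ga.create_listener(
    AcceleratorArn=acc["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per Region; each contains that Region's ALB.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{
        "EndpointId": ("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                       "loadbalancer/app/game-alb/EXAMPLE"),  # placeholder
        "Weight": 100,
    }],
    HealthCheckProtocol="TCP",
    HealthCheckPort=443,
)
```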

A company has one million users that use its mobile app. The company must analyze the data usage in near-real time. The company also must encrypt the data in near-real time and must store the data in a centralized location in Apache Parquet format for further processing.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create an Amazon Kinesis data stream to store the data in Amazon S3. Create an Amazon Kinesis Data Analytics application to analyze the data. Invoke an AWS Lambda function to send the data to the Kinesis Data Analytics application.
B. Create an Amazon Kinesis data stream to store the data in Amazon S3. Create an Amazon EMR cluster to analyze the data. Invoke an AWS Lambda function to send the data to the EMR cluster.
C. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Create an Amazon EMR cluster to analyze the data.
D. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Create an Amazon Kinesis Data Analytics application to analyze the data.

Suggested answer: D

Explanation:

This solution will meet the requirements with the least operational overhead as it uses Amazon Kinesis Data Firehose, which is a fully managed service that can automatically handle the data collection, data transformation, encryption, and data storage in near-real time. Kinesis Data Firehose can automatically store the data in Amazon S3 in Apache Parquet format for further processing. Additionally, it allows you to create an Amazon Kinesis Data Analytics application to analyze the data in near real-time, with no need to manage any infrastructure or invoke any Lambda function. This way you can process a large amount of data with the least operational overhead.
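
A hedged sketch of such a delivery stream in boto3; all ARNs, names, and the Glue schema are placeholder assumptions (Firehose needs a Glue Data Catalog table to perform the JSON-to-Parquet conversion, and conversion requires a buffer of at least 64 MiB):

```python
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")  # assumption

role_arn = "arn:aws:iam::123456789012:role/firehose-delivery-role"  # placeholder

firehose.create_delivery_stream(
    DeliveryStreamName="app-usage-stream",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": role_arn,
        "BucketARN": "arn:aws:s3:::example-usage-data",  # placeholder bucket
        # Parquet conversion requires a buffer of at least 64 MiB.
        "BufferingHints": {"SizeInMBs": 64, "IntervalInSeconds": 300},
        # Server-side encryption of the delivered objects (placeholder key).
        "EncryptionConfiguration": {
            "KMSEncryptionConfig": {
                "AWSKMSKeyARN": "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE"
            }
        },
        # Convert incoming JSON records to Apache Parquet on delivery.
        "DataFormatConversionConfiguration": {
            "Enabled": True,
            "InputFormatConfiguration": {"Deserializer": {"OpenXJsonSerDe": {}}},
            "OutputFormatConfiguration": {"Serializer": {"ParquetSerDe": {}}},
            "SchemaConfiguration": {
                "RoleARN": role_arn,
                "DatabaseName": "analytics",  # Glue database (placeholder)
                "TableName": "app_usage",     # Glue table (placeholder)
            },
        },
    },
)
```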
