Amazon SAA-C03 Practice Test - Questions Answers, Page 34


A company has a three-tier environment on AWS that ingests sensor data from its users' devices. The traffic flows through a Network Load Balancer (NLB), then to Amazon EC2 instances for the web tier, and finally to EC2 instances for the application tier that make database calls. What should a solutions architect do to improve the security of data in transit to the web tier?

A. Configure a TLS listener and add the server certificate on the NLB.
B. Configure AWS Shield Advanced and enable AWS WAF on the NLB.
C. Change the load balancer to an Application Load Balancer and attach AWS WAF to it.
D. Encrypt the Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instances using AWS Key Management Service (AWS KMS).
Suggested answer: A
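
As context for the suggested answer, here is a minimal boto3 sketch of adding a TLS listener with a server certificate to an existing NLB; the load balancer, ACM certificate, and target group ARNs are hypothetical placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical ARNs -- replace with the real NLB, ACM certificate, and target group.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/net/sensor-nlb/...",
    Protocol="TLS",          # terminate TLS on the NLB so device traffic is encrypted in transit
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:...:certificate/..."}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/web-tier/...",
    }],
)
```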

A company runs a public three-tier web application in a VPC. The application runs on Amazon EC2 instances across multiple Availability Zones. The EC2 instances that run in private subnets need to communicate with a license server over the internet. The company needs a managed solution that minimizes operational maintenance. Which solution meets these requirements?

A. Provision a NAT instance in a public subnet. Modify each private subnet's route table with a default route that points to the NAT instance.
B. Provision a NAT instance in a private subnet. Modify each private subnet's route table with a default route that points to the NAT instance.
C. Provision a NAT gateway in a public subnet. Modify each private subnet's route table with a default route that points to the NAT gateway.
D. Provision a NAT gateway in a private subnet. Modify each private subnet's route table with a default route that points to the NAT gateway.
Suggested answer: C
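
A minimal boto3 sketch of option C, using hypothetical subnet and route table IDs: it allocates an Elastic IP, creates a NAT gateway in a public subnet, and adds a default route from a private subnet's route table to that gateway.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs -- replace with the real public subnet and private route table.
eip = ec2.allocate_address(Domain="vpc")
ngw = ec2.create_nat_gateway(SubnetId="subnet-0publicEXAMPLE",
                             AllocationId=eip["AllocationId"])
ngw_id = ngw["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before routing traffic through it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[ngw_id])

# Default route for one private subnet; repeat for each private route table.
ec2.create_route(RouteTableId="rtb-0privateEXAMPLE",
                 DestinationCidrBlock="0.0.0.0/0",
                 NatGatewayId=ngw_id)
```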

A company needs to transfer 600 TB of data from its on-premises network-attached storage (NAS) system to the AWS Cloud. The data transfer must be complete within 2 weeks. The data is sensitive and must be encrypted in transit. The company's internet connection can support an upload speed of 100 Mbps.

Which solution meets these requirements MOST cost-effectively?

A. Use Amazon S3 multipart upload functionality to transfer the files over HTTPS.
B. Create a VPN connection between the on-premises NAS system and the nearest AWS Region. Transfer the data over the VPN connection.
C. Use the AWS Snow Family console to order several AWS Snowball Edge Storage Optimized devices. Use the devices to transfer the data to Amazon S3.
D. Set up a 10 Gbps AWS Direct Connect connection between the company location and the nearest AWS Region. Transfer the data over a VPN connection into the Region to store the data in Amazon S3.
Suggested answer: C

Explanation:

At 100 Mbps, uploading 600 TB over the internet would take well over a year, which rules out options A and B. A 10 Gbps Direct Connect link could move the data in time but typically takes weeks to provision and costs far more. Snowball Edge Storage Optimized devices provide large offline transfer capacity and encrypt the data, meeting the 2-week deadline and the encryption requirement most cost-effectively.
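
As a back-of-the-envelope check, the snippet below estimates how long 600 TB would take over the company's 100 Mbps uplink.

```python
# Estimate transfer time for 600 TB over a 100 Mbps internet uplink.
data_bits = 600 * 10**12 * 8      # 600 TB expressed in bits
uplink_bps = 100 * 10**6          # 100 Mbps
days = data_bits / uplink_bps / 86_400
print(f"~{days:.0f} days")        # ~556 days -- far beyond the 2-week window
```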


A company needs a backup strategy for its three-tier stateless web application. The web application runs on Amazon EC2 instances in an Auto Scaling group with a dynamic scaling policy that is configured to respond to scaling events. The database tier runs on Amazon RDS for PostgreSQL. The web application does not require temporary local storage on the EC2 instances. The company's recovery point objective (RPO) is 2 hours. The backup strategy must maximize scalability and optimize resource utilization for this environment. Which solution will meet these requirements?

A. Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances and database every 2 hours to meet the RPO.
B. Configure a snapshot lifecycle policy to take Amazon Elastic Block Store (Amazon EBS) snapshots. Enable automated backups in Amazon RDS to meet the RPO.
C. Retain the latest Amazon Machine Images (AMIs) of the web and application tiers. Enable automated backups in Amazon RDS and use point-in-time recovery to meet the RPO.
D. Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances every 2 hours. Enable automated backups in Amazon RDS and use point-in-time recovery to meet the RPO.
Suggested answer: D
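
The RDS portion of the suggested answer can be sketched with boto3 as follows, assuming a hypothetical DB instance identifier: automated backups are enabled, and point-in-time recovery restores within the retention window, which is what bounds the 2-hour RPO for the database tier.

```python
import boto3

rds = boto3.client("rds")

# Enable automated backups (a retention period > 0 turns them on) for a hypothetical instance.
rds.modify_db_instance(
    DBInstanceIdentifier="app-postgres",
    BackupRetentionPeriod=7,      # days of automated backups / PITR window
    ApplyImmediately=True,
)

# Point-in-time recovery: restore a new instance to the latest restorable time.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="app-postgres",
    TargetDBInstanceIdentifier="app-postgres-restored",
    UseLatestRestorableTime=True,
)
```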


A company needs to ingest and handle large amounts of streaming data that its application generates. The application runs on Amazon EC2 instances and sends data to Amazon Kinesis Data Streams, which is configured with default settings. Every other day, the application consumes the data and writes the data to an Amazon S3 bucket for business intelligence (BI) processing. The company observes that Amazon S3 is not receiving all the data that the application sends to Kinesis Data Streams.

What should a solutions architect do to resolve this issue?

A. Update the Kinesis Data Streams default settings by modifying the data retention period.
B. Update the application to use the Kinesis Producer Library (KPL) to send the data to Kinesis Data Streams.
C. Update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams.
D. Turn on S3 Versioning within the S3 bucket to preserve every version of every object that is ingested in the S3 bucket.
Suggested answer: A
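
By default, a Kinesis data stream retains records for 24 hours, so a consumer that runs every other day misses anything older than a day. A minimal boto3 sketch of the suggested fix, with a hypothetical stream name, extends retention past the 48-hour consumption interval.

```python
import boto3

kinesis = boto3.client("kinesis")

# Hypothetical stream name; 72 hours comfortably covers an every-other-day consumer.
kinesis.increase_stream_retention_period(
    StreamName="sensor-stream",
    RetentionPeriodHours=72,
)
```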

A company has migrated an application to Amazon EC2 Linux instances. One of these EC2 instances runs several 1-hour tasks on a schedule. These tasks were written by different teams and have no common programming language. The company is concerned about performance and scalability while these tasks run on a single instance. A solutions architect needs to implement a solution to resolve these concerns. Which solution will meet these requirements with the LEAST operational overhead?

A. Use AWS Batch to run the tasks as jobs. Schedule the jobs by using Amazon EventBridge (Amazon CloudWatch Events).
B. Convert the EC2 instance to a container. Use AWS App Runner to create the container on demand to run the tasks as jobs.
C. Copy the tasks into AWS Lambda functions. Schedule the Lambda functions by using Amazon EventBridge (Amazon CloudWatch Events).
D. Create an Amazon Machine Image (AMI) of the EC2 instance that runs the tasks. Create an Auto Scaling group with the AMI to run multiple copies of the instance.
Suggested answer: A

Explanation:

AWS Batch is a fully managed service that enables users to run batch jobs on AWS. It can handle different types of tasks written in different languages and run them on EC2 instances. It also integrates with Amazon EventBridge (Amazon CloudWatch Events) to schedule jobs based on time or event triggers. This solution meets the requirements of performance, scalability, and low operational overhead.

b) Convert the EC2 instance to a container. Use AWS App Runner to create the container on demand to run the tasks as jobs. This solution will not meet the requirement of low operational overhead, as it involves converting the EC2 instance to a container and using AWS App Runner, which is a service that automatically builds and deploys web applications and load balances traffic. This is not necessary for running batch jobs.

c) Copy the tasks into AWS Lambda functions. Schedule the Lambda functions by using Amazon EventBridge (Amazon CloudWatch Events). This solution will not meet the requirement of performance, as AWS Lambda has a limit of 15 minutes for execution time and 10 GB for memory allocation. These limits are not sufficient for running 1-hour tasks.

d) Create an Amazon Machine Image (AMI) of the EC2 instance that runs the tasks. Create an Auto Scaling group with the AMI to run multiple copies of the instance. This solution will not meet the requirement of low operational overhead, as it involves creating and maintaining AMIs and Auto Scaling groups, which are additional resources that need to be configured and managed.

Reference URL: https://docs.aws.amazon.com/whitepapers/latest/aws-overview/compute-services.html
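
As an illustration of option A, the sketch below creates an EventBridge schedule rule and points it at an AWS Batch job queue; all names, ARNs, and the cron expression are hypothetical.

```python
import boto3

events = boto3.client("events")

# Hypothetical schedule: run the tasks every day at 02:00 UTC.
events.put_rule(
    Name="nightly-task",
    ScheduleExpression="cron(0 2 * * ? *)",
    State="ENABLED",
)

# Target an AWS Batch job queue; EventBridge submits the job when the rule fires.
events.put_targets(
    Rule="nightly-task",
    Targets=[{
        "Id": "batch-task",
        "Arn": "arn:aws:batch:...:job-queue/scheduled-tasks",            # hypothetical
        "RoleArn": "arn:aws:iam::123456789012:role/eventbridge-batch",   # hypothetical
        "BatchParameters": {
            "JobDefinition": "arn:aws:batch:...:job-definition/task:1",  # hypothetical
            "JobName": "scheduled-task",
        },
    }],
)
```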


A company wants to migrate an Oracle database to AWS. The database consists of a single table that contains millions of geographic information systems (GIS) images that are high resolution and are identified by a geographic code. When a natural disaster occurs, tens of thousands of images are updated every few minutes. Each geographic code has a single image or row that is associated with it. The company wants a solution that is highly available and scalable during such events. Which solution meets these requirements MOST cost-effectively?

A. Store the images and geographic codes in a database table. Use Oracle running on an Amazon RDS Multi-AZ DB instance.
B. Store the images in Amazon S3 buckets. Use Amazon DynamoDB with the geographic code as the key and the image S3 URL as the value.
C. Store the images and geographic codes in an Amazon DynamoDB table. Configure DynamoDB Accelerator (DAX) during times of high load.
D. Store the images in Amazon S3 buckets. Store geographic codes and image S3 URLs in a database table. Use Oracle running on an Amazon RDS Multi-AZ DB instance.
Suggested answer: B

Explanation:

Amazon S3 is a highly scalable, durable, and cost-effective object storage service that can store millions of images. Amazon DynamoDB is a fully managed NoSQL database that can handle high throughput and low latency for key-value and document data. By using S3 to store the images and DynamoDB to store the geographic codes and image S3 URLs, the solution can achieve high availability and scalability during natural disasters. It can also leverage DynamoDB features such as caching, auto scaling, and global tables to improve performance and reduce costs.

a) Store the images and geographic codes in a database table. Use Oracle running on an Amazon RDS Multi-AZ DB instance. This solution will not meet the requirement of scalability and cost-effectiveness, as Oracle is a relational database that may not handle large volumes of unstructured data such as images efficiently. It also involves higher licensing and operational costs than S3 and DynamoDB.

c) Store the images and geographic codes in an Amazon DynamoDB table. Configure DynamoDB Accelerator (DAX) during times of high load. This solution will not meet the requirement of cost-effectiveness, as storing images in DynamoDB will consume more storage space and incur higher charges than storing them in S3. It will also require additional configuration and management of DAX clusters to handle high load.

d) Store the images in Amazon S3 buckets. Store geographic codes and image S3 URLs in a database table. Use Oracle running on an Amazon RDS Multi-AZ DB instance. This solution will not meet the requirement of scalability and cost-effectiveness, as Oracle is a relational database that may not handle high throughput and low latency for key-value data such as geographic codes efficiently. It also involves higher licensing and operational costs than DynamoDB.

Reference URL: https://dynobase.dev/dynamodb-vs-s3/
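
A minimal boto3 sketch of option B, assuming a hypothetical bucket, table, and key schema: each image goes to S3, and a DynamoDB item maps the geographic code to the object's S3 URL.

```python
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("gis-images")   # hypothetical table, partition key: geo_code

def put_image(geo_code: str, image_path: str) -> None:
    """Upload one GIS image to S3 and upsert its URL under the geographic code."""
    bucket = "gis-image-archive"                          # hypothetical bucket
    key = f"images/{geo_code}.tif"
    s3.upload_file(image_path, bucket, key)
    table.put_item(Item={
        "geo_code": geo_code,
        "image_url": f"s3://{bucket}/{key}",
    })
```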


A research laboratory needs to process approximately 8 TB of data. The laboratory requires submillisecond latencies and a minimum throughput of 6 GBps for the storage subsystem. Hundreds of Amazon EC2 instances that run Amazon Linux will distribute and process the data. Which solution will meet the performance requirements?

A. Create an Amazon FSx for NetApp ONTAP file system. Set each volume's tiering policy to ALL. Import the raw data into the file system. Mount the file system on the EC2 instances.
B. Create an Amazon S3 bucket to store the raw data. Create an Amazon FSx for Lustre file system that uses persistent SSD storage. Select the option to import data from and export data to Amazon S3. Mount the file system on the EC2 instances.
C. Create an Amazon S3 bucket to store the raw data. Create an Amazon FSx for Lustre file system that uses persistent HDD storage. Select the option to import data from and export data to Amazon S3. Mount the file system on the EC2 instances.
D. Create an Amazon FSx for NetApp ONTAP file system. Set each volume's tiering policy to NONE. Import the raw data into the file system. Mount the file system on the EC2 instances.
Suggested answer: B

Explanation:

Amazon FSx for Lustre with persistent SSD storage delivers submillisecond latencies and throughput that scales with provisioned capacity, so it can be sized to meet the 6 GBps requirement, and it can natively import data from and export data to Amazon S3. Choosing a persistent (rather than scratch) deployment keeps the data durably stored on the file system.
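
A sketch of option B with boto3, assuming hypothetical names: it creates a persistent SSD FSx for Lustre file system linked to an S3 bucket, with storage capacity sized so that per-unit throughput (200 MB/s per TiB here) adds up to roughly 6 GBps.

```python
import boto3

fsx = boto3.client("fsx")

# Hypothetical bucket and subnet. About 31,200 GiB at 200 MB/s per TiB gives roughly 6 GBps aggregate.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageType="SSD",
    StorageCapacity=31200,
    SubnetIds=["subnet-0labEXAMPLE"],
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_1",
        "PerUnitStorageThroughput": 200,          # MB/s per TiB of storage
        "ImportPath": "s3://lab-raw-data",        # lazy-load objects from S3
        "ExportPath": "s3://lab-raw-data/export", # write results back to S3
    },
)
# The EC2 instances then mount the file system with the Lustre client,
# e.g. mount -t lustre <fs-dns>@tcp:/<mountname> /mnt/fsx
```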

A company has implemented a self-managed DNS service on AWS. The solution consists of the following:

• Amazon EC2 instances in different AWS Regions

• Endpoints of a standard accelerator in AWS Global Accelerator

The company wants to protect the solution against DDoS attacks. What should a solutions architect do to meet this requirement?

A. Subscribe to AWS Shield Advanced. Add the accelerator as a resource to protect.
B. Subscribe to AWS Shield Advanced. Add the EC2 instances as resources to protect.
C. Create an AWS WAF web ACL that includes a rate-based rule. Associate the web ACL with the accelerator.
D. Create an AWS WAF web ACL that includes a rate-based rule. Associate the web ACL with the EC2 instances.
Suggested answer: A

Explanation:

AWS Shield is a managed service that protects applications running on AWS against distributed denial of service (DDoS) attacks. AWS Shield Standard is automatically enabled for all AWS customers at no additional cost. AWS Shield Advanced is an optional paid service that provides additional protection against larger and more sophisticated attacks for applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53. https://docs.aws.amazon.com/waf/latest/developerguide/ddos-event-mitigation-logic-gax.html
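
A minimal boto3 sketch of option A, assuming an active Shield Advanced subscription and a hypothetical accelerator ARN: it registers the Global Accelerator as a protected resource.

```python
import boto3

shield = boto3.client("shield")

# Requires an AWS Shield Advanced subscription on the account
# (shield.create_subscription() if one is not already active).
shield.create_protection(
    Name="dns-accelerator-protection",
    ResourceArn="arn:aws:globalaccelerator::123456789012:accelerator/abcd1234-example",  # hypothetical
)
```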


A company needs to export its database once a day to Amazon S3 for other teams to access. The exported object size varies between 2 GB and 5 GB. The S3 access pattern for the data is variable and changes rapidly. The data must be immediately available and must remain accessible for up to 3 months. The company needs the most cost-effective solution that will not increase retrieval time. Which S3 storage class should the company use to meet these requirements?

A. S3 Intelligent-Tiering
B. S3 Glacier Instant Retrieval
C. S3 Standard
D. S3 Standard-Infrequent Access (S3 Standard-IA)
Suggested answer: D

Explanation:

S3 Intelligent-Tiering is a cost-optimized storage class that automatically moves objects between access tiers based on changing access patterns, but it charges a per-object monitoring and automation fee, and its optional archive tiers can add retrieval time, which may conflict with the requirement that data be immediately available. S3 Standard-Infrequent Access (S3 Standard-IA) provides low-cost storage with the same low latency and high throughput performance as S3 Standard, so objects remain immediately available when retrieved. It is a cost-effective solution for data that must stay accessible for up to 3 months.
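
A short boto3 sketch of how the daily export could land in the suggested storage class, with a hypothetical bucket and prefix: the object is written as S3 Standard-IA, and a lifecycle rule expires exports after roughly 3 months.

```python
import boto3

s3 = boto3.client("s3")
bucket = "db-exports-example"   # hypothetical bucket

# Write the daily export directly into the Standard-IA storage class.
s3.upload_file("export-2024-01-01.dump", bucket, "exports/export-2024-01-01.dump",
               ExtraArgs={"StorageClass": "STANDARD_IA"})

# Expire exports after ~3 months (90 days) so storage costs stop accruing.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={"Rules": [{
        "ID": "expire-exports-after-90-days",
        "Filter": {"Prefix": "exports/"},
        "Status": "Enabled",
        "Expiration": {"Days": 90},
    }]},
)
```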
