
Amazon SAA-C03 Practice Test - Questions Answers, Page 34


Question 331


A company has a three-tier environment on AWS that ingests sensor data from its users' devices. The traffic flows through a Network Load Balancer (NLB), then to Amazon EC2 instances for the web tier, and finally to EC2 instances for the application tier that makes database calls. What should a solutions architect do to improve the security of data in transit to the web tier?

A. Configure a TLS listener and add the server certificate on the NLB.
B. Configure AWS Shield Advanced and enable AWS WAF on the NLB.
C. Change the load balancer to an Application Load Balancer and attach AWS WAF to it.
D. Encrypt the Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instances by using AWS Key Management Service (AWS KMS).
Suggested answer: A
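
A minimal boto3 sketch of the suggested answer, terminating TLS on the NLB with an ACM certificate; the load balancer, certificate, and target group ARNs are placeholders rather than values from the question:

```python
# Hypothetical sketch: add a TLS listener with an ACM certificate to an existing NLB.
# All ARNs below are placeholders, not real resources.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/sensor-nlb/abc123",
    Protocol="TLS",                       # terminate TLS on the NLB to encrypt data in transit
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/example"}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-tier/def456",
    }],
)
```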

Question 332


A company runs a public three-tier web application in a VPC. The application runs on Amazon EC2 instances across multiple Availability Zones. The EC2 instances that run in private subnets need to communicate with a license server over the internet. The company needs a managed solution that minimizes operational maintenance. Which solution meets these requirements?

A. Provision a NAT instance in a public subnet. Modify each private subnet's route table with a default route that points to the NAT instance.
B. Provision a NAT instance in a private subnet. Modify each private subnet's route table with a default route that points to the NAT instance.
C. Provision a NAT gateway in a public subnet. Modify each private subnet's route table with a default route that points to the NAT gateway.
D. Provision a NAT gateway in a private subnet. Modify each private subnet's route table with a default route that points to the NAT gateway.
Suggested answer: C
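
A minimal boto3 sketch of the suggested answer, assuming placeholder subnet and route table IDs: create the NAT gateway in a public subnet, then point each private subnet's route table at it.

```python
# Hypothetical sketch of option C: a NAT gateway in a public subnet plus a default
# route in each private subnet's route table. All IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in a public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0public000000000",      # public subnet with a route to an internet gateway
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before adding routes.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Point each private subnet's route table at the NAT gateway for internet-bound traffic.
for route_table_id in ["rtb-0private0000000a", "rtb-0private0000000b"]:
    ec2.create_route(
        RouteTableId=route_table_id,
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )
```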

Question 333


A company needs to transfer 600 TB of data from its on-premises network-attached storage (NAS) system to the AWS Cloud. The data transfer must be complete within 2 weeks. The data is sensitive and must be encrypted in transit. The company's internet connection can support an upload speed of 100 Mbps.

Which solution meets these requirements MOST cost-effectively?

A. Use Amazon S3 multipart upload functionality to transfer the files over HTTPS.
B. Create a VPN connection between the on-premises NAS system and the nearest AWS Region. Transfer the data over the VPN connection.
C. Use the AWS Snow Family console to order several AWS Snowball Edge Storage Optimized devices. Use the devices to transfer the data to Amazon S3.
D. Set up a 10 Gbps AWS Direct Connect connection between the company location and the nearest AWS Region. Transfer the data over a VPN connection into the Region to store the data in Amazon S3.
Suggested answer: C

Explanation:

At 100 Mbps, uploading 600 TB would take roughly 4.8 × 10^7 seconds (about 555 days), so no transfer over the existing internet connection or a VPN can meet the 2-week deadline, and a 10 Gbps Direct Connect connection typically takes weeks to provision and is far more expensive than Snowball for a one-time transfer. AWS Snowball Edge Storage Optimized devices move the data offline, encrypt it on the device, and hold about 80 TB of usable capacity each, so several devices can complete the encrypted transfer to Amazon S3 within the required window.

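A back-of-the-envelope check of why an online transfer cannot work, using only the figures from the question:

```python
# Back-of-the-envelope check: can 600 TB move over a 100 Mbps link within 2 weeks?
data_bits = 600 * 10**12 * 8          # 600 TB expressed in bits
link_bps = 100 * 10**6                # 100 Mbps upload speed
seconds = data_bits / link_bps
print(f"{seconds / 86400:.0f} days")  # roughly 555 days, far beyond the 2-week deadline
```
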

Question 334


A company needs a backup strategy for its three-tier stateless web application. The web application runs on Amazon EC2 instances in an Auto Scaling group with a dynamic scaling policy that is configured to respond to scaling events. The database tier runs on Amazon RDS for PostgreSQL. The web application does not require temporary local storage on the EC2 instances. The company's recovery point objective (RPO) is 2 hours. The backup strategy must maximize scalability and optimize resource utilization for this environment. Which solution will meet these requirements?

A. Take snapshots of the Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances and the database every 2 hours to meet the RPO.
B. Configure a snapshot lifecycle policy to take Amazon Elastic Block Store (Amazon EBS) snapshots. Enable automated backups in Amazon RDS to meet the RPO.
C. Retain the latest Amazon Machine Images (AMIs) of the web and application tiers. Enable automated backups in Amazon RDS and use point-in-time recovery to meet the RPO.
D. Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances every 2 hours. Enable automated backups in Amazon RDS and use point-in-time recovery to meet the RPO.
Suggested answer: D
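
Both options C and D rely on Amazon RDS automated backups with point-in-time recovery for the database tier. A minimal boto3 sketch of enabling them, with a placeholder instance identifier:

```python
# Hedged sketch: turn on automated backups (which enable point-in-time recovery)
# for the RDS for PostgreSQL instance referenced by options C and D.
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="app-postgres",     # placeholder instance name
    BackupRetentionPeriod=7,                 # any value > 0 enables automated backups and PITR
    PreferredBackupWindow="03:00-04:00",
    ApplyImmediately=True,
)
```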


Question 335


A company needs to ingest and handle large amounts of streaming data that its application generates. The application runs on Amazon EC2 instances and sends data to Amazon Kinesis Data Streams, which is configured with default settings. Every other day, the application consumes the data and writes the data to an Amazon S3 bucket for business intelligence (BI) processing. The company observes that Amazon S3 is not receiving all the data that the application sends to Kinesis Data Streams.

What should a solutions architect do to resolve this issue?

A. Update the Kinesis Data Streams default settings by modifying the data retention period.
B. Update the application to use the Kinesis Producer Library (KPL) to send the data to Kinesis Data Streams.
C. Update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams.
D. Turn on S3 Versioning within the S3 bucket to preserve every version of every object that is ingested in the S3 bucket.
Suggested answer: A
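
A minimal boto3 sketch of the suggested answer, extending the stream's retention beyond the 24-hour default so records survive until the every-other-day consumer runs; the stream name is a placeholder:

```python
# Hedged sketch of option A: raise the stream's retention period beyond the
# 24-hour default so records are still available 48 hours later.
import boto3

kinesis = boto3.client("kinesis")

kinesis.increase_stream_retention_period(
    StreamName="sensor-data-stream",   # placeholder stream name
    RetentionPeriodHours=72,           # covers the 48-hour consumption gap with headroom
)
```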

Question 336


A company has migrated an application to Amazon EC2 Linux instances. One of these EC2 instances runs several 1-hour tasks on a schedule. These tasks were written by different teams and have no common programming language. The company is concerned about performance and scalability while these tasks run on a single instance. A solutions architect needs to implement a solution to resolve these concerns. Which solution will meet these requirements with the LEAST operational overhead?

A. Use AWS Batch to run the tasks as jobs. Schedule the jobs by using Amazon EventBridge (Amazon CloudWatch Events).
B. Convert the EC2 instance to a container. Use AWS App Runner to create the container on demand to run the tasks as jobs.
C. Copy the tasks into AWS Lambda functions. Schedule the Lambda functions by using Amazon EventBridge (Amazon CloudWatch Events).
D. Create an Amazon Machine Image (AMI) of the EC2 instance that runs the tasks. Create an Auto Scaling group with the AMI to run multiple copies of the instance.
Suggested answer: A

Explanation:

AWS Batch is a fully managed service that runs batch jobs on AWS. It can handle tasks written in different languages, run them on EC2 instances, and integrate with Amazon EventBridge (Amazon CloudWatch Events) to schedule jobs based on time or event triggers, so it meets the performance, scalability, and low-operational-overhead requirements.

B. Converting the EC2 instance to a container and using AWS App Runner does not minimize operational overhead. App Runner automatically builds, deploys, and load balances web applications, which is not necessary for running batch jobs.

C. Copying the tasks into AWS Lambda functions does not meet the performance requirement, because Lambda limits execution time to 15 minutes and memory to 10 GB, which may not be sufficient for 1-hour tasks.

D. Creating an AMI of the instance and running multiple copies in an Auto Scaling group does not minimize operational overhead, because the AMIs and the Auto Scaling group are additional resources that must be configured and managed.

Reference URL: https://docs.aws.amazon.com/whitepapers/latest/aws-overview/compute-services.html
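
A minimal boto3 sketch of the suggested answer, assuming a Batch job queue, job definition, and EventBridge execution role already exist (all ARNs and names below are placeholders):

```python
# Hedged sketch of option A: an EventBridge (CloudWatch Events) schedule that
# submits an AWS Batch job once a day. All ARNs and names are placeholders.
import boto3

events = boto3.client("events")

events.put_rule(
    Name="daily-task-schedule",
    ScheduleExpression="rate(1 day)",        # run the batch job once a day
    State="ENABLED",
)

events.put_targets(
    Rule="daily-task-schedule",
    Targets=[{
        "Id": "batch-task",
        "Arn": "arn:aws:batch:us-east-1:123456789012:job-queue/tasks-queue",
        "RoleArn": "arn:aws:iam::123456789012:role/eventbridge-batch-role",
        "BatchParameters": {
            "JobDefinition": "one-hour-task:1",
            "JobName": "scheduled-one-hour-task",
        },
    }],
)
```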



Question 337


A company wants to migrate an Oracle database to AWS. The database consists of a single table that contains millions of geographic information systems (GIS) images that are high resolution and are identified by a geographic code. When a natural disaster occurs, tens of thousands of images are updated every few minutes. Each geographic code has a single image or row that is associated with it. The company wants a solution that is highly available and scalable during such events. Which solution meets these requirements MOST cost-effectively?

A. Store the images and geographic codes in a database table. Use Oracle running on an Amazon RDS Multi-AZ DB instance.
B. Store the images in Amazon S3 buckets. Use Amazon DynamoDB with the geographic code as the key and the image S3 URL as the value.
C. Store the images and geographic codes in an Amazon DynamoDB table. Configure DynamoDB Accelerator (DAX) during times of high load.
D. Store the images in Amazon S3 buckets. Store geographic codes and image S3 URLs in a database table. Use Oracle running on an Amazon RDS Multi-AZ DB instance.
Suggested answer: B

Explanation:

Amazon S3 is a highly scalable, durable, and cost-effective object storage service that can store millions of images. Amazon DynamoDB is a fully managed NoSQL database that provides high throughput and low latency for key-value and document data. Storing the images in S3 and the geographic codes with their image S3 URLs in DynamoDB keeps the solution highly available and scalable during natural disasters, and DynamoDB features such as caching, auto scaling, and global tables can further improve performance and reduce costs.

A. Storing the images and geographic codes in Oracle on an Amazon RDS Multi-AZ DB instance is not scalable or cost-effective, because a relational database does not handle large volumes of unstructured data such as images efficiently and carries higher licensing and operational costs than S3 and DynamoDB.

C. Storing the images directly in DynamoDB is not cost-effective, because images consume far more storage in DynamoDB than in S3, and DAX clusters require additional configuration and management to handle high load.

D. Storing the images in S3 but keeping the geographic codes and S3 URLs in Oracle on an Amazon RDS Multi-AZ DB instance is not scalable or cost-effective, because a relational database does not handle high-throughput, low-latency key-value lookups as efficiently as DynamoDB and carries higher licensing and operational costs.

Reference URL: https://dynobase.dev/dynamodb-vs-s3/
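
A minimal boto3 sketch of the suggested answer's access pattern, assuming a DynamoDB table with geo_code as the partition key and a placeholder S3 bucket:

```python
# Hedged sketch of option B: the image object lives in S3 and DynamoDB maps the
# geographic code to the object's S3 URL. Bucket, table, and key names are placeholders.
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("gis-images")       # assumed partition key: geo_code (string)

geo_code = "US-CA-037"
object_key = f"images/{geo_code}.tif"

# Upload (or overwrite) the high-resolution image in S3.
s3.upload_file("local/US-CA-037.tif", "gis-image-bucket", object_key)

# Upsert the pointer record; repeated updates to the same geo code overwrite one item.
table.put_item(Item={
    "geo_code": geo_code,
    "image_url": f"https://gis-image-bucket.s3.amazonaws.com/{object_key}",
})
```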



Question 338


A research laboratory needs to process approximately 8 TB of data. The laboratory requires sub-millisecond latencies and a minimum throughput of 6 GBps for the storage subsystem. Hundreds of Amazon EC2 instances that run Amazon Linux will distribute and process the data. Which solution will meet the performance requirements?

A. Create an Amazon FSx for NetApp ONTAP file system. Set each volume's tiering policy to ALL. Import the raw data into the file system. Mount the file system on the EC2 instances.
B. Create an Amazon S3 bucket to store the raw data. Create an Amazon FSx for Lustre file system that uses persistent SSD storage. Select the option to import data from and export data to Amazon S3. Mount the file system on the EC2 instances.
C. Create an Amazon S3 bucket to store the raw data. Create an Amazon FSx for Lustre file system that uses persistent HDD storage. Select the option to import data from and export data to Amazon S3. Mount the file system on the EC2 instances.
D. Create an Amazon FSx for NetApp ONTAP file system. Set each volume's tiering policy to NONE. Import the raw data into the file system. Mount the file system on the EC2 instances.
Suggested answer: B

Explanation:

Amazon FSx for Lustre with SSD storage provides sub-millisecond latencies, scales throughput with storage capacity to 6 GBps and beyond, and can be linked to an Amazon S3 bucket to import data from and export data to S3. Choosing a persistent (rather than scratch) deployment ensures the data is retained on disk and survives file server failures, and the file system can be mounted by hundreds of EC2 instances in parallel.
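
A minimal boto3 sketch of the suggested answer, creating a persistent SSD FSx for Lustre file system linked to an S3 bucket; the bucket, subnet, and sizing values are placeholders:

```python
# Hedged sketch of option B: a persistent SSD FSx for Lustre file system linked to an
# S3 bucket for import/export. Bucket, subnet, and sizing values are placeholders.
import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageType="SSD",
    StorageCapacity=12000,                       # GiB; throughput scales with capacity
    SubnetIds=["subnet-0example0000000"],
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_1",        # persistent SSD-backed deployment
        "PerUnitStorageThroughput": 200,         # MB/s per TiB of storage
        "ImportPath": "s3://raw-research-data",  # import existing objects from S3
        "ExportPath": "s3://raw-research-data/results",
    },
)
```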


Question 339


A company has implemented a self-managed DNS service on AWS. The solution consists of the following:

• Amazon EC2 instances in different AWS Regions

• Endpoints of a standard accelerator in AWS Global Accelerator

The company wants to protect the solution against DDoS attacks. What should a solutions architect do to meet this requirement?

A. Subscribe to AWS Shield Advanced. Add the accelerator as a resource to protect.
B. Subscribe to AWS Shield Advanced. Add the EC2 instances as resources to protect.
C. Create an AWS WAF web ACL that includes a rate-based rule. Associate the web ACL with the accelerator.
D. Create an AWS WAF web ACL that includes a rate-based rule. Associate the web ACL with the EC2 instances.
Suggested answer: A

Explanation:

AWS Shield is a managed service that protects applications running on AWS against Distributed Denial of Service (DDoS) attacks. AWS Shield Standard is automatically enabled for all AWS customers at no additional cost. AWS Shield Advanced is an optional paid service that provides additional protection against larger and more sophisticated attacks for applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53. Adding the standard accelerator as a protected resource covers the entry point of the solution. https://docs.aws.amazon.com/waf/latest/developerguide/ddos-event-mitigation-logic-gax.html
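
A minimal boto3 sketch of the suggested answer, assuming an AWS Shield Advanced subscription is already active; the accelerator ARN is a placeholder:

```python
# Hedged sketch of option A: with an active Shield Advanced subscription, register
# the Global Accelerator as a protected resource. The ARN is a placeholder.
import boto3

shield = boto3.client("shield")

shield.create_protection(
    Name="dns-accelerator-protection",
    ResourceArn="arn:aws:globalaccelerator::123456789012:accelerator/abcd1234-ab12-cd34-ef56-abcdef123456",
)
```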



Question 340


A company needs to export its database once a day to Amazon S3 for other teams to access. The exported object size varies between 2 GB and 5 GB. The S3 access pattern for the data is variable and changes rapidly. The data must be immediately available and must remain accessible for up to 3 months. The company needs the most cost-effective solution that will not increase retrieval time. Which S3 storage class should the company use to meet these requirements?

A. S3 Intelligent-Tiering
B. S3 Glacier Instant Retrieval
C. S3 Standard
D. S3 Standard-Infrequent Access (S3 Standard-IA)
Suggested answer: D

Explanation:

S3 Intelligent-Tiering is a cost-optimized storage class that automatically moves objects between access tiers as access patterns change, but it adds a per-object monitoring and automation charge, and its optional archive access tiers introduce retrieval delays, so it is not the most cost-effective way to guarantee immediate access here. S3 Standard-Infrequent Access (S3 Standard-IA) provides the same low latency and high throughput as S3 Standard at a lower storage price and is designed for long-lived, infrequently accessed data that must still be available immediately when requested. It is the most cost-effective option that keeps the exports immediately available and accessible for up to 3 months.
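
A minimal boto3 sketch of the suggested answer, uploading the daily export with the S3 Standard-IA storage class and expiring objects after about 3 months; bucket and key names are placeholders:

```python
# Hedged sketch of option D: upload the daily export with the STANDARD_IA storage
# class and expire objects after roughly 3 months. Names are placeholders.
import boto3

s3 = boto3.client("s3")

s3.upload_file(
    "db-export.sql.gz", "daily-db-exports", "exports/2024-09-16/db-export.sql.gz",
    ExtraArgs={"StorageClass": "STANDARD_IA"},
)

# Lifecycle rule: delete exports 90 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="daily-db-exports",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-after-3-months",
            "Filter": {"Prefix": "exports/"},
            "Status": "Enabled",
            "Expiration": {"Days": 90},
        }],
    },
)
```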
