Amazon SAA-C03 Practice Test - Questions Answers, Page 34
List of questions
Question 331
A company has a three-tier environment on AWS that ingests sensor data from its users' devices. The traffic flows through a Network Load Balancer (NLB), then to Amazon EC2 instances for the web tier, and finally to EC2 instances for the application tier that makes database calls. What should a solutions architect do to improve the security of data in transit to the web tier?
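The correct option is not reproduced here, but a common way to secure data in transit to the web tier is to terminate TLS on the NLB with a certificate from AWS Certificate Manager. A minimal boto3 sketch under that assumption, with hypothetical ARNs:

```python
import boto3

# Hypothetical ARNs for illustration only.
NLB_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/sensor-nlb/abc123"
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-tier/def456"
ACM_CERT_ARN = "arn:aws:acm:us-east-1:123456789012:certificate/example-cert-id"

elbv2 = boto3.client("elbv2")

# Create a TLS listener so traffic from devices to the web tier is encrypted in transit.
response = elbv2.create_listener(
    LoadBalancerArn=NLB_ARN,
    Protocol="TLS",   # Network Load Balancers support TLS termination
    Port=443,
    Certificates=[{"CertificateArn": ACM_CERT_ARN}],  # server certificate from ACM
    DefaultActions=[{"Type": "forward", "TargetGroupArn": TARGET_GROUP_ARN}],
)
print(response["Listeners"][0]["ListenerArn"])
```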
Question 332
A company runs a public three-tier web application in a VPC. The application runs on Amazon EC2 instances across multiple Availability Zones. The EC2 instances that run in private subnets need to communicate with a license server over the internet. The company needs a managed solution that minimizes operational maintenance. Which solution meets these requirements?
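The answer choices are not reproduced here; the usual managed, low-maintenance pattern for letting instances in private subnets reach the internet is a NAT gateway in a public subnet. A minimal boto3 sketch under that assumption, with hypothetical resource IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs for illustration only.
PUBLIC_SUBNET_ID = "subnet-0pub1234"
PRIVATE_ROUTE_TABLE_ID = "rtb-0priv5678"

# Allocate an Elastic IP and create a managed NAT gateway in a public subnet.
eip = ec2.allocate_address(Domain="vpc")
natgw = ec2.create_nat_gateway(SubnetId=PUBLIC_SUBNET_ID, AllocationId=eip["AllocationId"])
natgw_id = natgw["NatGateway"]["NatGatewayId"]

ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[natgw_id])

# Route internet-bound traffic from the private subnets through the NAT gateway.
ec2.create_route(
    RouteTableId=PRIVATE_ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=natgw_id,
)
```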
Question 333
A company needs to transfer 600 TB of data from its on-premises network-attached storage (NAS) system to the AWS Cloud. The data transfer must be complete within 2 weeks. The data is sensitive and must be encrypted in transit. The company's internet connection can support an upload speed of 100 Mbps.
Which solution meets these requirements MOST cost-effectively?
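The bandwidth constraint can be checked with quick arithmetic: at 100 Mbps, 600 TB takes well over a year to upload, which rules out online transfer within the 2-week deadline and points to an offline transfer device such as AWS Snowball (which encrypts data in transit and at rest).

```python
# Back-of-the-envelope check: can 600 TB move over a 100 Mbps uplink in 2 weeks?
data_bits = 600e12 * 8                # 600 TB expressed in bits
uplink_bps = 100e6                    # 100 Mbps upload speed
seconds = data_bits / uplink_bps
print(f"{seconds / 86400:.0f} days")  # ~556 days, far beyond the 2-week deadline
```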
Question 334
A company needs a backup strategy for its three-tier stateless web application. The web application runs on Amazon EC2 instances in an Auto Scaling group with a dynamic scaling policy that is configured to respond to scaling events. The database tier runs on Amazon RDS for PostgreSQL. The web application does not require temporary local storage on the EC2 instances. The company's recovery point objective (RPO) is 2 hours. The backup strategy must maximize scalability and optimize resource utilization for this environment. Which solution will meet these requirements?
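The answer is not reproduced here; because the web tier is stateless, only the RDS for PostgreSQL tier needs backing up, and snapshots every 2 hours would satisfy the 2-hour RPO. A minimal boto3 sketch of the snapshot call such a scheduled job might make, with a hypothetical DB instance identifier:

```python
import boto3
from datetime import datetime, timezone

rds = boto3.client("rds")

DB_INSTANCE_ID = "webapp-postgres"  # hypothetical identifier

# Take a manual snapshot; invoking this every 2 hours (e.g., from a scheduled
# job) keeps the recovery point objective at or under 2 hours.
timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%d-%H-%M")
rds.create_db_snapshot(
    DBInstanceIdentifier=DB_INSTANCE_ID,
    DBSnapshotIdentifier=f"{DB_INSTANCE_ID}-{timestamp}",
)
```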
Question 335
A company needs to ingest and handle large amounts of streaming data that its application generates. The application runs on Amazon EC2 instances and sends data to Amazon Kinesis Data Streams, which is configured with default settings. Every other day, the application consumes the data and writes the data to an Amazon S3 bucket for business intelligence (BI) processing. The company observes that Amazon S3 is not receiving all the data that the application sends to Kinesis Data Streams.
What should a solutions architect do to resolve this issue?
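Kinesis Data Streams retains records for 24 hours by default, so a consumer that runs every other day reads the stream after older records have already expired, which explains the missing data in S3. Increasing the retention period beyond 48 hours closes the gap; a minimal boto3 sketch with a hypothetical stream name:

```python
import boto3

kinesis = boto3.client("kinesis")

# Kinesis Data Streams keeps records for 24 hours by default. A consumer that
# runs every other day reads after records have already expired, so some data
# never reaches Amazon S3. Raising retention past 48 hours closes the gap.
kinesis.increase_stream_retention_period(
    StreamName="sensor-stream",   # hypothetical stream name
    RetentionPeriodHours=72,      # any value > 48 hours covers the schedule
)
```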
Question 336
A company has migrated an application to Amazon EC2 Linux instances. One of these EC2 instances runs several 1-hour tasks on a schedule. These tasks were written by different teams and have no common programming language. The company is concerned about performance and scalability while these tasks run on a single instance. A solutions architect needs to implement a solution to resolve these concerns. Which solution will meet these requirements with the LEAST operational overhead?
Explanation:
AWS Batch is a fully managed service that runs batch jobs on AWS. It can handle tasks written in different languages, run them on EC2 instances, and integrate with Amazon EventBridge (Amazon CloudWatch Events) to schedule jobs on time or event triggers. This meets the requirements for performance and scalability with the least operational overhead.

The other options fall short:

b) Convert the EC2 instance to a container and use AWS App Runner to create the container on demand to run the tasks as jobs. This does not minimize operational overhead: it requires containerizing the instance, and AWS App Runner is a service for automatically building, deploying, and load balancing web applications, which is unnecessary for batch jobs.

c) Copy the tasks into AWS Lambda functions and schedule them with Amazon EventBridge (Amazon CloudWatch Events). This does not meet the performance requirement: AWS Lambda caps execution time at 15 minutes and memory at 10 GB, which may be insufficient for 1-hour tasks.

d) Create an Amazon Machine Image (AMI) of the EC2 instance that runs the tasks, and create an Auto Scaling group with the AMI to run multiple copies of the instance. This does not minimize operational overhead: the AMIs and Auto Scaling groups are additional resources that must be configured and maintained.

Reference URL: https://docs.aws.amazon.com/whitepapers/latest/aws-overview/compute-services.html
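As a concrete illustration of the AWS Batch approach, the sketch below uses boto3 to create a time-based Amazon EventBridge rule that submits a Batch job on a schedule. The rule name, queue ARN, job definition, and IAM role are hypothetical placeholders:

```python
import boto3

events = boto3.client("events")

# Hypothetical names/ARNs for illustration only.
RULE_NAME = "nightly-tasks"
BATCH_QUEUE_ARN = "arn:aws:batch:us-east-1:123456789012:job-queue/tasks-queue"
JOB_DEFINITION = "team-task-a:1"
ROLE_ARN = "arn:aws:iam::123456789012:role/eventbridge-batch-role"

# Time-based EventBridge rule that submits a Batch job on a schedule.
events.put_rule(Name=RULE_NAME, ScheduleExpression="rate(1 day)")
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{
        "Id": "batch-task-a",
        "Arn": BATCH_QUEUE_ARN,
        "RoleArn": ROLE_ARN,
        "BatchParameters": {
            "JobDefinition": JOB_DEFINITION,
            "JobName": "task-a",
        },
    }],
)
```

Each team's task becomes its own job definition, so the mixed-language tasks run independently without sharing a single instance.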
Question 337
A company wants to migrate an Oracle database to AWS. The database consists of a single table that contains millions of geographic information systems (GIS) images that are high resolution and are identified by a geographic code. When a natural disaster occurs, tens of thousands of images get updated every few minutes. Each geographic code has a single image or row that is associated with it. The company wants a solution that is highly available and scalable during such events. Which solution meets these requirements MOST cost-effectively?
Explanation:
Amazon S3 is a highly scalable, durable, and cost-effective object storage service that can store millions of images. Amazon DynamoDB is a fully managed NoSQL database that delivers high throughput and low latency for key-value and document data. By storing the images in S3 and the geographic codes with their image S3 URLs in DynamoDB, the solution achieves high availability and scalability during natural disasters. It can also use DynamoDB features such as caching, auto scaling, and global tables to improve performance and reduce costs.

The other options fall short:

a) Store the images and geographic codes in a database table and use Oracle running on an Amazon RDS Multi-AZ DB instance. This is neither scalable nor cost-effective: a relational database such as Oracle does not handle large volumes of unstructured data such as images efficiently, and it carries higher licensing and operational costs than S3 and DynamoDB.

c) Store the images and geographic codes in an Amazon DynamoDB table and configure DynamoDB Accelerator (DAX) during times of high load. This is not cost-effective: storing images in DynamoDB consumes more storage and incurs higher charges than storing them in S3, and DAX clusters require additional configuration and management to handle high load.

d) Store the images in Amazon S3 buckets, and store the geographic codes and image S3 URLs in a database table on Oracle running on an Amazon RDS Multi-AZ DB instance. This is neither scalable nor cost-effective: Oracle does not deliver the high throughput and low latency of a key-value store for data such as geographic codes, and it carries higher licensing and operational costs than DynamoDB.

Reference URL: https://dynobase.dev/dynamodb-vs-s3/
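A minimal boto3 sketch of the chosen design, writing each image to S3 and indexing it by geographic code in DynamoDB; the bucket name, table name, and attribute names are hypothetical:

```python
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")

BUCKET = "gis-images-example"        # hypothetical bucket name
table = dynamodb.Table("GeoImages")  # hypothetical table keyed on geographic code

def save_image(geo_code: str, image_bytes: bytes) -> None:
    """Store the image in S3 and index it by geographic code in DynamoDB."""
    key = f"images/{geo_code}.png"
    s3.put_object(Bucket=BUCKET, Key=key, Body=image_bytes)
    # One item per geographic code; updates during a disaster overwrite in place.
    table.put_item(Item={"geo_code": geo_code, "s3_url": f"s3://{BUCKET}/{key}"})

save_image("GEO-12345", b"\x89PNG...")  # truncated example payload
```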
Question 338
A research laboratory needs to process approximately 8 TB of data. The laboratory requires submillisecond latencies and a minimum throughput of 6 GBps for the storage subsystem. Hundreds of Amazon EC2 instances that run Amazon Linux will distribute and process the data. Which solution will meet the performance requirements?
Explanation:
Create an Amazon S3 bucket to store the raw data. Create an Amazon FSx for Lustre file system that uses persistent SSD storage, and select the option to import data from and export data to Amazon S3. Mount the file system on the EC2 instances. Amazon FSx for Lustre on SSD storage delivers submillisecond latencies and can scale to 6 GBps or more of throughput, and it can import data from and export data to Amazon S3. The persistent storage option also ensures that the data is stored on disk and is not lost if the file system is stopped.
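A boto3 sketch of provisioning such a file system; the subnet ID and S3 paths are hypothetical, and the capacity is sized so that 200 MB/s per TiB of persistent SSD throughput yields roughly the required 6 GBps aggregate:

```python
import boto3

fsx = boto3.client("fsx")

# Persistent SSD Lustre file system linked to an S3 bucket for import/export.
# Names, sizes, and the subnet ID are illustrative assumptions.
response = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=31200,            # GiB; ~30 TiB x 200 MB/s/TiB ~= 6 GBps aggregate
    StorageType="SSD",
    SubnetIds=["subnet-0abc123"],
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_1",        # persistent SSD storage
        "PerUnitStorageThroughput": 200,         # MB/s per TiB; scales aggregate throughput
        "ImportPath": "s3://raw-research-data",  # import raw data from S3
        "ExportPath": "s3://raw-research-data/results",  # export processed data back
    },
)
print(response["FileSystem"]["FileSystemId"])
```

The hundreds of EC2 instances then mount the file system with the Lustre client and share it as a common high-throughput working set.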
Question 339
A company has implemented a self-managed DNS service on AWS. The solution consists of the following:
• Amazon EC2 instances in different AWS Regions
• Endpoints of a standard accelerator in AWS Global Accelerator
The company wants to protect the solution against DDoS attacks. What should a solutions architect do to meet this requirement?
Explanation:
AWS Shield is a managed service that protects applications running on AWS against Distributed Denial of Service (DDoS) attacks. AWS Shield Standard is enabled automatically for all AWS customers at no additional cost. AWS Shield Advanced is an optional paid service that provides additional protection against larger and more sophisticated attacks for applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53.
https://docs.aws.amazon.com/waf/latest/developerguide/ddos-event-mitigation-logic-gax.html
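A minimal boto3 sketch of enabling Shield Advanced protection for the standard accelerator; the accelerator ARN is a hypothetical placeholder, and the account must already hold a Shield Advanced subscription:

```python
import boto3

shield = boto3.client("shield")

# Shield Advanced requires an active subscription on the account.
# shield.create_subscription()

# Protect the Global Accelerator endpoint of the self-managed DNS solution;
# the accelerator ARN below is a hypothetical placeholder.
shield.create_protection(
    Name="dns-accelerator-protection",
    ResourceArn="arn:aws:globalaccelerator::123456789012:accelerator/example-id",
)
```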
Question 340
A company needs to export its database once a day to Amazon S3 for other teams to access. The exported object size varies between 2 GB and 5 GB. The S3 access pattern for the data is variable and changes rapidly. The data must be immediately available and must remain accessible for up to 3 months. The company needs the most cost-effective solution that will not increase retrieval time. Which S3 storage class should the company use to meet these requirements?
Explanation:
S3 Intelligent-Tiering is a cost-optimized storage class that automatically moves data to the most cost-effective access tier based on changing access patterns, but its optional archive access tiers can add retrieval delays, which conflicts with the requirement that the data be immediately available. S3 Standard-Infrequent Access (S3 Standard-IA), on the other hand, provides low-cost storage with the same low latency and high throughput performance as S3 Standard. It is designed for data that is accessed less frequently but must be retrieved rapidly when needed. It is the cost-effective choice that keeps the data immediately available and accessible for up to 3 months.
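A minimal boto3 sketch of the resulting setup: the daily export is written directly to S3 Standard-IA, and a lifecycle rule expires objects after the 3-month access window. The bucket name and key are hypothetical:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "daily-db-exports"  # hypothetical bucket name

# Write the daily export straight into S3 Standard-IA: low cost, no retrieval delay.
s3.put_object(
    Bucket=BUCKET,
    Key="exports/2024-01-01.dump",
    Body=b"...",                 # the 2-5 GB export payload
    StorageClass="STANDARD_IA",
)

# Expire objects after the 3-month access window so storage stops accruing cost.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-after-3-months",
            "Status": "Enabled",
            "Filter": {"Prefix": "exports/"},
            "Expiration": {"Days": 90},
        }]
    },
)
```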