Amazon SAA-C03 Practice Test - Questions Answers, Page 48

A company wants to use Amazon Elastic Container Service (Amazon ECS) clusters and Amazon RDS DB instances to build and run a payment processing application. The company will run the application in its on-premises data center for compliance purposes.

A solutions architect wants to use AWS Outposts as part of the solution. The solutions architect is working with the company's operational team to build the application.

Which activities are the responsibility of the company's operational team? (Select THREE.)

A. Providing resilient power and network connectivity to the Outposts racks
B. Managing the virtualization hypervisor, storage systems, and the AWS services that run on Outposts
C. Physical security and access controls of the data center environment
D. Availability of the Outposts infrastructure including the power supplies, servers, and networking equipment within the Outposts racks
E. Physical maintenance of Outposts components
F. Providing extra capacity for Amazon ECS clusters to mitigate server failures and maintenance events
Suggested answer: A, C, F

Explanation:

These answers are correct because they reflect the customer's responsibilities for using AWS Outposts as part of the solution. According to the AWS shared responsibility model, the customer is responsible for providing resilient power and network connectivity to the Outposts racks, ensuring physical security and access controls of the data center environment, and providing extra capacity for Amazon ECS clusters to mitigate server failures and maintenance events. AWS is responsible for managing the virtualization hypervisor, storage systems, and the AWS services that run on Outposts, as well as the availability of the Outposts infrastructure including the power supplies, servers, and networking equipment within the Outposts racks, and the physical maintenance of Outposts components.

https://docs.aws.amazon.com/outposts/latest/userguide/what-is-outposts.html

https://www.contino.io/insights/the-sandwich-responsibility-model-aws-outposts/

A company is developing a new machine learning (ML) model solution on AWS. The models are developed as independent microservices that fetch approximately 1 GB of model data from Amazon S3 at startup and load the data into memory. Users access the models through an asynchronous API. Users can send a request or a batch of requests and specify where the results should be sent.

The company provides models to hundreds of users. The usage patterns for the models are irregular. Some models could be unused for days or weeks. Other models could receive batches of thousands of requests at a time.

Which design should a solutions architect recommend to meet these requirements?

A. Direct the requests from the API to a Network Load Balancer (NLB). Deploy the models as AWS Lambda functions that are invoked by the NLB.
B. Direct the requests from the API to an Application Load Balancer (ALB). Deploy the models as Amazon Elastic Container Service (Amazon ECS) services that read from an Amazon Simple Queue Service (Amazon SQS) queue. Use AWS App Mesh to scale the instances of the ECS cluster based on the SQS queue size.
C. Direct the requests from the API into an Amazon Simple Queue Service (Amazon SQS) queue. Deploy the models as AWS Lambda functions that are invoked by SQS events. Use AWS Auto Scaling to increase the number of vCPUs for the Lambda functions based on the SQS queue size.
D. Direct the requests from the API into an Amazon Simple Queue Service (Amazon SQS) queue. Deploy the models as Amazon Elastic Container Service (Amazon ECS) services that read from the queue. Enable AWS Auto Scaling on Amazon ECS for both the cluster and copies of the service based on the queue size.
Suggested answer: D

Explanation:

This answer is correct because it meets the requirements of running the ML models as independent microservices that can handle irregular and unpredictable usage patterns. By directing the requests from the API into an Amazon SQS queue, the company can decouple the request processing from the model execution, and ensure that no requests are lost due to spikes in demand. By deploying the models as Amazon ECS services that read from the queue, the company can leverage containers to isolate and package each model as a microservice, and fetch the model data from S3 at startup. By enabling AWS Auto Scaling on Amazon ECS for both the cluster and copies of the service based on the queue size, the company can automatically scale up or down the number of EC2 instances in the cluster and the number of tasks in each service to match the demand and optimize performance.

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html

https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-ecs.html
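
For illustration, the sketch below wires up option D's service-level scaling with boto3. The cluster, service, and queue names are hypothetical, and it tracks the raw queue depth with a target-tracking policy; production setups often track a computed "backlog per task" metric instead.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the ECS service's desired task count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/ml-cluster/model-a-service",  # hypothetical names
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,
    MaxCapacity=50,
)

# Scale the task count to hold the SQS queue depth near a target value.
autoscaling.put_scaling_policy(
    PolicyName="model-a-queue-depth",
    ServiceNamespace="ecs",
    ResourceId="service/ml-cluster/model-a-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 100.0,  # assumed target of ~100 visible messages
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "model-a-requests"}],
            "Statistic": "Average",
        },
    },
)
```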

A company operates an ecommerce website on Amazon EC2 instances behind an Application Load Balancer (ALB) in an Auto Scaling group. The site is experiencing performance issues related to a high request rate from illegitimate external systems with changing IP addresses. The security team is worried about potential DDoS attacks against the website. The company must block the illegitimate incoming requests in a way that has a minimal impact on legitimate users.

What should a solutions architect recommend?

A. Deploy Amazon Inspector and associate it with the ALB.
B. Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule.
C. Deploy rules to the network ACLs associated with the ALB to block the incoming traffic.
D. Deploy Amazon GuardDuty and enable rate-limiting protection when configuring GuardDuty.
Suggested answer: B

Explanation:

This answer is correct because it meets the requirements of blocking the illegitimate incoming requests in a way that has a minimal impact on legitimate users. AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns you define. You can associate AWS WAF with an ALB to protect the web application from malicious requests. You can configure a rate-limiting rule in AWS WAF to track the rate of requests for each originating IP address and block requests from an IP address that exceeds a certain limit within a five-minute period. This way, you can mitigate potential DDoS attacks and improve the performance of your website.

https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html

https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-rate-based.html
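
A minimal sketch of option B with the boto3 WAFv2 API follows. The web ACL name, ALB ARN, and the 2,000-request limit are illustrative assumptions; the limit applies per source IP over a rolling 5-minute window.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Create a web ACL whose only rule blocks IPs that exceed the rate limit.
acl = wafv2.create_web_acl(
    Name="ecommerce-waf",  # hypothetical name
    Scope="REGIONAL",  # REGIONAL scope is required to associate with an ALB
    DefaultAction={"Allow": {}},  # legitimate traffic passes through
    Rules=[
        {
            "Name": "rate-limit-per-ip",
            "Priority": 1,
            "Statement": {
                "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "RateLimitPerIP",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "EcommerceWaf",
    },
)

# Attach the web ACL to the ALB (hypothetical ARN).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                "loadbalancer/app/ecommerce-alb/0123456789abcdef",
)
```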

A company wants to use artificial intelligence (AI) to determine the quality of its customer service calls. The company currently manages calls in four different languages, including English. The company will offer new languages in the future. The company does not have the resources to regularly maintain machine learning (ML) models.

The company needs to create written sentiment analysis reports from the customer service call recordings. The customer service call recording text must be translated into English.

Which combination of steps will meet these requirements? (Select THREE.)

A. Use Amazon Comprehend to translate the audio recordings into English.
B. Use Amazon Lex to create the written sentiment analysis reports.
C. Use Amazon Polly to convert the audio recordings into text.
D. Use Amazon Transcribe to convert the audio recordings in any language into text.
E. Use Amazon Translate to translate text in any language to English.
F. Use Amazon Comprehend to create the sentiment analysis reports.
Suggested answer: D, E, F

Explanation:

These answers are correct because they meet the requirements of creating written sentiment analysis reports from the customer service call recordings in any language and translating them into English. Amazon Transcribe is a service that uses advanced machine learning technologies to recognize speech in audio files and transcribe them into text. You can use Amazon Transcribe to convert the audio recordings in any language into text, and specify the language code of the source audio. Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation. You can use Amazon Translate to translate text in any language to English, and specify the source and target language codes. Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. You can use Amazon Comprehend to create the sentiment analysis reports, which determine if the text is positive, negative, neutral, or mixed.

https://docs.aws.amazon.com/transcribe/latest/dg/what-is-transcribe.html

https://docs.aws.amazon.com/translate/latest/dg/what-is.html

https://docs.aws.amazon.com/comprehend/latest/dg/how-sentiment.html
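
The three services in options D, E, and F could be chained roughly as follows. The bucket names and job name are hypothetical, and the step that polls the transcription job and fetches the transcript JSON from S3 is elided.

```python
import boto3

transcribe = boto3.client("transcribe")
translate = boto3.client("translate")
comprehend = boto3.client("comprehend")

# 1. Transcribe the recording; automatic language identification avoids
#    per-language configuration as new languages are added.
transcribe.start_transcription_job(
    TranscriptionJobName="call-0001",  # hypothetical job name
    Media={"MediaFileUri": "s3://call-recordings/call-0001.wav"},
    IdentifyLanguage=True,
    OutputBucketName="call-transcripts",
)

# ... poll the job until it completes, then load the transcript text ...
transcript_text = "..."  # placeholder for the fetched transcript

# 2. Translate the transcript into English from any source language.
translated = translate.translate_text(
    Text=transcript_text,
    SourceLanguageCode="auto",
    TargetLanguageCode="en",
)

# 3. Run sentiment analysis on the English text for the written report.
sentiment = comprehend.detect_sentiment(
    Text=translated["TranslatedText"],
    LanguageCode="en",
)
print(sentiment["Sentiment"], sentiment["SentimentScore"])
```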

A manufacturing company has machine sensors that upload .csv files to an Amazon S3 bucket. These .csv files must be converted into images and must be made available as soon as possible for the automatic generation of graphical reports.

The images become irrelevant after 1 month, but the .csv files must be kept to train machine learning (ML) models twice a year. The ML trainings and audits are planned weeks in advance.

Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)

A. Launch an Amazon EC2 Spot Instance that downloads the .csv files every hour, generates the image files, and uploads the images to the S3 bucket.
B. Design an AWS Lambda function that converts the .csv files into images and stores the images in the S3 bucket. Invoke the Lambda function when a .csv file is uploaded.
C. Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files from S3 Standard to S3 Glacier 1 day after they are uploaded. Expire the image files after 30 days.
D. Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) 1 day after they are uploaded. Expire the image files after 30 days.
E. Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 1 day after they are uploaded. Keep the image files in Reduced Redundancy Storage (RRS).
Suggested answer: B, C

Explanation:

These answers are correct because they meet the requirements of converting the .csv files into images, making them available as soon as possible, and minimizing the storage costs. AWS Lambda is a service that lets you run code without provisioning or managing servers. You can use AWS Lambda to design a function that converts the .csv files into images and stores the images in the S3 bucket. You can invoke the Lambda function when a .csv file is uploaded to the S3 bucket by using an S3 event notification. This way, you can ensure that the images are generated and made available as soon as possible for the graphical reports. S3 Lifecycle is a feature that enables you to manage your objects so that they are stored cost effectively throughout their lifecycle. You can create S3 Lifecycle rules for .csv files and image files in the S3 bucket to transition them to different storage classes or expire them based on your business needs. You can transition the .csv files from S3 Standard to S3 Glacier 1 day after they are uploaded, since they are only needed twice a year for ML trainings and audits that are planned weeks in advance. S3 Glacier is a storage class for data archiving that offers secure, durable, and extremely low-cost storage with retrieval times ranging from minutes to hours. You can expire the image files after 30 days, since they become irrelevant after 1 month.

https://docs.aws.amazon.com/lambda/latest/dg/welcome.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/NotificationHowTo.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html#sc-glacier
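
For illustration, both pieces can be configured with boto3 as sketched below. The bucket, function ARN, and key prefixes are hypothetical; the lifecycle rules assume the .csv files and images land under "csv/" and "images/" prefixes, since S3 Lifecycle filters by prefix (or tag) rather than by file extension.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "sensor-data-bucket"  # hypothetical bucket name

# Option B: invoke the converter Lambda function whenever a .csv file lands.
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:"
                                     "function:csv-to-image",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "suffix", "Value": ".csv"}]}
                },
            }
        ]
    },
)

# Option C: archive .csv files after 1 day and expire images after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-csv-to-glacier",
                "Filter": {"Prefix": "csv/"},
                "Status": "Enabled",
                # Trainings are planned weeks ahead, so Glacier retrieval
                # times of minutes to hours are acceptable.
                "Transitions": [{"Days": 1, "StorageClass": "GLACIER"}],
            },
            {
                "ID": "expire-images-after-30-days",
                "Filter": {"Prefix": "images/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            },
        ]
    },
)
```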

A company used an Amazon RDS for MySQL DB instance during application testing. Before terminating the DB instance at the end of the test cycle, a solutions architect created two backups. The solutions architect created the first backup by using the mysqldump utility to create a database dump. The solutions architect created the second backup by enabling the final DB snapshot option on RDS termination.

The company is now planning for a new test cycle and wants to create a new DB instance from the most recent backup. The company has chosen a MySQL-compatible edition of Amazon Aurora to host the DB instance.

Which solutions will create the new DB instance? (Select TWO.)

A. Import the RDS snapshot directly into Aurora.
B. Upload the RDS snapshot to Amazon S3. Then import the RDS snapshot into Aurora.
C. Upload the database dump to Amazon S3. Then import the database dump into Aurora.
D. Use AWS Database Migration Service (AWS DMS) to import the RDS snapshot into Aurora.
E. Upload the database dump to Amazon S3. Then use AWS Database Migration Service (AWS DMS) to import the database dump into Aurora.
Suggested answer: A, C

Explanation:

These answers are correct because they meet the requirements of creating a new DB instance from the most recent backup and using a MySQL-compatible edition of Amazon Aurora to host the DB instance. You can import the RDS snapshot directly into Aurora if the MySQL DB instance and the Aurora DB cluster are running the same version of MySQL. For example, you can restore a MySQL version 5.6 snapshot directly to Aurora MySQL version 5.6, but not to Aurora MySQL version 5.7. This method is simple and requires the fewest steps. You can upload the database dump to Amazon S3 and then import it into Aurora if the MySQL DB instance and the Aurora DB cluster are running different versions of MySQL. For example, you can import a MySQL version 5.6 database dump into Aurora MySQL version 5.7. This method is more flexible and allows you to migrate across different versions of MySQL.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Migrating.RDSMySQL.Import.html

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Migrating.RDSMySQL.Dump.html
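
Option A could be carried out with boto3 roughly as below; the snapshot and cluster identifiers are hypothetical. Note that a restored Aurora cluster has no compute until a DB instance is added to it.

```python
import boto3

rds = boto3.client("rds")

# Restore the final RDS for MySQL snapshot into a new Aurora MySQL cluster.
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="payments-test-aurora",
    SnapshotIdentifier="rds:payments-db-final-snapshot",  # hypothetical
    Engine="aurora-mysql",
)

# Add a DB instance so the cluster has compute to serve queries.
rds.create_db_instance(
    DBInstanceIdentifier="payments-test-aurora-1",
    DBClusterIdentifier="payments-test-aurora",
    Engine="aurora-mysql",
    DBInstanceClass="db.r6g.large",  # assumed instance class
)
```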

A company is storing 700 terabytes of data on a large network-attached storage (NAS) system in its corporate data center. The company has a hybrid environment with a 10 Gbps AWS Direct Connect connection.

After an audit from a regulator, the company has 90 days to move the data to the cloud. The company needs to move the data efficiently and without disruption. The company still needs to be able to access and update the data during the transfer window.

Which solution will meet these requirements?

A. Create an AWS DataSync agent in the corporate data center. Create a data transfer task. Start the transfer to an Amazon S3 bucket.
B. Back up the data to AWS Snowball Edge Storage Optimized devices. Ship the devices to an AWS data center. Mount a target Amazon S3 bucket on the on-premises file system.
C. Use rsync to copy the data directly from local storage to a designated Amazon S3 bucket over the Direct Connect connection.
D. Back up the data on tapes. Ship the tapes to an AWS data center. Mount a target Amazon S3 bucket on the on-premises file system.
Suggested answer: A

Explanation:

This answer is correct because it meets the requirements of moving the data efficiently and without disruption, and still being able to access and update the data during the transfer window. AWS DataSync is an online data movement and discovery service that simplifies and accelerates data migrations to AWS and helps you move data quickly and securely between on-premises storage, edge locations, other clouds, and AWS Storage. You can create an AWS DataSync agent in the corporate data center to connect your NAS system to AWS over the Direct Connect connection. You can create a data transfer task to specify the source location, destination location, and options for transferring the data. You can start the transfer to an Amazon S3 bucket and monitor the progress of the task. DataSync automatically encrypts data in transit and verifies data integrity during transfer. DataSync also supports incremental transfers, which means that only files that have changed since the last transfer are copied. This way, you can ensure that your data is synchronized between your NAS system and S3 bucket, and you can access and update the data during the transfer window.

https://docs.aws.amazon.com/datasync/latest/userguide/what-is-datasync.html

https://docs.aws.amazon.com/datasync/latest/userguide/how-datasync-works.html
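
A rough boto3 sketch of option A, assuming the DataSync agent has already been deployed on premises and activated; the hostname, ARNs, and bucket name are placeholders.

```python
import boto3

datasync = boto3.client("datasync")

# Source: the on-premises NAS, reached through the activated agent.
source = datasync.create_location_nfs(
    ServerHostname="nas.corp.example.com",  # placeholder hostname
    Subdirectory="/export/data",
    OnPremConfig={
        "AgentArns": ["arn:aws:datasync:us-east-1:123456789012:agent/agent-0abc"]
    },
)

# Destination: the target S3 bucket, via an IAM role DataSync can assume.
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::migrated-nas-data",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::123456789012:role/datasync-s3"},
)

# Each execution copies only changed files, so the NAS stays usable and
# updatable throughout the 90-day transfer window.
task = datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="nas-to-s3-migration",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])
```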

A company moved its on-premises PostgreSQL database to an Amazon RDS for PostgreSQL DB instance. The company successfully launched a new product. The workload on the database has increased.

The company wants to accommodate the larger workload without adding infrastructure.

Which solution will meet these requirements MOST cost-effectively?

A. Buy reserved DB instances for the total workload. Make the Amazon RDS for PostgreSQL DB instance larger.
B. Make the Amazon RDS for PostgreSQL DB instance a Multi-AZ DB instance.
C. Buy reserved DB instances for the total workload. Add another Amazon RDS for PostgreSQL DB instance.
D. Make the Amazon RDS for PostgreSQL DB instance an on-demand DB instance.
Suggested answer: A

Explanation:

This answer is correct because it meets the requirements of accommodating the larger workload without adding infrastructure and minimizing the cost. Reserved DB instances are a billing discount applied to the use of certain on-demand DB instances in your account. Reserved DB instances provide you with a significant discount compared to on-demand DB instance pricing. You can buy reserved DB instances for the total workload and choose between three payment options: No Upfront, Partial Upfront, or All Upfront. You can make the Amazon RDS for PostgreSQL DB instance larger by modifying its instance type to a higher performance class. This way, you can increase the CPU, memory, and network capacity of your DB instance and handle the increased workload.

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithReservedDBInstances.html

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html
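
Option A comes down to two operations, sketched below with hypothetical identifiers and an assumed target instance class; the reservation lookup assumes a matching offering exists in the Region.

```python
import boto3

rds = boto3.client("rds")

# Scale the existing instance up in place -- no new infrastructure.
rds.modify_db_instance(
    DBInstanceIdentifier="product-db",  # hypothetical identifier
    DBInstanceClass="db.r6g.2xlarge",   # assumed larger class
    ApplyImmediately=True,  # otherwise waits for the maintenance window
)

# Purchase a reservation that matches the new instance class.
offerings = rds.describe_reserved_db_instances_offerings(
    ProductDescription="postgresql",
    DBInstanceClass="db.r6g.2xlarge",
    OfferingType="No Upfront",
)
rds.purchase_reserved_db_instances_offering(
    ReservedDBInstancesOfferingId=offerings["ReservedDBInstancesOfferings"][0][
        "ReservedDBInstancesOfferingId"
    ],
)
```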

A company hosts a multi-tier web application on Amazon Linux Amazon EC2 instances behind an Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The company observes that the Auto Scaling group launches more On-Demand Instances when the application's end users access high volumes of static web content. The company wants to optimize cost.

What should a solutions architect do to redesign the application MOST cost-effectively?

A. Update the Auto Scaling group to use Reserved Instances instead of On-Demand Instances.
B. Update the Auto Scaling group to scale by launching Spot Instances instead of On-Demand Instances.
C. Create an Amazon CloudFront distribution to host the static web contents from an Amazon S3 bucket.
D. Create an AWS Lambda function behind an Amazon API Gateway API to host the static website contents.
Suggested answer: C

Explanation:

This answer is correct because it meets the requirements of optimizing cost and reducing the load on the EC2 instances. Amazon CloudFront is a content delivery network (CDN) service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the request is routed to the edge location that provides the lowest latency, so content is delivered with the best possible performance. You can create an Amazon CloudFront distribution to host the static web contents from an Amazon S3 bucket, which is an origin that you define for CloudFront. This way, you can offload the requests for static web content from your EC2 instances to CloudFront, which can improve the performance and availability of your website and reduce the number of On-Demand Instances the Auto Scaling group needs to launch.

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
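
A trimmed sketch of option C with boto3 follows; the bucket name and caller reference are hypothetical, and the cache policy ID is the managed CachingOptimized policy.

```python
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": "static-content-2024-01-01",  # any unique string
        "Comment": "Static web content offloaded from EC2",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "static-s3-origin",
                    "DomainName": "static-content-bucket.s3.amazonaws.com",
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "static-s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Managed "CachingOptimized" cache policy.
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    }
)
```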

A company is designing a containerized application that will use Amazon Elastic Container Service (Amazon ECS). The application needs to access a shared file system that is highly durable and can recover data to another AWS Region with a recovery point objective (RPO) of 8 hours. The file system needs to provide a mount target in each Availability Zone within a Region.

A solutions architect wants to use AWS Backup to manage the replication to another Region.

Which solution will meet these requirements?

A.
'Amazon FSx for Windows File Server with a Multi-AZ deployment
A.
'Amazon FSx for Windows File Server with a Multi-AZ deployment
Answers
B.
Amazon FSx for NetApp ONTAP with a Multi-AZ deployment
B.
Amazon FSx for NetApp ONTAP with a Multi-AZ deployment
Answers
C.
'Amazon Elastic File System (Amazon EFS) with the Standard storage class
C.
'Amazon Elastic File System (Amazon EFS) with the Standard storage class
Answers
D.
Amazon FSx for OpenZFS
D.
Amazon FSx for OpenZFS
Answers
Suggested answer: B

Explanation:

This answer is correct because it meets the requirements of accessing a shared file system that is highly durable, can recover data to another AWS Region, and provides a mount target in each Availability Zone within a Region. Amazon FSx for NetApp ONTAP is a fully managed service that provides enterprise-grade data management and storage for your Windows and Linux applications. You can use Amazon FSx for NetApp ONTAP to create Multi-AZ file systems that provide high availability and durability within an AWS Region, with endpoints accessible from each Availability Zone. You can also use AWS Backup to manage the replication of your file systems to another AWS Region, meeting a recovery point objective (RPO) of 8 hours. AWS Backup is a fully managed backup service that automates and centralizes backup of data across AWS services. You can use AWS Backup to create backup policies and monitor activity for your AWS resources in one place.

https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/what-is.html

https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html
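
The AWS Backup side of the solution could look roughly like this: a plan that backs up the file system every 8 hours and copies each recovery point to a vault in a second Region, which satisfies the 8-hour RPO. All names and ARNs are placeholders.

```python
import boto3

backup = boto3.client("backup")

# Back up every 8 hours and copy each recovery point cross-Region.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "fsx-ontap-cross-region",
        "Rules": [
            {
                "RuleName": "every-8-hours-with-copy",
                "TargetBackupVaultName": "primary-vault",
                "ScheduleExpression": "cron(0 0/8 * * ? *)",  # every 8 hours
                "CopyActions": [
                    {
                        "DestinationBackupVaultArn": "arn:aws:backup:us-west-2:"
                        "123456789012:backup-vault:dr-vault"
                    }
                ],
            }
        ],
    }
)

# Assign the FSx for NetApp ONTAP file system to the plan.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "fsx-ontap-file-system",
        "IamRoleArn": "arn:aws:iam::123456789012:role/aws-backup-role",
        "Resources": [
            "arn:aws:fsx:us-east-1:123456789012:file-system/fs-0123456789abcdef0"
        ],
    },
)
```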
