Amazon SAA-C03 Practice Test - Questions Answers, Page 22


Question 211


A company has a Microsoft .NET application that runs on an on-premises Windows Server. The application stores data by using an Oracle Database Standard Edition server. The company is planning a migration to AWS and wants to minimize development changes while moving the application. The AWS application environment should be highly available.

Which combination of actions should the company take to meet these requirements? (Select TWO.)

A. Refactor the application as serverless with AWS Lambda functions running .NET Core.
B. Rehost the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment.
C. Replatform the application to run on Amazon EC2 with the Amazon Linux Amazon Machine Image (AMI).
D. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Amazon DynamoDB in a Multi-AZ deployment.
E. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Oracle on Amazon RDS in a Multi-AZ deployment.
Suggested answer: B, E
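For option E, the migration target is an Oracle database on Amazon RDS with Multi-AZ enabled. The following is a minimal boto3 sketch of creating such an instance; the Region, identifier, instance class, credentials, and storage size are hypothetical placeholders, not values from the question.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # assumed Region

# Create an Oracle SE2 instance with a synchronous standby in a second AZ (Multi-AZ).
rds.create_db_instance(
    DBInstanceIdentifier="orders-oracle",        # hypothetical name
    Engine="oracle-se2",
    LicenseModel="license-included",
    DBInstanceClass="db.m5.large",               # hypothetical size
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="CHANGE_ME",              # use AWS Secrets Manager in practice
    MultiAZ=True,                                # highly available deployment
)
```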

Question 212


A rapidly growing ecommerce company is running its workloads in a single AWS Region. A solutions architect must create a disaster recovery (DR) strategy that includes a different AWS Region. The company wants its database to be up to date in the DR Region with the least possible latency. The remaining infrastructure in the DR Region needs to run at reduced capacity and must be able to scale up if necessary. Which solution will meet these requirements with the LOWEST recovery time objective (RTO)?

A. Use an Amazon Aurora global database with a pilot light deployment.
B. Use an Amazon Aurora global database with a warm standby deployment.
C. Use an Amazon RDS Multi-AZ DB instance with a pilot light deployment.
D. Use an Amazon RDS Multi-AZ DB instance with a warm standby deployment.
Suggested answer: B

Explanation:

https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html
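The lowest-RTO combination pairs an Aurora global database (low-latency, continuously replicated data in the DR Region) with a warm standby that already runs at reduced capacity. The following is a minimal boto3 sketch of attaching a secondary Region to an existing Aurora cluster; the cluster identifiers, account ID, and Regions are hypothetical.

```python
import boto3

# Promote the existing cluster to a global database, then add a secondary Region.
rds_primary = boto3.client("rds", region_name="us-east-1")      # assumed primary Region
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="orders-global",                     # hypothetical name
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:orders",
)

rds_dr = boto3.client("rds", region_name="us-west-2")            # assumed DR Region
rds_dr.create_db_cluster(
    DBClusterIdentifier="orders-dr",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="orders-global",                     # joins as a read-only secondary
)
```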


Question 213


A company's order system sends requests from clients to Amazon EC2 instances. The EC2 instances process the orders and then store the orders in a database on Amazon RDS. Users report that they must reprocess orders when the system fails. The company wants a resilient solution that can process orders automatically if a system outage occurs. What should a solutions architect do to meet these requirements?

A. Move the EC2 instances into an Auto Scaling group. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to target an Amazon Elastic Container Service (Amazon ECS) task.
B. Move the EC2 instances into an Auto Scaling group behind an Application Load Balancer (ALB). Update the order system to send messages to the ALB endpoint.
C. Move the EC2 instances into an Auto Scaling group. Configure the order system to send messages to an Amazon Simple Queue Service (Amazon SQS) queue. Configure the EC2 instances to consume messages from the queue.
D. Create an Amazon Simple Notification Service (Amazon SNS) topic. Create an AWS Lambda function, and subscribe the function to the SNS topic. Configure the order system to send messages to the SNS topic. Send a command to the EC2 instances to process the messages by using AWS Systems Manager Run Command.
Suggested answer: C
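Option C decouples order submission from processing: the order system writes messages to an SQS queue, and the Auto Scaling instances poll the queue, so unprocessed orders simply wait in the queue during an outage. Below is a minimal boto3 sketch of the pattern; the Region, queue name, and message fields are hypothetical.

```python
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")                    # assumed Region
queue_url = sqs.get_queue_url(QueueName="orders-queue")["QueueUrl"]   # hypothetical queue

# Producer (order system): enqueue the order instead of calling an instance directly.
sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps({"orderId": "1234", "qty": 2}))

# Consumer (EC2 worker): poll, process, and delete only after success.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    order = json.loads(msg["Body"])
    # ... process the order and write it to Amazon RDS ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```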

Question 214


A company runs an application on a large fleet of Amazon EC2 instances. The application reads and writes entries into an Amazon DynamoDB table. The size of the DynamoDB table continuously grows, but the application needs only data from the last 30 days. The company needs a solution that minimizes cost and development effort.

Which solution meets these requirements?

A. Use an AWS CloudFormation template to deploy the complete solution. Redeploy the CloudFormation stack every 30 days, and delete the original stack.
B. Use an EC2 instance that runs a monitoring application from AWS Marketplace. Configure the monitoring application to use Amazon DynamoDB Streams to store the timestamp when a new item is created in the table. Use a script that runs on the EC2 instance to delete items that have a timestamp that is older than 30 days.
C. Configure Amazon DynamoDB Streams to invoke an AWS Lambda function when a new item is created in the table. Configure the Lambda function to delete items in the table that are older than 30 days.
D. Extend the application to add an attribute that has a value of the current timestamp plus 30 days to each new item that is created in the table. Configure DynamoDB to use the attribute as the TTL attribute.
Suggested answer: D

Explanation:

Amazon DynamoDB Time to Live (TTL) allows you to define a per-item timestamp to determine when an item is no longer needed. Shortly after the date and time of the specified timestamp, DynamoDB deletes the item from your table without consuming any write throughput. TTL is provided at no extra cost as a means to reduce stored data volumes by retaining only the items that remain current for your workload's needs. TTL is useful if you store items that lose relevance after a specific time. The following are example TTL use cases:

Remove user or sensor data after one year of inactivity in an application.

Archive expired items to an Amazon S3 data lake via Amazon DynamoDB Streams and AWS Lambda.

Retain sensitive data for a certain amount of time according to contractual or regulatory obligations.

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
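The following is a minimal boto3 sketch of option D: enable TTL on a table and write each new item with an expiry attribute set to the current time plus 30 days. The Region, table name, and attribute names are hypothetical.

```python
import time
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")   # assumed Region

# One-time configuration: tell DynamoDB which attribute holds the expiry epoch time.
dynamodb.update_time_to_live(
    TableName="orders",                                         # hypothetical table
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Application write path: stamp each item with "now + 30 days" (epoch seconds).
expires_at = int(time.time()) + 30 * 24 * 60 * 60
dynamodb.put_item(
    TableName="orders",
    Item={
        "order_id": {"S": "1234"},
        "payload": {"S": "..."},
        "expires_at": {"N": str(expires_at)},   # DynamoDB deletes the item after this time
    },
)
```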


Question 215


A company runs a containerized application on a Kubernetes cluster in an on-premises data center. The company is using a MongoDB database for data storage. The company wants to migrate some of these environments to AWS, but no code changes or deployment method changes are possible at this time. The company needs a solution that minimizes operational overhead. Which solution meets these requirements?

A. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes for compute and MongoDB on EC2 for data storage.
B. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute and Amazon DynamoDB for data storage.
C. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes for compute and Amazon DynamoDB for data storage.
D. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute and Amazon DocumentDB (with MongoDB compatibility) for data storage.
Suggested answer: D

Explanation:

Amazon DocumentDB (with MongoDB compatibility) is a fast, reliable, and fully managed database service. Amazon DocumentDB makes it easy to set up, operate, and scale MongoDB-compatible databases in the cloud. With Amazon DocumentDB, you can run the same application code and use the same drivers and tools that you use with MongoDB.

https://docs.aws.amazon.com/documentdb/latest/developerguide/what-is.html
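Because Amazon DocumentDB speaks the MongoDB wire protocol, the containers on EKS with Fargate can keep their existing MongoDB driver and query code and only change the connection string. Below is a minimal pymongo sketch; the cluster endpoint, credentials, and database/collection names are hypothetical, and the CA bundle is the one Amazon publishes for DocumentDB TLS.

```python
from pymongo import MongoClient

# Same driver and query code as with self-managed MongoDB; only the endpoint changes.
client = MongoClient(
    "mongodb://appuser:CHANGE_ME@my-docdb-cluster.cluster-example.us-east-1.docdb.amazonaws.com:27017/"
    "?tls=true&tlsCAFile=global-bundle.pem&replicaSet=rs0&readPreference=secondaryPreferred"
    "&retryWrites=false"                      # DocumentDB does not support retryable writes
)

orders = client["shop"]["orders"]             # hypothetical database/collection
orders.insert_one({"orderId": "1234", "status": "NEW"})
print(orders.find_one({"orderId": "1234"}))
```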


Question 216


A company serves a dynamic website from a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). The website needs to support multiple languages to serve customers around the world. The website's architecture is running in the us-west-1 Region and is exhibiting high request latency for users that are located in other parts of the world. The website needs to serve requests quickly and efficiently regardless of a user's location. However, the company does not want to recreate the existing architecture across multiple Regions.

What should a solutions architect do to meet these requirements?

A. Replace the existing architecture with a website that is served from an Amazon S3 bucket. Configure an Amazon CloudFront distribution with the S3 bucket as the origin. Set the cache behavior settings to cache based on the Accept-Language request header.
B. Configure an Amazon CloudFront distribution with the ALB as the origin. Set the cache behavior settings to cache based on the Accept-Language request header.
C. Create an Amazon API Gateway API that is integrated with the ALB. Configure the API to use the HTTP integration type. Set up an API Gateway stage to enable the API cache based on the Accept-Language request header.
D. Launch an EC2 instance in each additional Region and configure NGINX to act as a cache server for that Region. Put all the EC2 instances and the ALB behind an Amazon Route 53 record set with a geolocation routing policy.
Suggested answer: B
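Option B keeps the existing us-west-1 architecture and puts CloudFront in front of the ALB, caching a separate variant per Accept-Language value at edge locations worldwide. The following is a minimal boto3 sketch of a cache policy that adds Accept-Language to the cache key; the policy name and TTL values are hypothetical.

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Cache key includes the Accept-Language header, so each language variant is cached separately.
cloudfront.create_cache_policy(
    CachePolicyConfig={
        "Name": "CacheByAcceptLanguage",           # hypothetical name
        "MinTTL": 0,
        "DefaultTTL": 86400,
        "MaxTTL": 31536000,
        "ParametersInCacheKeyAndForwardedToOrigin": {
            "EnableAcceptEncodingGzip": True,
            "EnableAcceptEncodingBrotli": True,
            "HeadersConfig": {
                "HeaderBehavior": "whitelist",
                "Headers": {"Quantity": 1, "Items": ["Accept-Language"]},
            },
            "CookiesConfig": {"CookieBehavior": "none"},
            "QueryStringsConfig": {"QueryStringBehavior": "all"},
        },
    }
)
```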

Question 217


A telemarketing company is designing its customer call center functionality on AWS. The company needs a solution that provides multiple speaker recognition and generates transcript files. The company wants to query the transcript files to analyze the business patterns. The transcript files must be stored for 7 years for auditing purposes.

Which solution will meet these requirements?

A. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use machine learning models for transcript file analysis.
B. Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena for transcript file analysis.
C. Use Amazon Translate for multiple speaker recognition. Store the transcript files in Amazon Redshift. Use SQL queries for transcript file analysis.
D. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use Amazon Textract for transcript file analysis.
Suggested answer: B

Explanation:

Amazon Transcribe now supports speaker labeling for streaming transcription. Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for you to convert speech to text. In live audio transcription, each stream of audio may contain multiple speakers. Now you can conveniently turn on the ability to label speakers, thus helping to identify who is saying what in the output transcript.

https://aws.amazon.com/about-aws/whats-new/2020/08/amazon-transcribe-supports-speaker-labeling-streaming-transcription/
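The following is a minimal boto3 sketch of option B's transcription step: start a Transcribe job with speaker labels enabled and write the transcript JSON to S3, where Athena can later query it. The Region, bucket names, job name, and speaker count are hypothetical.

```python
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")    # assumed Region

transcribe.start_transcription_job(
    TranscriptionJobName="call-2024-0001",                          # hypothetical job name
    Media={"MediaFileUri": "s3://call-recordings/call-2024-0001.wav"},
    MediaFormat="wav",
    LanguageCode="en-US",
    OutputBucketName="call-transcripts",        # transcripts kept in S3 (7-year lifecycle policy)
    Settings={
        "ShowSpeakerLabels": True,              # label which speaker said what
        "MaxSpeakerLabels": 2,                  # e.g., agent and customer
    },
)
```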



Question 218


A company is building a new dynamic ordering website. The company wants to minimize server maintenance and patching. The website must be highly available and must scale read and write capacity as quickly as possible to meet changes in user demand.

Which solution will meet these requirements?

A. Host static content in Amazon S3. Host dynamic content by using Amazon API Gateway and AWS Lambda. Use Amazon DynamoDB with on-demand capacity for the database. Configure Amazon CloudFront to deliver the website content.
B. Host static content in Amazon S3. Host dynamic content by using Amazon API Gateway and AWS Lambda. Use Amazon Aurora with Aurora Auto Scaling for the database. Configure Amazon CloudFront to deliver the website content.
C. Host all the website content on Amazon EC2 instances. Create an Auto Scaling group to scale the EC2 instances. Use an Application Load Balancer to distribute traffic. Use Amazon DynamoDB with provisioned write capacity for the database.
D. Host all the website content on Amazon EC2 instances. Create an Auto Scaling group to scale the EC2 instances. Use an Application Load Balancer to distribute traffic. Use Amazon Aurora with Aurora Auto Scaling for the database.
Suggested answer: A
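Option A's database choice matters for scaling read and write capacity as quickly as possible: DynamoDB on-demand mode bills per request and adapts to traffic spikes without capacity planning. The following is a minimal boto3 sketch of creating such a table; the Region, table name, and key name are hypothetical.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")   # assumed Region

# PAY_PER_REQUEST (on-demand) removes the need to provision read/write capacity units.
dynamodb.create_table(
    TableName="orders",                                         # hypothetical table
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
```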

Question 219


A company hosts its application on AWS. The company uses Amazon Cognito to manage users. When users log in to the application, the application fetches required data from Amazon DynamoDB by using a REST API that is hosted in Amazon API Gateway. The company wants an AWS managed solution that will control access to the REST API to reduce development efforts.

Which solution will meet these requirements with the LEAST operational overhead?

A. Configure an AWS Lambda function to be an authorizer in API Gateway to validate which user made the request.
B. For each user, create and assign an API key that must be sent with each request. Validate the key by using an AWS Lambda function.
C. Send the user's email address in the header with every request. Invoke an AWS Lambda function to validate that the user with that email address has proper access.
D. Configure an Amazon Cognito user pool authorizer in API Gateway to allow Amazon Cognito to validate each request.
Suggested answer: D
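The following is a minimal boto3 sketch of option D: attach a Cognito user pool authorizer to the existing REST API so API Gateway validates the caller's Cognito token on every request, with no custom authorization code. The Region, API ID, and user pool ARN are hypothetical.

```python
import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")    # assumed Region

# API Gateway validates the JWT in the Authorization header against the Cognito user pool.
apigw.create_authorizer(
    restApiId="a1b2c3d4e5",                                    # hypothetical REST API ID
    name="cognito-user-pool-authorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=["arn:aws:cognito-idp:us-east-1:111122223333:userpool/us-east-1_EXAMPLE"],
    identitySource="method.request.header.Authorization",
)
```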

Question 220


A company must migrate 20 TB of data from a data center to the AWS Cloud within 30 days. The company's network bandwidth is limited to 15 Mbps and cannot exceed 70% utilization. What should a solutions architect do to meet these requirements?

A. Use AWS Snowball.
B. Use AWS DataSync.
C. Use a secure VPN connection.
D. Use Amazon S3 Transfer Acceleration.
Suggested answer: A

Explanation:

AWS Snowball is a secure data transport solution for moving large amounts of data into and out of the AWS Cloud by shipping a physical storage device instead of sending the data over the network. A single Snowball Edge Storage Optimized device provides up to 80 TB of usable storage, so one device easily accommodates the 20 TB within the 30-day window. With only 15 Mbps of bandwidth, capped at 70% utilization, an online transfer using DataSync, a VPN, or S3 Transfer Acceleration cannot move 20 TB in time.
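The back-of-the-envelope arithmetic below shows why an online transfer cannot meet the 30-day window, which is what rules out options B, C, and D.

```python
# 20 TB over a 15 Mbps link capped at 70% utilization
data_bits = 20e12 * 8                 # 20 TB expressed in bits
usable_bps = 15e6 * 0.70              # 10.5 Mbps of usable bandwidth
days = data_bits / usable_bps / 86400
print(f"{days:.0f} days")             # roughly 176 days, far beyond the 30-day deadline
```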
