
Amazon SAA-C03 Practice Test - Questions Answers, Page 32


Question 311


A company runs an application that receives data from thousands of geographically dispersed remote devices that use UDP. The application processes the data immediately and sends a message back to the device if necessary. No data is stored.

The company needs a solution that minimizes latency for the data transmission from the devices. The solution also must provide rapid failover to another AWS Region. Which solution will meet these requirements?

A. Configure an Amazon Route 53 failover routing policy. Create a Network Load Balancer (NLB) in each of the two Regions. Configure the NLB to invoke an AWS Lambda function to process the data.
B. Use AWS Global Accelerator. Create a Network Load Balancer (NLB) in each of the two Regions as an endpoint. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for the NLB. Process the data in Amazon ECS.
C. Use AWS Global Accelerator. Create an Application Load Balancer (ALB) in each of the two Regions as an endpoint. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for the ALB. Process the data in Amazon ECS.
D. Configure an Amazon Route 53 failover routing policy. Create an Application Load Balancer (ALB) in each of the two Regions. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for the ALB. Process the data in Amazon ECS.
Suggested answer: B

Explanation:

To meet the requirements of minimizing latency for data transmission from the devices and providing rapid failover to another AWS Region, the best solution is to use AWS Global Accelerator in combination with a Network Load Balancer (NLB) and Amazon Elastic Container Service (Amazon ECS). AWS Global Accelerator improves the availability and performance of applications by using static anycast IP addresses to route traffic to optimal AWS endpoints. With Global Accelerator, UDP traffic from the devices enters the AWS global network at the nearest edge location, and the accelerator continuously checks endpoint health so that traffic shifts to the NLB in the other Region within seconds if the primary endpoint becomes unhealthy. An NLB is required because it supports UDP listeners, which an Application Load Balancer does not.
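As a rough illustration of option B, the following sketch shows how an accelerator with a UDP listener and one endpoint group per Region might be created with boto3. The accelerator name, device port, Regions, and NLB ARNs are placeholders, not values from the question.

import boto3

# Global Accelerator API calls must be made against the us-west-2 endpoint.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="device-ingest",          # hypothetical name
    IpAddressType="IPV4",
    Enabled=True,
)["Accelerator"]

listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 4000, "ToPort": 4000}],   # example device port
)["Listener"]

# One endpoint group per Region, each pointing at that Region's NLB (placeholder ARNs).
for region, nlb_arn in [
    ("us-east-1", "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/primary/abc"),
    ("eu-west-1", "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/secondary/def"),
]:
    ga.create_endpoint_group(
        ListenerArn=listener["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
        HealthCheckProtocol="TCP",   # endpoint health checks drive the cross-Region failover
    )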



Question 312


An ecommerce company is running a multi-tier application on AWS. The front-end and backend tiers run on Amazon EC2, and the database runs on Amazon RDS for MySQL. The backend tier communicates with the RDS instance. Frequent calls that return identical datasets from the database are causing performance slowdowns. Which action should be taken to improve the performance of the backend?

A. Implement Amazon SNS to store the database calls.
B. Implement Amazon ElastiCache to cache the large datasets.
C. Implement an RDS for MySQL read replica to cache database calls.
D. Implement Amazon Kinesis Data Firehose to stream the calls to the database.
Suggested answer: B
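Option B works because ElastiCache can serve the repeated, identical result sets from memory instead of hitting MySQL every time. A minimal cache-aside sketch, assuming an ElastiCache for Redis endpoint and a stub query function (both hypothetical, not part of the question):

import json
import redis  # redis-py client

# Hypothetical ElastiCache for Redis endpoint.
cache = redis.Redis(host="my-cluster.xxxxxx.use1.cache.amazonaws.com", port=6379)

def fetch_dataset_from_mysql(query_id):
    # Stand-in for the real RDS for MySQL query; returns a serializable result.
    return {"query_id": query_id, "rows": []}

def get_dataset(query_id, ttl_seconds=300):
    key = f"dataset:{query_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                      # cache hit: no database call
    result = fetch_dataset_from_mysql(query_id)
    cache.setex(key, ttl_seconds, json.dumps(result))  # cache miss: populate with a TTL
    return result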

Question 313


A hospital is designing a new application that gathers symptoms from patients. The hospital has decided to use Amazon Simple Queue Service (Amazon SQS) and Amazon Simple Notification Service (Amazon SNS) in the architecture. A solutions architect is reviewing the infrastructure design. Data must be encrypted at rest and in transit. Only authorized personnel of the hospital should be able to access the data. Which combination of steps should the solutions architect take to meet these requirements? (Select TWO.)

A. Turn on server-side encryption on the SQS components. Update the default key policy to restrict key usage to a set of authorized principals.
B. Turn on server-side encryption on the SNS components by using an AWS Key Management Service (AWS KMS) customer managed key. Apply a key policy to restrict key usage to a set of authorized principals.
C. Turn on encryption on the SNS components. Update the default key policy to restrict key usage to a set of authorized principals. Set a condition in the topic policy to allow only encrypted connections over TLS.
D. Turn on server-side encryption on the SQS components by using an AWS Key Management Service (AWS KMS) customer managed key. Apply a key policy to restrict key usage to a set of authorized principals. Set a condition in the queue policy to allow only encrypted connections over TLS.
E. Turn on server-side encryption on the SQS components by using an AWS Key Management Service (AWS KMS) customer managed key. Apply an IAM policy to restrict key usage to a set of authorized principals. Set a condition in the queue policy to allow only encrypted connections over TLS.
Suggested answer: B, D
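To illustrate the SQS half of answer D, the sketch below (with a hypothetical KMS key ARN and queue name) creates a queue encrypted with a customer managed key and attaches a queue policy that denies any request not made over TLS. Restricting who may use the key is done separately in the key policy in AWS KMS.

import json
import boto3

sqs = boto3.client("sqs")

# Create the queue with server-side encryption using a customer managed KMS key (hypothetical ARN).
queue_url = sqs.create_queue(
    QueueName="patient-symptoms",
    Attributes={"KmsMasterKeyId": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"},
)["QueueUrl"]

# Queue policy condition: reject any access that is not over an encrypted (TLS) connection.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "sqs:*",
        "Resource": "*",
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)})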

Question 314


A solutions architect is creating a new VPC design. There are two public subnets for the load balancer, two private subnets for web servers, and two private subnets for MySQL. The web servers use only HTTPS. The solutions architect has already created a security group for the load balancer allowing port 443 from 0.0.0.0/0. Company policy requires that each resource has the least access required to still be able to perform its tasks.

Which additional configuration strategy should the solutions architect use to meet these requirements?

A. Create a security group for the web servers and allow port 443 from 0.0.0.0/0. Create a security group for the MySQL servers and allow port 3306 from the web servers' security group.
B. Create a network ACL for the web servers and allow port 443 from 0.0.0.0/0. Create a network ACL for the MySQL servers and allow port 3306 from the web servers' security group.
C. Create a security group for the web servers and allow port 443 from the load balancer. Create a security group for the MySQL servers and allow port 3306 from the web servers' security group.
D. Create a network ACL for the web servers and allow port 443 from the load balancer. Create a network ACL for the MySQL servers and allow port 3306 from the web servers' security group.
Suggested answer: C
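Answer C chains security groups so that each tier accepts traffic only from the tier in front of it. A minimal sketch, assuming hypothetical VPC and load balancer security group IDs:

import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"      # hypothetical VPC
lb_sg_id = "sg-0aaa1111bbbb2222c"     # existing load balancer SG (443 from 0.0.0.0/0)

web_sg_id = ec2.create_security_group(
    GroupName="web-servers", Description="Web tier", VpcId=vpc_id)["GroupId"]
db_sg_id = ec2.create_security_group(
    GroupName="mysql-servers", Description="DB tier", VpcId=vpc_id)["GroupId"]

# Web servers: HTTPS only from the load balancer's security group.
ec2.authorize_security_group_ingress(
    GroupId=web_sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "UserIdGroupPairs": [{"GroupId": lb_sg_id}],
    }],
)

# MySQL servers: port 3306 only from the web servers' security group.
ec2.authorize_security_group_ingress(
    GroupId=db_sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": web_sg_id}],
    }],
)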

Question 315


A company wants to use Amazon S3 for the secondary copy of its on-premises dataset. The company would rarely need to access this copy. The storage solution’s cost should be minimal. Which storage solution meets these requirements?

A. S3 Standard
B. S3 Intelligent-Tiering
C. S3 Standard-Infrequent Access (S3 Standard-IA)
D. S3 One Zone-Infrequent Access (S3 One Zone-IA)
Suggested answer: D

Explanation:

S3 One Zone-IA stores data in a single Availability Zone at a lower cost than S3 Standard-IA. Because this is a secondary copy that can be recreated from the on-premises dataset and is rarely accessed, the reduced resilience of a single Availability Zone is acceptable, making it the lowest-cost option that fits the requirements.

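For reference, the copy can be written directly into this storage class at upload time; a brief sketch with a hypothetical bucket, key, and local file:

import boto3

s3 = boto3.client("s3")

# Upload the secondary copy straight into S3 One Zone-IA (hypothetical bucket/key).
with open("dataset.tar.gz", "rb") as body:
    s3.put_object(
        Bucket="example-secondary-copy-bucket",
        Key="backups/dataset-copy.tar.gz",
        Body=body,
        StorageClass="ONEZONE_IA",
    )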

Question 316


A solutions architect is designing a two-tiered architecture that includes a public subnet and a database subnet. The web servers in the public subnet must be open to the internet on port 443. The Amazon RDS for MySQL DB instance in the database subnet must be accessible only to the web servers on port 3306.

Which combination of steps should the solutions architect take to meet these requirements? (Select TWO.)

A. Create a network ACL for the public subnet. Add a rule to deny outbound traffic to 0.0.0.0/0 on port 3306.
B. Create a security group for the DB instance. Add a rule to allow traffic from the public subnet CIDR block on port 3306.
C. Create a security group for the web servers in the public subnet. Add a rule to allow traffic from 0.0.0.0/0 on port 443.
D. Create a security group for the DB instance. Add a rule to allow traffic from the web servers' security group on port 3306.
E. Create a security group for the DB instance. Add a rule to deny all traffic except traffic from the web servers' security group on port 3306.
Suggested answer: B, C

Explanation:

Security groups are virtual firewalls that protect AWS resources and can be applied to EC2 instances, load balancers, and RDS instances. Security groups have rules for inbound and outbound traffic and are stateful, meaning that responses to allowed inbound traffic are automatically allowed to flow out of the instance. Network ACLs differ from security groups in several ways: they apply to entire subnets rather than individual instances, they are stateless (requiring rules for both inbound and outbound traffic), and they support deny rules, while security groups support only allow rules. To meet the requirements of the scenario, the solutions architect should create two security groups: one for the DB instance and one for the web servers in the public subnet. The security group for the DB instance should allow traffic from the public subnet CIDR block on port 3306, the default port for MySQL; this way, only resources in the public subnet (the web servers) can reach the DB instance on that port. The security group for the web servers should allow traffic from 0.0.0.0/0 on port 443, the default port for HTTPS, so the web servers can accept secure connections from the internet.
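A short sketch of answers B and C, using CIDR-based rules rather than the security group references shown for the previous question (VPC ID, group names, and public subnet range are hypothetical):

import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0abc123def4567890"     # hypothetical
public_subnet_cidr = "10.0.1.0/24"   # hypothetical public subnet range

web_sg = ec2.create_security_group(
    GroupName="web-tier", Description="Web servers", VpcId=vpc_id)["GroupId"]
db_sg = ec2.create_security_group(
    GroupName="db-tier", Description="RDS for MySQL", VpcId=vpc_id)["GroupId"]

# Answer C: web servers accept HTTPS from the internet.
ec2.authorize_security_group_ingress(
    GroupId=web_sg,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                    "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)

# Answer B: DB instance accepts MySQL traffic only from the public subnet CIDR block.
ec2.authorize_security_group_ingress(
    GroupId=db_sg,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
                    "IpRanges": [{"CidrIp": public_subnet_cidr}]}],
)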




Question 317


A company has an Amazon S3 data lake that is governed by AWS Lake Formation. The company wants to create a visualization in Amazon QuickSight by joining the data in the data lake with operational data that is stored in an Amazon Aurora MySQL database. The company wants to enforce column-level authorization so that the company's marketing team can access only a subset of columns in the database. Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon EMR to ingest the data directly from the database to the QuickSight SPICE engine. Include only the required columns.
B. Use AWS Glue Studio to ingest the data from the database to the S3 data lake. Attach an IAM policy to the QuickSight users to enforce column-level access control. Use Amazon S3 as the data source in QuickSight.
C. Use AWS Glue Elastic Views to create a materialized view for the database in Amazon S3. Create an S3 bucket policy to enforce column-level access control for the QuickSight users. Use Amazon S3 as the data source in QuickSight.
D. Use a Lake Formation blueprint to ingest the data from the database to the S3 data lake. Use Lake Formation to enforce column-level access control for the QuickSight users. Use Amazon Athena as the data source in QuickSight.
Suggested answer: D
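For answer D, the column-level restriction is granted in Lake Formation rather than in IAM or bucket policies. A sketch of such a grant, where the database, table, column names, and principal are hypothetical:

import boto3

lf = boto3.client("lakeformation")

# Allow the marketing team's role to SELECT only the approved columns of the ingested table.
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/marketing-analysts"},
    Resource={
        "TableWithColumns": {
            "DatabaseName": "operational_data",
            "Name": "customers",
            "ColumnNames": ["customer_id", "region", "signup_date"],
        }
    },
    Permissions=["SELECT"],
)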

Question 318


A company has an application that collects data from IoT sensors on automobiles. The data is streamed and stored in Amazon S3 through Amazon Kinesis Data Firehose. The data produces trillions of S3 objects each year. Each morning, the company uses the data from the previous 30 days to retrain a suite of machine learning (ML) models. Four times each year, the company uses the data from the previous 12 months to perform analysis and train other ML models. The data must be available with minimal delay for up to 1 year. After 1 year, the data must be retained for archival purposes.

Which storage solution meets these requirements MOST cost-effectively?

A. Use the S3 Intelligent-Tiering storage class. Create an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 1 year.
B. Use the S3 Intelligent-Tiering storage class. Configure S3 Intelligent-Tiering to automatically move objects to S3 Glacier Deep Archive after 1 year.
C. Use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 1 year.
D. Use the S3 Standard storage class. Create an S3 Lifecycle policy to transition objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days, and then to S3 Glacier Deep Archive after 1 year.
Suggested answer: D
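Answer D can be expressed as a single lifecycle rule with two transitions. A sketch against a hypothetical bucket name:

import boto3

s3 = boto3.client("s3")

# Objects start in S3 Standard (the default), move to Standard-IA after 30 days,
# and to Glacier Deep Archive after 365 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-iot-sensor-data",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-down-sensor-data",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},   # apply to every object in the bucket
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }]
    },
)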

Question 319


A company recently deployed a new auditing system to centralize information about operating system versions, patching, and installed software for Amazon EC2 instances. A solutions architect must ensure that all instances provisioned through EC2 Auto Scaling groups successfully send reports to the auditing system as soon as they are launched and terminated. Which solution achieves these goals MOST efficiently?

A. Use a scheduled AWS Lambda function and run a script remotely on all EC2 instances to send data to the audit system.
B. Use EC2 Auto Scaling lifecycle hooks to run a custom script to send data to the audit system when instances are launched and terminated.
C. Use an EC2 Auto Scaling launch configuration to run a custom script through user data to send data to the audit system when instances are launched and terminated.
D. Run a custom script on the instance operating system to send data to the audit system. Configure the script to be invoked by the EC2 Auto Scaling group when the instance starts and is terminated.
Suggested answer: B
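Answer B attaches lifecycle hooks to the Auto Scaling group so that launch and terminate events pause the instance long enough for a reporting script to run. A sketch of registering the two hooks, where the group name, hook names, and timeout are hypothetical:

import boto3

autoscaling = boto3.client("autoscaling")

# Hook fired when an instance launches: the instance stays in Pending:Wait
# until the reporting script completes or the timeout expires.
autoscaling.put_lifecycle_hook(
    AutoScalingGroupName="app-asg",
    LifecycleHookName="report-on-launch",
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=300,
    DefaultResult="CONTINUE",
)

# Hook fired when an instance terminates: Terminating:Wait gives the script
# time to send its final report before the instance is removed.
autoscaling.put_lifecycle_hook(
    AutoScalingGroupName="app-asg",
    LifecycleHookName="report-on-terminate",
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=300,
    DefaultResult="CONTINUE",
)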

Question 320


A company has launched an Amazon RDS for MySQL DB instance. Most of the connections to the database come from serverless applications. Application traffic to the database changes significantly at random intervals. At times of high demand, users report that their applications experience database connection rejection errors.

Which solution will resolve this issue with the LEAST operational overhead?

A. Create a proxy in RDS Proxy. Configure the users' applications to use the DB instance through RDS Proxy.
B. Deploy Amazon ElastiCache for Memcached between the users' applications and the DB instance.
C. Migrate the DB instance to a different instance class that has higher I/O capacity. Configure the users' applications to use the new DB instance.
D. Configure Multi-AZ for the DB instance. Configure the users' applications to switch between the DB instances.
Suggested answer: A
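RDS Proxy pools and shares database connections, which absorbs the bursty open/close pattern of serverless clients without changes to the DB instance itself. A sketch of creating a proxy for the instance, where the proxy name, Secrets Manager secret, IAM role, subnets, and instance identifier are all hypothetical:

import boto3

rds = boto3.client("rds")

# The proxy authenticates to MySQL with credentials stored in Secrets Manager
# and is placed in the same VPC subnets as the DB instance.
rds.create_db_proxy(
    DBProxyName="mysql-app-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:mysql-app-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secrets-access",
    VpcSubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],
    RequireTLS=True,
)

# Register the DB instance as the proxy target, then point the applications
# at the proxy endpoint instead of the instance endpoint.
rds.register_db_proxy_targets(
    DBProxyName="mysql-app-proxy",
    DBInstanceIdentifiers=["app-mysql-instance"],
)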