ExamGecko
Amazon SAA-C03 Practice Test - Questions Answers, Page 62

A company uses an organization in AWS Organizations to manage AWS accounts that contain applications. The company sets up a dedicated monitoring member account in the organization. The company wants to query and visualize observability data across the accounts by using Amazon CloudWatch.

Which solution will meet these requirements?

A.
Enable CloudWatch cross-account observability for the monitoring account. Deploy an AWS CloudFormation template provided by the monitoring account in each AWS account to share the data with the monitoring account.
B.
Set up service control policies (SCPs) to provide access to CloudWatch in the monitoring account under the Organizations root organizational unit (OU).
C.
Configure a new IAM user in the monitoring account. In each AWS account, configure an IAM policy to have access to query and visualize the CloudWatch data in the account. Attach the new IAM policy to the new IAM user.
D.
Create a new IAM user in the monitoring account. Create cross-account IAM policies in each AWS account. Attach the IAM policies to the new IAM user.
Suggested answer: A

Explanation:

CloudWatch cross-account observability is a feature that allows you to monitor and troubleshoot applications that span multiple accounts within a Region. You can seamlessly search, visualize, and analyze your metrics, logs, traces, and Application Insights applications in any of the linked accounts without account boundaries.

To enable CloudWatch cross-account observability, you set up one or more AWS accounts as monitoring accounts and link them with multiple source accounts. A monitoring account is a central AWS account that can view and interact with observability data shared by other accounts. A source account is an individual AWS account that shares observability data and resources with one or more monitoring accounts. To create links between monitoring accounts and source accounts, you can use the CloudWatch console, the AWS CLI, or the AWS API. You can also use AWS Organizations to link accounts in an organization or organizational unit to the monitoring account.

CloudWatch provides a CloudFormation template that you can deploy in each source account to share observability data with the monitoring account. The template creates a sink resource in the monitoring account and an observability link resource in the source account, along with the IAM roles and policies needed for cross-account access to the observability data. Therefore, the solution that meets the requirements is to enable CloudWatch cross-account observability for the monitoring account and deploy the CloudFormation template provided by the monitoring account in each AWS account to share the data with the monitoring account.

The other options are not valid because:

Service control policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization's access control guidelines. SCPs do not provide access to CloudWatch in the monitoring account; rather, they restrict the actions that users and roles can perform in the source accounts. SCPs are not required to enable CloudWatch cross-account observability, as the CloudFormation template creates the necessary IAM roles and policies for cross-account access.

IAM users are entities that you create in AWS to represent the people or applications that interact with AWS, and they can have permissions to access the resources in your AWS account. Configuring a new IAM user in the monitoring account and an IAM policy in each AWS account does not enable CloudWatch cross-account observability. The IAM user would have to switch between accounts to view the observability data, which is neither seamless nor efficient, and could not search, visualize, and analyze metrics, logs, traces, and Application Insights applications across multiple accounts in a single place.

Cross-account IAM policies allow you to delegate access to resources in different AWS accounts that you own: you attach a policy to a user or group in one account and specify which accounts that user or group can access. Creating a new IAM user in the monitoring account with cross-account IAM policies in each AWS account has the same drawbacks: it does not enable cross-account observability, requires switching between accounts, and provides no single place to analyze data across accounts.
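The sharing relationship that the CloudFormation template establishes can be illustrated with a sketch of the sink policy in the monitoring account. This is an assumption-laden illustration, not the template's exact output; the organization ID below is a placeholder.

```python
import json

# Hypothetical sink policy for a cross-account observability monitoring
# account: it lets every account in the organization link to this sink and
# share metrics, logs, and traces. The org ID is a placeholder.
sink_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["oam:CreateLink", "oam:UpdateLink"],
            "Resource": "*",
            "Condition": {
                "ForAllValues:StringEquals": {
                    "oam:ResourceTypes": [
                        "AWS::CloudWatch::Metric",
                        "AWS::Logs::LogGroup",
                        "AWS::XRay::Trace",
                    ]
                },
                "StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"},
            },
        }
    ],
}

print(json.dumps(sink_policy, indent=2))
```

The `oam:ResourceTypes` condition is what scopes exactly which telemetry types source accounts may share with the monitoring account.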

A company's web application that is hosted in the AWS Cloud recently increased in popularity. The web application currently exists on a single Amazon EC2 instance in a single public subnet. The web application has not been able to meet the demand of the increased web traffic.

The company needs a solution that will provide high availability and scalability to meet the increased user demand without rewriting the web application.

Which combination of steps will meet these requirements? (Select TWO.)

A.
Replace the EC2 instance with a larger compute optimized instance.
B.
Configure Amazon EC2 Auto Scaling with multiple Availability Zones in private subnets.
C.
Configure a NAT gateway in a public subnet to handle web requests.
D.
Replace the EC2 instance with a larger memory optimized instance.
E.
Configure an Application Load Balancer in a public subnet to distribute web traffic.
Suggested answer: B, E

Explanation:

These two steps will meet the requirements because they will provide high availability and scalability for the web application without rewriting it. Amazon EC2 Auto Scaling allows you to automatically adjust the number of EC2 instances in response to changes in demand. By configuring Auto Scaling with multiple Availability Zones in private subnets, you can ensure that your web application is distributed across isolated and fault-tolerant locations, and that your instances are not directly exposed to the internet. An Application Load Balancer operates at the application layer and distributes incoming web traffic across multiple targets, such as EC2 instances, containers, or Lambda functions. By configuring an Application Load Balancer in a public subnet, you can enable your web application to handle requests from the internet and route them to the appropriate targets in the private subnets.

What is Amazon EC2 Auto Scaling?

What is an Application Load Balancer?
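The scaling behavior described above is most often configured with a target tracking policy, which can be sketched as a proportional rule: desired capacity grows with the ratio of the observed metric to its target, clamped to the group's bounds. This is a simplified illustration of the idea, not the exact Auto Scaling algorithm.

```python
import math

def desired_capacity(current: int, metric: float, target: float,
                     min_size: int, max_size: int) -> int:
    """Approximate target tracking: scale capacity in proportion to how far
    the observed metric (e.g. average CPU %) is from its target value."""
    desired = math.ceil(current * metric / target)
    return max(min_size, min(max_size, desired))

# 4 instances at 90% average CPU with a 50% target -> scale out to 8
print(desired_capacity(4, 90.0, 50.0, min_size=2, max_size=10))  # 8
```

The ALB's health checks then ensure that only instances that pass are kept in rotation while the group scales.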

A financial services company wants to shut down two data centers and migrate more than 100 TB of data to AWS. The data has an intricate directory structure with millions of small files stored in deep hierarchies of subfolders. Most of the data is unstructured, and the company's file storage consists of SMB-based storage types from multiple vendors. The company does not want to change its applications to access the data after migration.

What should a solutions architect do to meet these requirements with the LEAST operational overhead?

A.
Use AWS Direct Connect to migrate the data to Amazon S3.
B.
Use AWS DataSync to migrate the data to Amazon FSx for Lustre.
C.
Use AWS DataSync to migrate the data to Amazon FSx for Windows File Server.
D.
Use AWS Direct Connect to migrate the data from the on-premises file storage to an AWS Storage Gateway volume gateway.
Suggested answer: C

Explanation:

AWS DataSync is a data transfer service that simplifies, automates, and accelerates moving data between on-premises storage systems and AWS storage services over the internet or AWS Direct Connect. AWS DataSync can transfer data to Amazon FSx for Windows File Server, a fully managed file system that is accessible over the industry-standard Server Message Block (SMB) protocol. Amazon FSx for Windows File Server is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration. This solution meets the requirements because:

It can migrate more than 100 TB of data to AWS within a reasonable time frame, as AWS DataSync is optimized for high-speed and efficient data transfer.

It can preserve the intricate directory structure and the millions of small files stored in deep hierarchies of subfolders, as AWS DataSync can handle complex file structures and metadata, such as file names, permissions, and timestamps.

It can avoid changing the applications to access the data after migration, as Amazon FSx for Windows File Server supports the same SMB protocol and Windows Server features that the company's on-premises file storage uses.

It can reduce the operational overhead, as AWS DataSync and Amazon FSx for Windows File Server are fully managed services that handle the tasks of setting up, configuring, and maintaining the data transfer and the file system.
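A DataSync task that preserves file metadata during such a migration might be configured along these lines. The location ARNs are hypothetical placeholders; the `Options` keys follow the shape of the DataSync CreateTask API, but treat the exact values as a sketch rather than a verified configuration.

```python
# Sketch of a DataSync task configuration for an SMB-to-FSx migration.
# ARNs are hypothetical; Options keys follow the DataSync CreateTask API.
datasync_task = {
    "SourceLocationArn": "arn:aws:datasync:us-east-1:111122223333:location/loc-source",
    "DestinationLocationArn": "arn:aws:datasync:us-east-1:111122223333:location/loc-fsx",
    "Options": {
        "VerifyMode": "POINT_IN_TIME_CONSISTENT",  # checksum-verify transferred data
        "OverwriteMode": "ALWAYS",
        "TransferMode": "CHANGED",       # only copy files that differ at destination
        "PreserveDeletedFiles": "PRESERVE",
        "Atime": "BEST_EFFORT",          # keep access timestamps where possible
        "Mtime": "PRESERVE",             # keep modification timestamps
    },
}

print(len(datasync_task["Options"]), "options set")
```

Incremental `TransferMode: CHANGED` runs are what make it practical to cut over after an initial bulk copy of the 100+ TB data set.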

A company has a multi-tier payment processing application that is based on virtual machines (VMs). The communication between the tiers occurs asynchronously through a third-party middleware solution that guarantees exactly-once delivery.

The company needs a solution that requires the least amount of infrastructure management. The solution must guarantee exactly-once delivery for application messaging.

Which combination of actions will meet these requirements? (Select TWO.)

A.
Use AWS Lambda for the compute layers in the architecture.
B.
Use Amazon EC2 instances for the compute layers in the architecture.
C.
Use Amazon Simple Notification Service (Amazon SNS) as the messaging component between the compute layers.
D.
Use Amazon Simple Queue Service (Amazon SQS) FIFO queues as the messaging component between the compute layers.
E.
Use containers that are based on Amazon Elastic Kubernetes Service (Amazon EKS) for the compute layers in the architecture.
Suggested answer: A, D

Explanation:

This solution meets the requirements because it requires the least amount of infrastructure management and guarantees exactly-once delivery for application messaging. AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. You only pay for the compute time you consume, and Lambda scales automatically with the size of your workload. Amazon SQS FIFO queues are designed to ensure that messages are processed exactly once, in the exact order that they are sent. FIFO queues have high availability and deliver messages in a strict first-in, first-out order. You can use Amazon SQS to decouple and scale microservices, distributed systems, and serverless applications. Reference: AWS Lambda, Amazon SQS FIFO queues.
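The exactly-once behavior of a FIFO queue comes from deduplication on a `MessageDeduplicationId` within a five-minute window, plus strict ordering. A minimal in-memory simulation of that contract (a toy model, not the SQS implementation):

```python
from collections import deque

DEDUP_WINDOW_SECONDS = 300  # SQS FIFO deduplication interval: 5 minutes

class FifoQueueSim:
    """Toy model of SQS FIFO semantics: a message whose deduplication ID was
    already seen within the window is accepted but not enqueued again."""
    def __init__(self):
        self._messages = deque()
        self._seen = {}  # dedup_id -> timestamp of last accepted send

    def send(self, body, dedup_id, now):
        last = self._seen.get(dedup_id)
        if last is not None and now - last < DEDUP_WINDOW_SECONDS:
            return False  # duplicate dropped: exactly-once preserved
        self._seen[dedup_id] = now
        self._messages.append(body)
        return True

    def receive(self):
        return self._messages.popleft() if self._messages else None

q = FifoQueueSim()
q.send("charge card #42", dedup_id="payment-42", now=0.0)
q.send("charge card #42", dedup_id="payment-42", now=10.0)  # retry -> deduped
print(q.receive())  # charge card #42
print(q.receive())  # None
```

For a payment workload this matters because a producer retry after a network timeout must not result in a second charge; the deduplication ID absorbs the retry.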

A company has established a new AWS account. The account is newly provisioned and no changes have been made to the default settings. The company is concerned about the security of the AWS account root user.

What should be done to secure the root user?

A.
Create IAM users for daily administrative tasks. Disable the root user.
B.
Create IAM users for daily administrative tasks. Enable multi-factor authentication on the root user.
C.
Generate an access key for the root user. Use the access key for daily administration tasks instead of the AWS Management Console.
D.
Provide the root user credentials to the most senior solutions architect. Have the solutions architect use the root user for daily administration tasks.
Suggested answer: B

Explanation:

This answer is the most secure and recommended option for securing the root user of a new AWS account. The root user is the identity that has complete access to all AWS services and resources in the account. It is accessed by signing in with the email address and password that were used to create the account. To protect the root user credentials from unauthorized use, AWS advises the following best practices:

Create IAM users for daily administrative tasks. IAM users are identities that you create in your account that have specific permissions to access AWS resources. You can create individual IAM users for yourself and for others who need access to your account. You can also assign IAM users to IAM groups that have a set of policies that grant permissions to perform common tasks. By using IAM users instead of the root user, you can follow the principle of least privilege and reduce the risk of compromising your account.

Enable multi-factor authentication (MFA) on the root user. MFA is a security feature that requires users to prove their identity by providing two pieces of information: their password and a code from a device that only they have access to. By enabling MFA on the root user, you can add an extra layer of protection to your account and prevent unauthorized access even if your password is compromised.

Limit the tasks you perform with the root user account. You should use the root user only for tasks that require root user credentials, such as changing your account settings, closing your account, or managing consolidated billing. For a complete list of tasks that require root user credentials, see Tasks that require root user credentials. For all other tasks, you should use IAM users or roles that have the appropriate permissions.

AWS account root user

Root user best practices for your AWS account

Tasks that require root user credentials
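The "code from a device" in MFA is typically TOTP, which is HOTP (RFC 4226) applied to a time-step counter. A compact sketch of the code generation a virtual MFA device performs, verifiable against the RFC 4226 test vectors:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over the counter, dynamically truncated
    to a short decimal code."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # low nibble picks a window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30) -> str:
    """TOTP (RFC 6238): HOTP with a counter derived from the clock."""
    return hotp(secret, unix_time // step)

# RFC 4226 test vector: ASCII secret "12345678901234567890", counter 0
print(hotp(b"12345678901234567890", 0))  # 755224
```

Because the code changes every 30 seconds and depends on a shared secret stored only on the device, a stolen password alone is not enough to sign in as the root user.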

A solutions architect creates a VPC that includes two public subnets and two private subnets. A corporate security mandate requires the solutions architect to launch all Amazon EC2 instances in a private subnet. However, when the solutions architect launches an EC2 instance that runs a web server on ports 80 and 443 in a private subnet, no external internet traffic can connect to the server.

What should the solutions architect do to resolve this issue?

A.
Attach the EC2 instance to an Auto Scaling group in a private subnet. Ensure that the DNS record for the website resolves to the Auto Scaling group identifier.
B.
Provision an internet-facing Application Load Balancer (ALB) in a public subnet. Add the EC2 instance to the target group that is associated with the ALB. Ensure that the DNS record for the website resolves to the ALB.
C.
Launch a NAT gateway in a private subnet. Update the route table for the private subnets to add a default route to the NAT gateway. Attach a public Elastic IP address to the NAT gateway.
D.
Ensure that the security group that is attached to the EC2 instance allows HTTP traffic on port 80 and HTTPS traffic on port 443. Ensure that the DNS record for the website resolves to the public IP address of the EC2 instance.
Suggested answer: B

Explanation:

An Application Load Balancer (ALB) is a type of Elastic Load Balancer (ELB) that distributes incoming application traffic across multiple targets, such as EC2 instances, containers, Lambda functions, and IP addresses, in multiple Availability Zones. An ALB can be internet-facing or internal. An internet-facing ALB has a public DNS name that clients can use to send requests over the internet; an internal ALB has a private DNS name that clients can use to send requests within a VPC. This solution meets the requirements because:

It allows external internet traffic to connect to the web server on ports 80 and 443, as the ALB listens for requests on these ports and forwards them to the EC2 instance in the private subnet.

It does not violate the corporate security mandate, as the EC2 instance is launched in a private subnet and does not have a public IP address or a route to an internet gateway.

It reduces the operational overhead, as the ALB is a fully managed service that handles load balancing, health checking, scaling, and security.
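What makes a subnet "public" here is purely its route table: a default route to an internet gateway. A small helper illustrating the distinction (route entries are simplified from the shapes the VPC API returns; the gateway IDs are placeholders):

```python
def is_public_subnet(routes):
    """A subnet is public when its route table sends 0.0.0.0/0 to an
    internet gateway (igw-*); otherwise there is no inbound internet path."""
    return any(
        r.get("DestinationCidrBlock") == "0.0.0.0/0"
        and r.get("GatewayId", "").startswith("igw-")
        for r in routes
    )

alb_subnet = [
    {"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"},
    {"DestinationCidrBlock": "0.0.0.0/0", "GatewayId": "igw-0abc123"},
]
web_server_subnet = [
    {"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"},
    {"DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": "nat-0def456"},
]
print(is_public_subnet(alb_subnet))         # True  -> place the ALB here
print(is_public_subnet(web_server_subnet))  # False -> EC2 stays private
```

Note the private subnet's default route goes to a NAT gateway, which permits outbound traffic (patches, API calls) but no unsolicited inbound connections; inbound web traffic reaches the instance only through the ALB.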

A company has an application that uses Docker containers in its local data center. The application runs on a container host that stores persistent data in a volume on the host. The container instances use the stored persistent data.

The company wants to move the application to a fully managed service because the company does not want to manage any servers or storage infrastructure.

Which solution will meet these requirements?

A.
Use Amazon Elastic Kubernetes Service (Amazon EKS) with self-managed nodes. Create an Amazon Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 instance. Use the EBS volume as a persistent volume mounted in the containers.
B.
Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create an Amazon Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent storage volume mounted in the containers.
C.
Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create an Amazon S3 bucket. Map the S3 bucket as a persistent storage volume mounted in the containers.
D.
Use Amazon Elastic Container Service (Amazon ECS) with an Amazon EC2 launch type. Create an Amazon Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent storage volume mounted in the containers.
Suggested answer: B

Explanation:

This solution meets the requirements because it allows the company to move the application to a fully managed service without managing any servers or storage infrastructure. AWS Fargate is a serverless compute engine for containers that runs the Amazon ECS tasks. With Fargate, the company does not need to provision, configure, or scale clusters of virtual machines to run containers. Amazon EFS is a fully managed file system that can be accessed by multiple containers concurrently. With EFS, the company does not need to provision and manage storage capacity. EFS provides a simple interface to create and configure file systems quickly and easily. The company can use the EFS volume as a persistent storage volume mounted in the containers to store the persistent data. The company can also use the EFS mount helper to simplify the mounting process. Reference: Amazon ECS on AWS Fargate, Using Amazon EFS file systems with Amazon ECS, Amazon EFS mount helper.
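In the ECS task definition, the EFS file system appears as a task-level volume that containers reference through a mount point. A trimmed sketch of that wiring; the file system ID, image URI, and paths are hypothetical, and field names follow the shape of the RegisterTaskDefinition API:

```python
# Fragment of an ECS Fargate task definition wiring an EFS volume into a
# container. IDs, image URI, and paths are hypothetical placeholders.
task_definition = {
    "family": "container-app",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",
    "cpu": "512",
    "memory": "1024",
    "volumes": [
        {
            "name": "app-data",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-0123456789abcdef0",
                "transitEncryption": "ENABLED",  # encrypt NFS traffic in transit
            },
        }
    ],
    "containerDefinitions": [
        {
            "name": "app",
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/app:latest",
            "mountPoints": [
                {"sourceVolume": "app-data", "containerPath": "/var/app/data"}
            ],
        }
    ],
}

# Sanity check: every mount point must reference a declared volume by name.
volume_names = {v["name"] for v in task_definition["volumes"]}
print(all(m["sourceVolume"] in volume_names
          for c in task_definition["containerDefinitions"]
          for m in c["mountPoints"]))  # True
```

This replaces the host-volume pattern from the data center: because the data lives in EFS rather than on a container host, any Fargate task in any Availability Zone can mount it.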

An ecommerce application uses a PostgreSQL database that runs on an Amazon EC2 instance. During a monthly sales event, database usage increases and causes database connection issues for the application. The traffic is unpredictable for subsequent monthly sales events, which impacts the sales forecast. The company needs to maintain performance when there is an unpredictable increase in traffic.

Which solution resolves this issue in the MOST cost-effective way?

A.
Migrate the PostgreSQL database to Amazon Aurora Serverless v2.
B.
Enable auto scaling for the PostgreSQL database on the EC2 instance to accommodate increased usage.
C.
Migrate the PostgreSQL database to Amazon RDS for PostgreSQL with a larger instance type.
D.
Migrate the PostgreSQL database to Amazon Redshift to accommodate increased usage.
Suggested answer: A

Explanation:

Amazon Aurora Serverless v2 is a cost-effective solution that can automatically scale the database capacity up and down based on the application's needs. It can handle unpredictable traffic spikes without requiring any provisioning or management of database instances. It is compatible with PostgreSQL and offers high performance, availability, and durability. Reference: AWS Ramp-Up Guide: Architect, page 31; AWS Certified Solutions Architect - Associate exam guide, page 9.
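Aurora Serverless v2 capacity is declared as a range of Aurora capacity units (ACUs), and billing follows the ACUs actually consumed rather than a provisioned peak. A back-of-envelope sketch of why that suits spiky monthly events; the per-ACU-hour price is a made-up placeholder, not a quoted AWS rate:

```python
# ServerlessV2ScalingConfiguration in the shape used by the RDS
# CreateDBCluster API: capacity floats between these ACU bounds.
scaling_config = {"MinCapacity": 0.5, "MaxCapacity": 16.0}

ACU_HOUR_PRICE = 0.12  # hypothetical $/ACU-hour, for illustration only

def monthly_cost(acu_hours_used: float) -> float:
    """Serverless v2 bills consumed ACU-hours, not provisioned peak."""
    return round(acu_hours_used * ACU_HOUR_PRICE, 2)

hours = 730  # ~hours in a month
# Quiet month averaging 1 ACU vs. a sales-event month that briefly peaks
# near MaxCapacity but averages 3 ACUs: cost tracks usage, not the peak.
print(monthly_cost(1 * hours))  # 87.6
print(monthly_cost(3 * hours))  # 262.8
```

A fixed larger RDS instance sized for the sales-event peak would instead pay for that peak all month, which is why option A is the more cost-effective choice for unpredictable traffic.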

A company uses AWS Organizations. The company wants to operate some of its AWS accounts with different budgets. The company wants to receive alerts and automatically prevent provisioning of additional resources on AWS accounts when the allocated budget threshold is met during a specific period.

Which combination of solutions will meet these requirements? (Select THREE.)

A.
Use AWS Budgets to create a budget. Set the budget amount under the Cost and Usage Reports section of the required AWS accounts.
B.
Use AWS Budgets to create a budget. Set the budget amount under the Billing dashboards of the required AWS accounts.
C.
Create an IAM user for AWS Budgets to run budget actions with the required permissions.
D.
Create an IAM role for AWS Budgets to run budget actions with the required permissions.
E.
Add an alert to notify the company when each account meets its budget threshold. Add a budget action that selects the IAM identity created with the appropriate config rule to prevent provisioning of additional resources.
F.
Add an alert to notify the company when each account meets its budget threshold. Add a budget action that selects the IAM identity created with the appropriate service control policy (SCP) to prevent provisioning of additional resources.
Suggested answer: B, D, F

Explanation:

To use AWS Budgets to create and manage budgets for different AWS accounts, the company needs to do the following:

Use AWS Budgets to create a budget for each AWS account that needs a different budget amount. The budget can be based on cost or usage metrics, and can have different time periods, filters, and thresholds. The company can set the budget amount under the Billing dashboards of the required AWS accounts [1].

Create an IAM role for AWS Budgets to run budget actions with the required permissions. A budget action is a response that AWS Budgets initiates when a budget exceeds a specified threshold. The IAM role allows AWS Budgets to perform actions on behalf of the company, such as applying an IAM policy or a service control policy (SCP) to restrict the provisioning of additional resources [2].

Add an alert to notify the company when each account meets its budget threshold. The alert can be sent via email or Amazon SNS. The company can also add a budget action that selects the IAM role created and the appropriate SCP to prevent provisioning of additional resources. An SCP is a type of policy that can be applied to an AWS account or an organizational unit (OU) within AWS Organizations, and it can limit the actions that users and roles can perform in the account or OU [3].

[1] https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/budgets-create.html

[2] https://docs.aws.amazon.com/cost-management/latest/userguide/budgets-controls.html

[3] https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html

[4] https://aws.amazon.com/budgets/
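The budget and its alert threshold can be sketched with shapes similar to those the AWS Budgets CreateBudget API accepts. The budget name, amount, and threshold here are hypothetical examples:

```python
# Hypothetical budget definition in the shape of the Budgets CreateBudget API.
budget = {
    "BudgetName": "dev-account-monthly",
    "BudgetType": "COST",
    "TimeUnit": "MONTHLY",
    "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
}

def threshold_breached(actual_spend, budget, threshold_pct=100.0):
    """Fire the alert (and the budget action that applies the restrictive
    SCP) once actual spend reaches the configured percentage of the limit."""
    limit = float(budget["BudgetLimit"]["Amount"])
    return actual_spend >= limit * threshold_pct / 100.0

print(threshold_breached(850.0, budget, threshold_pct=80.0))   # True
print(threshold_breached(850.0, budget, threshold_pct=100.0))  # False
```

An 80% alert threshold gives the account owners warning before the 100% budget action cuts off further provisioning.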

A gaming company wants to launch a new internet-facing application in multiple AWS Regions. The application will use the TCP and UDP protocols for communication. The company needs to provide high availability and minimum latency for global users.

Which combination of actions should a solutions architect take to meet these requirements? (Select TWO.)

A.
Create internal Network Load Balancers in front of the application in each Region.
B.
Create external Application Load Balancers in front of the application in each Region.
C.
Create an AWS Global Accelerator accelerator to route traffic to the load balancers in each Region.
D.
Configure Amazon Route 53 to use a geolocation routing policy to distribute the traffic.
E.
Configure Amazon CloudFront to handle the traffic and route requests to the application in each Region.
Suggested answer: B, C

Explanation:

This combination of actions will provide high availability and minimum latency for global users by using AWS Global Accelerator and Application Load Balancers. AWS Global Accelerator is a networking service that helps you improve the availability, performance, and security of your internet-facing applications by using the AWS global network. It provides two global static public IPs that act as a fixed entry point to your application endpoints, such as Application Load Balancers, in multiple Regions. Global Accelerator uses the AWS backbone network to route traffic to the optimal regional endpoint based on health, client location, and policies that you configure. It also offers TCP and UDP support, traffic encryption, and DDoS protection. Application Load Balancers are external load balancers that distribute incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones. They support both HTTP and HTTPS (SSL/TLS) protocols and offer advanced features such as content-based routing, health checks, and integration with other AWS services. By creating external Application Load Balancers in front of the application in each Region, you can ensure that the application can handle varying load patterns and scale on demand. By creating an AWS Global Accelerator accelerator to route traffic to the load balancers in each Region, you can leverage the performance, security, and availability of the AWS global network to deliver the best possible user experience.
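Global Accelerator's routing decision can be approximated as: among healthy regional endpoints, send the client to the one with the lowest network latency. This is a deliberately simplified model; real routing also applies traffic dials and endpoint weights, and the Regions and latencies below are made-up examples.

```python
def pick_endpoint(endpoints):
    """Simplified Global Accelerator routing: among endpoints passing
    health checks, choose the one with the lowest client latency."""
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        return None
    return min(healthy, key=lambda e: e["latency_ms"])["region"]

endpoints = [
    {"region": "us-east-1", "latency_ms": 90, "healthy": True},
    {"region": "eu-west-1", "latency_ms": 25, "healthy": False},  # failed checks
    {"region": "ap-northeast-1", "latency_ms": 40, "healthy": True},
]
print(pick_endpoint(endpoints))  # ap-northeast-1
```

Because the two static anycast IPs never change, failover between Regions happens inside this selection step with no DNS propagation delay, which is what gives the high availability the question asks for.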

Total 886 questions