
Amazon SAP-C02 Practice Test - Questions Answers, Page 42


A company uses an organization in AWS Organizations to manage multiple AWS accounts. The company hosts some applications in a VPC in the company's shared services account. The company has attached a transit gateway to the VPC in the shared services account.

The company is developing a new capability and has created a development environment that requires access to the applications that are in the shared services account. The company intends to delete and recreate resources frequently in the development account. The company also wants to give a development team the ability to recreate the team's connection to the shared services account as required.

Which solution will meet these requirements?

A.
Create a transit gateway in the development account. Create a transit gateway peering request to the shared services account. Configure the shared services transit gateway to automatically accept peering connections.
B.
Turn on automatic acceptance for the transit gateway in the shared services account. Use AWS Resource Access Manager (AWS RAM) to share the transit gateway resource in the shared services account with the development account. Accept the resource in the development account. Create a transit gateway attachment in the development account.
C.
Turn on automatic acceptance for the transit gateway in the shared services account. Create a VPC endpoint. Use the endpoint policy to grant permissions on the VPC endpoint for the development account. Configure the endpoint service to automatically accept connection requests. Provide the endpoint details to the development team.
D.
Create an Amazon EventBridge rule to invoke an AWS Lambda function that accepts the transit gateway attachment when the development account makes an attachment request. Use AWS Network Manager to share the transit gateway in the shared services account with the development account. Accept the transit gateway in the development account.
Suggested answer: B

Explanation:

For a development environment that requires frequent resource recreation and connectivity to applications hosted in a shared services account, the most efficient solution involves using AWS Resource Access Manager (RAM) and the transit gateway in the shared services account. By turning on automatic acceptance for the transit gateway in the shared services account and sharing it with the development account through AWS RAM, the development team can easily recreate their connection as needed without manual intervention. This setup allows for scalable, flexible connectivity between accounts while minimizing operational overhead and ensuring consistent access to shared services.
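As an illustrative sketch of the setup that answer B describes (not from the source; all names, account IDs, and ARNs below are placeholders), the shared services account might define the transit gateway and the AWS RAM share in CloudFormation like this:

```yaml
# Shared services account (placeholder names and account IDs)
Resources:
  SharedTgw:
    Type: AWS::EC2::TransitGateway
    Properties:
      AutoAcceptSharedAttachments: enable        # attachments from the dev account are accepted automatically
  TgwShare:
    Type: AWS::RAM::ResourceShare
    Properties:
      Name: shared-services-tgw
      ResourceArns:
        - !Sub arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:transit-gateway/${SharedTgw}
      Principals:
        - '111122223333'                         # development account ID (placeholder)
```

The development account then only needs an AWS::EC2::TransitGatewayAttachment that references the shared gateway ID and its own VPC and subnets, which the team can delete and recreate as often as needed.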

A company deploys workloads in multiple AWS accounts. Each account has a VPC with VPC flow logs published in text log format to a centralized Amazon S3 bucket. Each log file is compressed with gzip compression. The company must retain the log files indefinitely.

A security engineer occasionally analyzes the logs by using Amazon Athena to query the VPC flow logs. The query performance is degrading over time as the number of ingested logs grows. A solutions architect must improve the performance of the log analysis and reduce the storage space that the VPC flow logs use.

Which solution will meet these requirements with the LARGEST performance improvement?

A.
Create an AWS Lambda function to decompress the gzip files and to compress the files with bzip2 compression. Subscribe the Lambda function to an s3:ObjectCreated:Put S3 event notification for the S3 bucket.
B.
Enable S3 Transfer Acceleration for the S3 bucket. Create an S3 Lifecycle configuration to move files to the S3 Intelligent-Tiering storage class as soon as the files are uploaded.
C.
Update the VPC flow log configuration to store the files in Apache Parquet format. Specify hourly partitions for the log files.
D.
Create a new Athena workgroup without data usage control limits. Use Athena engine version 2.
Suggested answer: C

Explanation:

Converting VPC flow logs to store in Apache Parquet format and specifying hourly partitions significantly improves query performance and reduces storage space usage. Apache Parquet is a columnar storage file format optimized for analytical queries, allowing Athena to scan less data and improve query performance. Partitioning logs by hour further enhances query efficiency by limiting the amount of data scanned during queries, addressing the issue of degrading performance over time due to the growing volume of ingested logs.
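A minimal CloudFormation sketch of the flow log configuration that answer C describes (the VPC ID and bucket ARN are placeholders):

```yaml
Resources:
  VpcFlowLog:
    Type: AWS::EC2::FlowLog
    Properties:
      ResourceId: vpc-0abc1234                         # placeholder VPC ID
      ResourceType: VPC
      TrafficType: ALL
      LogDestinationType: s3
      LogDestination: arn:aws:s3:::central-flow-logs   # placeholder bucket ARN
      DestinationOptions:
        FileFormat: parquet               # columnar format; Athena reads only the needed columns
        PerHourPartition: true            # hourly partitions limit the data scanned per query
        HiveCompatiblePartitions: false
```

Parquet files are also compressed more efficiently than gzipped text, which addresses the storage-space requirement.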

A public retail web application uses an Application Load Balancer (ALB) in front of Amazon EC2 instances running across multiple Availability Zones (AZs) in a Region backed by an Amazon RDS MySQL Multi-AZ deployment. Target group health checks are configured to use HTTP and pointed at the product catalog page. Auto Scaling is configured to maintain the web fleet size based on the ALB health check.

Recently, the application experienced an outage. Auto Scaling continuously replaced the instances during the outage. A subsequent investigation determined that the web server metrics were within the normal range, but the database tier was experiencing high load, resulting in severely elevated query response times.

Which of the following changes together would remediate these issues while improving monitoring capabilities for the availability and functionality of the entire application stack for future growth? (Select TWO.)

A.
Configure read replicas for Amazon RDS MySQL and use the single reader endpoint in the web application to reduce the load on the backend database tier.
B.
Configure the target group health check to point at a simple HTML page instead of a product catalog page and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails.
C.
Configure the target group health check to use a TCP check of the Amazon EC2 web server and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails.
D.
Configure an Amazon CloudWatch alarm for Amazon RDS with an action to recover a high-load, impaired RDS instance in the database tier.
E.
Configure an Amazon ElastiCache cluster and place it between the web application and RDS MySQL instances to reduce the load on the backend database tier.
Suggested answer: A, E

Explanation:

Configuring read replicas for Amazon RDS MySQL and using the single reader endpoint in the web application can significantly reduce the load on the backend database tier, improving overall application performance. Additionally, implementing an Amazon ElastiCache cluster between the web application and RDS MySQL instances can further reduce database load by caching frequently accessed data, thereby enhancing the application's resilience and scalability. These changes address the root cause of the outage by alleviating the database tier's high load and preventing similar issues in the future.
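A hedged CloudFormation sketch of the two components in answers A and E (instance identifiers and sizes are placeholders, not from the source):

```yaml
Resources:
  ReadReplica:
    Type: AWS::RDS::DBInstance
    Properties:
      SourceDBInstanceIdentifier: prod-mysql      # placeholder: the existing Multi-AZ primary
      DBInstanceClass: db.r6g.large               # placeholder instance class
  WebCache:
    Type: AWS::ElastiCache::CacheCluster
    Properties:
      Engine: redis
      CacheNodeType: cache.r6g.large              # placeholder node type
      NumCacheNodes: 1
```

Reads go to the replica's reader endpoint and hot items are served from the cache, so the primary instance handles mostly writes.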


A company has many services running in its on-premises data center. The data center is connected to AWS using AWS Direct Connect (DX) and an IPsec VPN. The service data is sensitive, and connectivity cannot traverse the internet. The company wants to expand to a new market segment and begin offering its services to other companies that are using AWS.

Which solution will meet these requirements?

A.
Create a VPC Endpoint Service that accepts TCP traffic, host it behind a Network Load Balancer, and make the service available over DX.
B.
Create a VPC Endpoint Service that accepts HTTP or HTTPS traffic, host it behind an Application Load Balancer, and make the service available over DX.
C.
Attach an internet gateway to the VPC, and ensure that network access control and security group rules allow the relevant inbound and outbound traffic.
D.
Attach a NAT gateway to the VPC, and ensure that network access control and security group rules allow the relevant inbound and outbound traffic.
Suggested answer: B

Explanation:

To offer services to other companies using AWS without traversing the internet, creating a VPC Endpoint Service hosted behind an Application Load Balancer (ALB) and making it available over AWS Direct Connect (DX) is the most suitable solution. This approach ensures that the service traffic remains within the AWS network, adhering to the requirement that connectivity must not traverse the internet. An ALB is capable of handling HTTP/HTTPS traffic, making it appropriate for web-based services. Utilizing DX for connectivity between the on-premises data center and AWS further secures and optimizes the network path.

AWS Direct Connect Documentation: Explains how to set up DX for private connectivity between AWS and an on-premises network.

Amazon VPC Endpoint Services (AWS PrivateLink) Documentation: Provides details on creating and configuring endpoint services for private, secure access to services hosted in AWS.

AWS Application Load Balancer Documentation: Offers guidance on configuring ALBs to distribute HTTP/HTTPS traffic efficiently.
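One implementation detail worth noting for answer B: a VPC endpoint service (AWS PrivateLink) is created against a Network Load Balancer, so in practice the ALB typically sits behind an NLB (for example, as an ALB-type target group). A hedged sketch with placeholder ARNs:

```yaml
Resources:
  EndpointService:
    Type: AWS::EC2::VPCEndpointService
    Properties:
      NetworkLoadBalancerArns:
        - arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/svc-nlb/abc123  # placeholder
      AcceptanceRequired: false           # consumer connection requests are accepted automatically
  EndpointServicePermissions:
    Type: AWS::EC2::VPCEndpointServicePermissions
    Properties:
      ServiceId: !Ref EndpointService
      AllowedPrincipals:
        - arn:aws:iam::444455556666:root  # placeholder consumer account
```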

A company is developing an application that will display financial reports. The company needs a solution that can store financial information that comes from multiple systems. The solution must provide the reports through a web interface and must serve the data with less than 500 milliseconds of latency to end users. The solution also must be highly available and must have an RTO of 30 seconds.

Which solution will meet these requirements?

A.
Use an Amazon Redshift cluster to store the data. Use a static website that is hosted on Amazon S3 with backend APIs that are served by an Amazon Elastic Kubernetes Service (Amazon EKS) cluster to provide the reports to the application.
B.
Use Amazon S3 to store the data. Use Amazon Athena to provide the reports to the application. Use AWS App Runner to serve the application to view the reports.
C.
Use Amazon DynamoDB to store the data. Use an embedded Amazon QuickSight dashboard with direct query datasets to provide the reports to the application.
D.
Use Amazon Keyspaces (for Apache Cassandra) to store the data. Use AWS Elastic Beanstalk to provide the reports to the application.
Suggested answer: C

Explanation:

For an application requiring low-latency access to financial information and high availability with a Recovery Time Objective (RTO) of 30 seconds, using Amazon DynamoDB for data storage and Amazon QuickSight for reporting is the most suitable solution. DynamoDB offers fast, consistent, and single-digit millisecond latency for data retrieval, meeting the latency requirements. QuickSight's ability to directly query DynamoDB datasets and provide embedded dashboards for reporting enables real-time financial report generation. This combination ensures high availability and meets the RTO requirement, providing a robust solution for the application's needs.

Amazon DynamoDB Documentation: Describes the features and benefits of DynamoDB, emphasizing its performance and scalability for applications requiring low-latency access to data.

Amazon QuickSight Documentation: Provides information on using QuickSight for creating and embedding interactive dashboards, including direct querying of DynamoDB datasets for real-time data visualization.
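As a small illustration of the storage side of answer C (table and attribute names are placeholders), an on-demand DynamoDB table avoids capacity planning while keeping single-digit-millisecond reads:

```yaml
Resources:
  FinancialReportsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      BillingMode: PAY_PER_REQUEST      # no provisioned capacity to manage
      AttributeDefinitions:
        - AttributeName: reportId       # placeholder partition key
          AttributeType: S
      KeySchema:
        - AttributeName: reportId
          KeyType: HASH
```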

A company that is developing a mobile game is making game assets available in two AWS Regions. Game assets are served from a set of Amazon EC2 instances behind an Application Load Balancer (ALB) in each Region. The company requires game assets to be fetched from the closest Region. If game assets become unavailable in the closest Region, they should be fetched from the other Region.

What should a solutions architect do to meet these requirements?

A.
Create an Amazon CloudFront distribution. Create an origin group with one origin for each ALB. Set one of the origins as primary.
B.
Create an Amazon Route 53 health check for each ALB. Create a Route 53 failover routing record pointing to the two ALBs. Set the Evaluate Target Health value to Yes.
C.
Create two Amazon CloudFront distributions, each with one ALB as the origin. Create an Amazon Route 53 failover routing record pointing to the two CloudFront distributions. Set the Evaluate Target Health value to Yes.
D.
Create an Amazon Route 53 health check for each ALB. Create a Route 53 latency alias record pointing to the two ALBs. Set the Evaluate Target Health value to Yes.
Suggested answer: A

Explanation:

To ensure that game assets are fetched from the closest Region and have a fallback option in case the assets become unavailable in the closest Region, a solutions architect should leverage Amazon CloudFront, a global content delivery network (CDN) service. By creating an Amazon CloudFront distribution and setting up origin groups, the architect can specify multiple origins (in this case, the Application Load Balancers in each Region). The primary origin will serve content under normal circumstances, and if the content becomes unavailable, CloudFront will automatically switch to the secondary origin. This approach not only meets the requirement of regional proximity and redundancy but also optimizes latency and enhances the gaming experience by serving assets from the nearest geographical location to the end user.
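An abridged DistributionConfig fragment showing the origin-group failover that answer A describes (domain names and IDs are placeholders; cache and certificate settings are omitted):

```yaml
Origins:
  - Id: primary-alb
    DomainName: alb-use1.elb.amazonaws.com        # placeholder: ALB in the first Region
    CustomOriginConfig:
      OriginProtocolPolicy: https-only
  - Id: secondary-alb
    DomainName: alb-euw1.elb.amazonaws.com        # placeholder: ALB in the second Region
    CustomOriginConfig:
      OriginProtocolPolicy: https-only
OriginGroups:
  Quantity: 1
  Items:
    - Id: alb-failover-group
      FailoverCriteria:
        StatusCodes:                              # responses that trigger failover to the secondary
          Quantity: 3
          Items: [500, 502, 503]
      Members:
        Quantity: 2
        Items:
          - OriginId: primary-alb
          - OriginId: secondary-alb
DefaultCacheBehavior:
  TargetOriginId: alb-failover-group              # route requests through the origin group
  ViewerProtocolPolicy: redirect-to-https
```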

A large company is migrating its entire IT portfolio to AWS. Each business unit in the company has a standalone AWS account that supports both development and test environments. New accounts to support production workloads will be needed soon.

The finance department requires a centralized method for payment but must maintain visibility into each group's spending to allocate costs.

The security team requires a centralized mechanism to control IAM usage in all the company's accounts.

Which combination of the following options meets the company's needs with the LEAST effort? (Select TWO.)

A.
Use a collection of parameterized AWS CloudFormation templates defining common IAM permissions that are launched into each account. Require all new and existing accounts to launch the appropriate stacks to enforce the least privilege model.
B.
Use AWS Organizations to create a new organization from a chosen payer account and define an organizational unit hierarchy. Invite the existing accounts to join the organization and create new accounts using Organizations.
C.
Require each business unit to use its own AWS accounts. Tag each AWS account appropriately and enable Cost Explorer to administer chargebacks.
D.
Enable all features of AWS Organizations and establish appropriate service control policies that filter IAM permissions for sub-accounts.
E.
Consolidate all of the company's AWS accounts into a single AWS account. Use tags for billing purposes and the IAM Access Advisor feature to enforce the least privilege model.
Suggested answer: B, D

Explanation:

Option B is correct because AWS Organizations allows a company to create a new organization from a chosen payer account and define an organizational unit hierarchy. This way, the finance department can have a centralized method for payment but also maintain visibility into each group's spending to allocate costs. The company can also invite the existing accounts to join the organization and create new accounts using Organizations, which simplifies the account management process.

Option D is correct because enabling all features of AWS Organizations and establishing appropriate service control policies (SCPs) that filter IAM permissions for sub-accounts allows the security team to have a centralized mechanism to control IAM usage in all the company's accounts. SCPs are policies that specify the maximum permissions for an organization or organizational unit (OU), and they can be used to restrict access to certain services or actions across all accounts in an organization.

Option A is incorrect because using a collection of parameterized AWS CloudFormation templates defining common IAM permissions that are launched into each account requires more effort than using SCPs. Moreover, it does not provide a centralized mechanism to control IAM usage, as each account would have to launch the appropriate stacks to enforce the least privilege model.

Option C is incorrect because requiring each business unit to use its own AWS accounts does not provide a centralized method for payment or a centralized mechanism to control IAM usage. Tagging each AWS account appropriately and enabling Cost Explorer to administer chargebacks may help with cost allocation, but it is not as efficient as using AWS Organizations.

Option E is incorrect because consolidating all of the company's AWS accounts into a single AWS account does not provide visibility into each group's spending or a way to control IAM usage for different business units. Using tags for billing purposes and the IAM's Access Advisor feature to enforce the least privilege model may help with cost optimization and security, but it is not as scalable or flexible as using AWS Organizations.

AWS Organizations

Service Control Policies

AWS CloudFormation

Cost Explorer

IAM Access Advisor
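A sketch of the kind of SCP answer D refers to, expressed as a CloudFormation resource (the OU ID and the denied actions are illustrative assumptions, not from the source):

```yaml
Resources:
  RestrictIamScp:
    Type: AWS::Organizations::Policy
    Properties:
      Name: restrict-iam-usage
      Type: SERVICE_CONTROL_POLICY
      TargetIds:
        - ou-abcd-11111111                # placeholder OU to attach the policy to
      Content:
        Version: '2012-10-17'
        Statement:
          - Effect: Deny                  # SCPs cap permissions; they never grant them
            Action:
              - iam:CreateUser
              - iam:CreateAccessKey
            Resource: '*'
```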

A company that provisions job boards for a seasonal workforce is seeing an increase in traffic and usage. The backend services run on a pair of Amazon EC2 instances behind an Application Load Balancer with Amazon DynamoDB as the datastore. Application read and write traffic is slow during peak seasons.

Which option provides a scalable application architecture to handle peak seasons with the LEAST development effort?

A.
Migrate the backend services to AWS Lambda. Increase the read and write capacity of DynamoDB.
B.
Migrate the backend services to AWS Lambda. Configure DynamoDB to use global tables.
C.
Use Auto Scaling groups for the backend services. Use DynamoDB auto scaling.
D.
Use Auto Scaling groups for the backend services. Use Amazon Simple Queue Service (Amazon SQS) and an AWS Lambda function to write to DynamoDB.
Suggested answer: C

Explanation:

Option C is correct because using Auto Scaling groups for the backend services allows the company to scale up or down the number of EC2 instances based on the demand and traffic. This way, the backend services can handle more requests during peak seasons without compromising performance or availability. Using DynamoDB auto scaling allows the company to adjust the provisioned read and write capacity of the table or index automatically based on the actual traffic patterns. This way, the table or index can handle sudden increases or decreases in workload without throttling or overprovisioning.

Option A is incorrect because migrating the backend services to AWS Lambda may require significant development effort to rewrite the code and test the functionality. Moreover, increasing the read and write capacity of DynamoDB manually may not be efficient or cost-effective, as it does not account for the variability of the workload. The company may end up paying for unused capacity or experiencing throttling if the workload exceeds the provisioned capacity.

Option B is incorrect because migrating the backend services to AWS Lambda may require significant development effort to rewrite the code and test the functionality. Moreover, configuring DynamoDB to use global tables may not be necessary or beneficial for the company, as global tables are mainly used for replicating data across multiple AWS Regions for fast local access and disaster recovery. Global tables do not automatically scale the provisioned capacity of each replica table; they still require manual or auto scaling settings.

Option D is incorrect because using Amazon Simple Queue Service (Amazon SQS) and an AWS Lambda function to write to DynamoDB may introduce additional complexity and latency to the application architecture. Amazon SQS is a message queue service that decouples and coordinates the components of a distributed system. AWS Lambda is a serverless compute service that runs code in response to events. Using these services may require significant development effort to integrate them with the backend services and DynamoDB. Moreover, they may not improve the read performance of DynamoDB, which may also be affected by high traffic.

Auto Scaling groups

DynamoDB auto scaling

AWS Lambda

DynamoDB global tables

AWS Lambda vs EC2: Comparison of AWS Compute Resources - Simform

Managing throughput capacity automatically with DynamoDB auto scaling - Amazon DynamoDB

AWS Aurora Global Database vs. DynamoDB Global Tables

Amazon Simple Queue Service (SQS)
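A minimal sketch of the DynamoDB auto scaling half of answer C (table name, role ARN, and capacity limits are placeholders); write capacity would get a matching pair of resources:

```yaml
Resources:
  ReadScalableTarget:
    Type: AWS::ApplicationAutoScaling::ScalableTarget
    Properties:
      ServiceNamespace: dynamodb
      ResourceId: table/jobs                      # placeholder table name
      ScalableDimension: dynamodb:table:ReadCapacityUnits
      MinCapacity: 5
      MaxCapacity: 500
      RoleARN: arn:aws:iam::111122223333:role/ddb-autoscale  # placeholder service role
  ReadScalingPolicy:
    Type: AWS::ApplicationAutoScaling::ScalingPolicy
    Properties:
      PolicyName: jobs-read-tracking
      PolicyType: TargetTrackingScaling
      ScalingTargetId: !Ref ReadScalableTarget
      TargetTrackingScalingPolicyConfiguration:
        PredefinedMetricSpecification:
          PredefinedMetricType: DynamoDBReadCapacityUtilization
        TargetValue: 70.0                 # keep consumed capacity near 70% of provisioned
```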

A company is migrating to the cloud. It wants to evaluate the configurations of virtual machines in its existing data center environment to ensure that it can size new Amazon EC2 instances accurately. The company wants to collect metrics, such as CPU, memory, and disk utilization, and it needs an inventory of what processes are running on each instance. The company would also like to monitor network connections to map communications between servers.

Which would enable the collection of this data MOST cost effectively?

A.
Use AWS Application Discovery Service and deploy the data collection agent to each virtual machine in the data center.
B.
Configure the Amazon CloudWatch agent on all servers within the local environment and publish metrics to Amazon CloudWatch Logs.
C.
Use AWS Application Discovery Service and enable agentless discovery in the existing virtualization environment.
D.
Enable AWS Application Discovery Service in the AWS Management Console and configure the corporate firewall to allow scans over a VPN.
Suggested answer: A

Explanation:

The AWS Application Discovery Service can help plan migration projects by collecting data about on-premises servers, such as configuration, performance, and network connections. The data collection agent is lightweight software that can be installed on each server to gather this information. This option is more cost-effective than agentless discovery, which requires deploying a virtual appliance in the VMware environment, or using the CloudWatch agent, which incurs additional charges for CloudWatch Logs. Scanning the servers over a VPN is not a valid option for AWS Application Discovery Service. Reference: What is AWS Application Discovery Service?; Data collection methods.

A company has a website that runs on four Amazon EC2 instances that are behind an Application Load Balancer (ALB). When the ALB detects that an EC2 instance is no longer available, an Amazon CloudWatch alarm enters the ALARM state. A member of the company's operations team then manually adds a new EC2 instance behind the ALB.

A solutions architect needs to design a highly available solution that automatically handles the replacement of EC2 instances. The company needs to minimize downtime during the switch to the new solution.

Which set of steps should the solutions architect take to meet these requirements?

A.
Delete the existing ALB. Create an Auto Scaling group that is configured to handle the web application traffic. Attach a new launch template to the Auto Scaling group. Create a new ALB. Attach the Auto Scaling group to the new ALB. Attach the existing EC2 instances to the Auto Scaling group.
B.
Create an Auto Scaling group that is configured to handle the web application traffic. Attach a new launch template to the Auto Scaling group. Attach the Auto Scaling group to the existing ALB. Attach the existing EC2 instances to the Auto Scaling group.
C.
Delete the existing ALB and the EC2 instances. Create an Auto Scaling group that is configured to handle the web application traffic. Attach a new launch template to the Auto Scaling group. Create a new ALB. Attach the Auto Scaling group to the new ALB. Wait for the Auto Scaling group to launch the minimum number of EC2 instances.
D.
Create an Auto Scaling group that is configured to handle the web application traffic. Attach a new launch template to the Auto Scaling group. Attach the Auto Scaling group to the existing ALB. Wait for the existing ALB to register the existing EC2 instances with the Auto Scaling group.
Suggested answer: B

Explanation:

The Auto Scaling group can automatically launch and terminate EC2 instances based on the demand and health of the web application. The launch template can specify the configuration of the EC2 instances, such as the AMI, instance type, security group, and user data. The existing ALB can distribute the traffic to the EC2 instances in the Auto Scaling group. The existing EC2 instances can be attached to the Auto Scaling group without deleting them or the ALB. This option minimizes downtime and preserves the current setup of the web application. Reference: What is Amazon EC2 Auto Scaling?; Launch templates; Attach a load balancer to your Auto Scaling group; Attach EC2 instances to your Auto Scaling group.
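A hedged sketch of the Auto Scaling group in answer B, attached to the existing ALB's target group (all IDs and ARNs below are placeholders):

```yaml
Resources:
  WebAsg:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: '4'
      MaxSize: '8'
      VPCZoneIdentifier:
        - subnet-0aaa1111                 # placeholder subnets across AZs
        - subnet-0bbb2222
      LaunchTemplate:
        LaunchTemplateId: lt-0abc1234     # placeholder launch template
        Version: '1'
      TargetGroupARNs:
        - arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc  # existing ALB target group (placeholder)
      HealthCheckType: ELB                # replace instances the ALB reports unhealthy
```

The four instances already running can then be brought under management with the `aws autoscaling attach-instances` CLI command, avoiding downtime during the switch.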

Total 492 questions