
Amazon SAP-C02 Practice Test - Questions Answers, Page 46


A solutions architect has deployed a web application that serves users across two AWS Regions under a custom domain. The application uses Amazon Route 53 latency-based routing. The solutions architect has associated weighted record sets with a pair of web servers in separate Availability Zones for each Region.

The solutions architect runs a disaster recovery scenario. When all the web servers in one Region are stopped, Route 53 does not automatically redirect users to the other Region.

Which of the following are possible root causes of this issue? (Select TWO.)

A. The weight for the Region where the web servers were stopped is higher than the weight for the other Region.
B. One of the web servers in the secondary Region did not pass its HTTP health check.
C. Latency resource record sets cannot be used in combination with weighted resource record sets.
D. The setting to evaluate target health is not turned on for the latency alias resource record set that is associated with the domain in the Region where the web servers were stopped.
E. An HTTP health check has not been set up for one or more of the weighted resource record sets associated with the stopped web servers.
Suggested answer: D, E

Explanation:

Evaluate Target Health Setting:

Ensure that the 'Evaluate Target Health' setting is enabled for the latency alias resource record sets in Route 53. This setting helps Route 53 determine the health of the resources associated with the alias record and redirect traffic appropriately.

HTTP Health Checks:

Configure HTTP health checks for all weighted resource record sets. Health checks monitor the availability and performance of the web servers, allowing Route 53 to reroute traffic to healthy servers in case of a failure.

Verify that the health checks are correctly set up and associated with the resource record sets. This ensures that Route 53 can detect server failures and redirect traffic to the servers in the other Region.

By enabling the 'Evaluate Target Health' setting and configuring HTTP health checks, Route 53 can effectively manage traffic during failover scenarios, ensuring high availability and reliability.
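
As a concrete illustration, the following is a minimal boto3 sketch of both fixes. The hosted zone ID, record names, IP address, and health check path are placeholders, not values from the question:

```python
import boto3

route53 = boto3.client("route53")

ZONE_ID = "Z0123456789EXAMPLE"     # placeholder hosted zone
WEB_SERVER_IP = "203.0.113.10"     # placeholder web server address

# Option E: an HTTP health check, one per web server behind a weighted record.
health_check_id = route53.create_health_check(
    CallerReference="web-1-us-east-1",
    HealthCheckConfig={
        "Type": "HTTP",
        "IPAddress": WEB_SERVER_IP,
        "Port": 80,
        "ResourcePath": "/health",   # assumed health endpoint
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)["HealthCheck"]["Id"]

route53.change_resource_record_sets(
    HostedZoneId=ZONE_ID,
    ChangeBatch={"Changes": [
        # Weighted record tied to its health check.
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "us-east-1.example.com", "Type": "A",
            "SetIdentifier": "web-1", "Weight": 50, "TTL": 60,
            "HealthCheckId": health_check_id,
            "ResourceRecords": [{"Value": WEB_SERVER_IP}],
        }},
        # Option D: latency alias record with EvaluateTargetHealth enabled.
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com", "Type": "A",
            "SetIdentifier": "us-east-1", "Region": "us-east-1",
            "AliasTarget": {
                "HostedZoneId": ZONE_ID,
                "DNSName": "us-east-1.example.com",
                "EvaluateTargetHealth": True,  # without this, no failover
            },
        }},
    ]},
)
```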

Reference

AWS Route 53 Documentation on Latency-Based Routing

AWS Architecture Blog on Cross-Account and Cross-Region Setup

A company creates an AWS Control Tower landing zone to manage and govern a multi-account AWS environment. The company's security team will deploy preventive controls and detective controls to monitor AWS services across all the accounts. The security team needs a centralized view of the security state of all the accounts.

Which solution will meet these requirements?

A. From the AWS Control Tower management account, use AWS CloudFormation StackSets to deploy an AWS Config conformance pack to all accounts in the organization.
B. Enable Amazon Detective for the organization in AWS Organizations. Designate one AWS account as the delegated administrator for Detective.
C. From the AWS Control Tower management account, deploy an AWS CloudFormation stack set that uses the automatic deployment option to enable Amazon Detective for the organization.
D. Enable AWS Security Hub for the organization in AWS Organizations. Designate one AWS account as the delegated administrator for Security Hub.
Suggested answer: D

Explanation:

Enable AWS Security Hub:

Navigate to the AWS Security Hub console in your management account and enable Security Hub. This process integrates Security Hub with AWS Control Tower, allowing you to manage and monitor security findings across all accounts within your organization.

Designate a Delegated Administrator:

In AWS Organizations, designate one of the AWS accounts as the delegated administrator for Security Hub. This account will have the responsibility to manage and oversee the security posture of all accounts within the organization.

Deploy Controls Across Accounts:

Use AWS Security Hub to automatically enable security controls across all AWS accounts in the organization. This provides a centralized view of the security state of all accounts and ensures continuous monitoring and compliance.

Utilize AWS Security Hub Features:

Leverage the capabilities of Security Hub to aggregate security alerts, run continuous security checks, and generate findings based on the AWS Foundational Security Best Practices. Security Hub integrates with other AWS services like AWS Config, Amazon GuardDuty, and AWS IAM Access Analyzer to enhance security monitoring and remediation.

By integrating AWS Security Hub with AWS Control Tower and using a delegated administrator account, you can achieve a centralized and comprehensive view of your organization's security posture, facilitating effective management and remediation of security issues.
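
A brief boto3 sketch of the two API calls involved; the account ID and the `security-admin` profile name are placeholders:

```python
import boto3

# From the AWS Organizations management account: designate the delegated
# administrator account for Security Hub (account ID is a placeholder).
boto3.client("securityhub").enable_organization_admin_account(
    AdminAccountId="111122223333"
)

# From the delegated administrator account: auto-enable Security Hub in
# every current and future member account in the organization.
admin = boto3.Session(profile_name="security-admin").client("securityhub")
admin.update_organization_configuration(AutoEnable=True)
```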

Reference

AWS Security Hub now integrates with AWS Control Tower

AWS Control Tower and Security Hub Integration

AWS Security Hub Features

A software as a service (SaaS) company provides a media software solution to customers. The solution is hosted on 50 VPCs across various AWS Regions and AWS accounts. One of the VPCs is designated as a management VPC. The compute resources in the VPCs work independently.

The company has developed a new feature that requires all 50 VPCs to be able to communicate with each other. The new feature also requires one-way access from each customer's VPC to the company's management VPC. The management VPC hosts a compute resource that validates licenses for the media software solution.

The number of VPCs that the company will use to host the solution will continue to increase as the solution grows.

Which combination of steps will provide the required VPC connectivity with the LEAST operational overhead? (Select TWO.)

A. Create a transit gateway. Attach all the company's VPCs and relevant subnets to the transit gateway.
B. Create VPC peering connections between all the company's VPCs.
C. Create a Network Load Balancer (NLB) that points to the compute resource for license validation. Create an AWS PrivateLink endpoint service that is available to each customer's VPC. Associate the endpoint service with the NLB.
D. Create a VPN appliance in each customer's VPC. Connect the company's management VPC to each customer's VPC by using AWS Site-to-Site VPN.
E. Create a VPC peering connection between the company's management VPC and each customer's VPC.
Suggested answer: A, C

Explanation:

Create a Transit Gateway:

Step 1: In the AWS Management Console, navigate to the VPC Dashboard.

Step 2: Select 'Transit Gateways' and click on 'Create Transit Gateway'.

Step 3: Configure the transit gateway by providing a name and setting the options for Amazon side ASN and VPN ECMP support as needed.

Step 4: Attach each of the company's VPCs and relevant subnets to the transit gateway. This centralizes the network management and simplifies the routing configurations, supporting scalable and flexible network architecture.

Set Up AWS PrivateLink:

Step 1: Create a Network Load Balancer (NLB) in the management VPC that points to the compute resource responsible for license validation.

Step 2: Create an AWS PrivateLink endpoint service pointing to this NLB.

Step 3: Allow each customer's VPC to create an interface endpoint to this PrivateLink service. This setup enables secure and private communication between the customer VPCs and the management VPC, ensuring one-way access from each customer's VPC to the management VPC for license validation.

This combination leverages the benefits of AWS Transit Gateway for scalable and centralized routing, and AWS PrivateLink for secure and private service access, meeting the requirement with minimal operational overhead.
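
A minimal boto3 sketch of both pieces, assuming placeholder VPC, subnet, and NLB identifiers (sharing the transit gateway across accounts would additionally go through AWS RAM):

```python
import boto3

ec2 = boto3.client("ec2")

# Option A: one transit gateway shared by all of the company's VPCs.
tgw_id = ec2.create_transit_gateway(
    Description="media-solution-hub",
    Options={"AmazonSideAsn": 64512, "DefaultRouteTableAssociation": "enable"},
)["TransitGateway"]["TransitGatewayId"]

# Repeat per VPC (and share the gateway across accounts with AWS RAM).
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0abc0abc0abc0abc0",             # placeholder
    SubnetIds=["subnet-0123456789abcdef0"],    # placeholder
)

# Option C: expose the license validator through PrivateLink via the NLB.
service_name = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/net/license-nlb/0123456789abcdef"  # placeholder ARN
    ],
    AcceptanceRequired=False,
)["ServiceConfiguration"]["ServiceName"]

# Each customer VPC then creates an interface endpoint to that service.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0customer000000000",            # placeholder
    ServiceName=service_name,
    SubnetIds=["subnet-0customer000000000"],   # placeholder
)
```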

Reference

Amazon VPC-to-Amazon VPC Connectivity Options

AWS PrivateLink - Building a Scalable and Secure Multi-VPC AWS Network Infrastructure

Connecting Your VPC to Other VPCs and Networks Using a Transit Gateway

A solutions architect is creating an AWS CloudFormation template from an existing manually created non-production AWS environment. The CloudFormation template can be destroyed and recreated as needed. The environment contains an Amazon EC2 instance. The EC2 instance has an instance profile that the EC2 instance uses to assume a role in a parent account.

The solutions architect recreates the role in a CloudFormation template and uses the same role name. When the CloudFormation template is launched in the child account, the EC2 instance can no longer assume the role in the parent account because of insufficient permissions.

What should the solutions architect do to resolve this issue?

A. In the parent account, edit the trust policy for the role that the EC2 instance needs to assume. Ensure that the target role ARN in the existing statement that allows the sts:AssumeRole action is correct. Save the trust policy.
B. In the parent account, edit the trust policy for the role that the EC2 instance needs to assume. Add a statement that allows the sts:AssumeRole action for the root principal of the child account. Save the trust policy.
C. Update the CloudFormation stack again. Specify only the CAPABILITY_NAMED_IAM capability.
D. Update the CloudFormation stack again. Specify the CAPABILITY_IAM capability and the CAPABILITY_NAMED_IAM capability.
Suggested answer: A

Explanation:

Edit the Trust Policy:

Go to the IAM console in the parent account and locate the role that the EC2 instance needs to assume.

Edit the trust policy of the role to ensure that it correctly allows the sts:AssumeRole action for the role ARN in the child account.

Update the Role ARN:

Verify that the target role ARN specified in the trust policy matches the role ARN created by the CloudFormation stack in the child account.

If necessary, update the ARN to reflect the correct role in the child account.

Save and Test:

Save the updated trust policy and ensure there are no syntax errors.

Test the setup by attempting to assume the role from the EC2 instance in the child account. Verify that the instance can successfully assume the role and perform the required actions.

This ensures that the EC2 instance in the child account can assume the role in the parent account, resolving the permission issue. Note that when a role is deleted and recreated with the same name, trust policies that referenced it still point to the old role's unique principal ID, so the trust policy must be re-saved even though the ARN text looks unchanged.
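
A short boto3 sketch of the fix, run with parent-account credentials; the account ID and role names are placeholders:

```python
import boto3
import json

iam = boto3.client("iam")  # credentials for the parent account

# Account ID and role names are placeholders. The ARN text is unchanged,
# but re-saving the policy binds it to the recreated role's new unique ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::222233334444:role/app-instance-role"},
        "Action": "sts:AssumeRole",
    }],
}

iam.update_assume_role_policy(
    RoleName="parent-shared-role",
    PolicyDocument=json.dumps(trust_policy),
)
```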

Reference

AWS IAM Documentation on Trust Policies

A company is planning to migrate an application from on premises to the AWS Cloud. The company will begin the migration by moving the application's underlying data storage to AWS. The application data is stored on a shared file system on premises, and the application servers connect to the shared file system through SMB.

A solutions architect must implement a solution that uses an Amazon S3 bucket for shared storage. Until the application is fully migrated and code is rewritten to use native Amazon S3 APIs, the application must continue to have access to the data through SMB. The solutions architect must migrate the application data to AWS to its new location while still allowing the on-premises application to access the data.

Which solution will meet these requirements?

A. Create a new Amazon FSx for Windows File Server file system. Configure AWS DataSync with one location for the on-premises file share and one location for the new Amazon FSx file system. Create a new DataSync task to copy the data from the on-premises file share location to the Amazon FSx file system.
B. Create an S3 bucket for the application. Copy the data from the on-premises storage to the S3 bucket.
C. Deploy an AWS Server Migration Service (AWS SMS) VM to the on-premises environment. Use AWS SMS to migrate the file storage server from on premises to an Amazon EC2 instance.
D. Create an S3 bucket for the application. Deploy a new AWS Storage Gateway file gateway on an on-premises VM. Create a new file share that stores data in the S3 bucket and is associated with the file gateway. Copy the data from the on-premises storage to the new file gateway endpoint.
Suggested answer: D

Explanation:

Create an S3 Bucket:

Log in to the AWS Management Console and navigate to Amazon S3.

Create a new S3 bucket that will serve as the destination for the application data.

Deploy AWS Storage Gateway:

Download and deploy the AWS Storage Gateway virtual machine (VM) on your on-premises environment. This VM can be deployed on VMware ESXi, Microsoft Hyper-V, or Linux KVM.

Configure the File Gateway:

Configure the deployed Storage Gateway as a file gateway. This will enable it to present Amazon S3 buckets as SMB file shares to your on-premises applications.

Create a New File Share:

Within the Storage Gateway configuration, create a new file share that is associated with the S3 bucket you created earlier. This file share will use the SMB protocol, allowing your on-premises applications to access the S3 bucket as if it were a local SMB file share.

Copy Data to the File Gateway:

Use your preferred method (such as robocopy, rsync, or similar tools) to copy data from the on-premises storage to the newly created file gateway endpoint. This data will be stored in the S3 bucket, maintaining accessibility through SMB.

Ensure Secure and Efficient Data Transfer:

AWS Storage Gateway ensures that all data in transit is encrypted using TLS, providing secure data transfer to AWS. It also provides local caching for frequently accessed data, improving access performance for on-premises applications.

This approach allows your existing on-premises applications to continue accessing data via SMB while leveraging the scalability and durability of Amazon S3.
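
A minimal boto3 sketch of the file share creation step, assuming the gateway VM has already been deployed and activated; the gateway ARN, IAM role, and bucket name are placeholders:

```python
import boto3

sgw = boto3.client("storagegateway")

# Create an SMB file share backed by the new S3 bucket.
sgw.create_smb_file_share(
    ClientToken="app-share-1",  # idempotency token
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12345678",
    Role="arn:aws:iam::111122223333:role/StorageGatewayS3Access",
    LocationARN="arn:aws:s3:::app-shared-storage",  # the destination S3 bucket
    Authentication="ActiveDirectory",  # or "GuestAccess" for non-domain access
    DefaultStorageClass="S3_STANDARD",
)
```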

Reference

AWS Storage Gateway Overview

AWS DataSync and Storage Gateway Hybrid Architecture

AWS S3 File Gateway Details

A company has developed a new release of a popular video game and wants to make it available for public download. The new release package is approximately 5 GB in size. The company provides downloads for existing releases from a Linux-based publicly facing FTP site hosted in an on-premises data center. The company expects the new release will be downloaded by users worldwide. The company wants a solution that provides improved download performance and low transfer costs regardless of a user's location.

Which solution will meet these requirements?

A. Store the game files on Amazon EBS volumes mounted on Amazon EC2 instances within an Auto Scaling group. Configure an FTP service on the EC2 instances. Use an Application Load Balancer in front of the Auto Scaling group. Publish the game download URL for users to download the package.
B. Store the game files on Amazon EFS volumes that are attached to Amazon EC2 instances within an Auto Scaling group. Configure an FTP service on each of the EC2 instances. Use an Application Load Balancer in front of the Auto Scaling group. Publish the game download URL for users to download the package.
C. Configure Amazon Route 53 and an Amazon S3 bucket for website hosting. Upload the game files to the S3 bucket. Use Amazon CloudFront for the website. Publish the game download URL for users to download the package.
D. Configure Amazon Route 53 and an Amazon S3 bucket for website hosting. Upload the game files to the S3 bucket. Set Requester Pays for the S3 bucket. Publish the game download URL for users to download the package.
Suggested answer: C

Explanation:

Create an S3 Bucket:

Navigate to Amazon S3 in the AWS Management Console and create a new S3 bucket to store the game files. Enable static website hosting on this bucket.

Upload Game Files:

Upload the 5 GB game release package to the S3 bucket. Ensure that the files are publicly accessible if required for download.

Configure Amazon Route 53:

Set up a new domain or subdomain in Amazon Route 53 and point it to the S3 bucket. This allows users to access the game files using a custom URL.

Use Amazon CloudFront:

Create a CloudFront distribution with the S3 bucket as the origin. CloudFront is a content delivery network (CDN) that caches content at edge locations worldwide, improving download performance and reducing latency for users regardless of their location.

Publish the Download URL:

Use the CloudFront distribution URL as the download link for users to access the game files. CloudFront will handle the efficient distribution and caching of the content.

This solution leverages the scalability of Amazon S3 and the performance benefits of CloudFront to provide an optimal download experience for users globally while minimizing costs.
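
A compact boto3 sketch of the CloudFront step; the bucket name and comment are placeholders, and a production setup would normally use Origin Access Control rather than a publicly readable bucket:

```python
import boto3
import time

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),
    "Comment": "game release downloads",
    "Enabled": True,
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "game-bucket",
        "DomainName": "game-downloads.s3.amazonaws.com",
        "S3OriginConfig": {"OriginAccessIdentity": ""},
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "game-bucket",
        "ViewerProtocolPolicy": "redirect-to-https",
        # ID of the AWS managed "CachingOptimized" cache policy.
        "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
    },
})
```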

Reference

Amazon CloudFront Documentation

Amazon S3 Static Website Hosting

A company needs to use an AWS Transfer Family SFTP-enabled server with an Amazon S3 bucket to receive updates from a third-party data supplier. The data is encrypted with Pretty Good Privacy (PGP) encryption. The company needs a solution that will automatically decrypt the data after the company receives the data.

A solutions architect will use a Transfer Family managed workflow. The company has created an IAM service role by using an IAM policy that allows access to AWS Secrets Manager and the S3 bucket. The role's trust relationship allows the transfer.amazonaws.com service to assume the role.

What should the solutions architect do next to complete the solution for automatic decryption?

A. Store the PGP public key in Secrets Manager. Add a nominal step in the Transfer Family managed workflow to decrypt files. Configure PGP encryption parameters in the nominal step. Associate the workflow with the Transfer Family server.
B. Store the PGP private key in Secrets Manager. Add an exception-handling step in the Transfer Family managed workflow to decrypt files. Configure PGP encryption parameters in the exception handler. Associate the workflow with the SFTP user.
C. Store the PGP private key in Secrets Manager. Add a nominal step in the Transfer Family managed workflow to decrypt files. Configure PGP decryption parameters in the nominal step. Associate the workflow with the Transfer Family server.
D. Store the PGP public key in Secrets Manager. Add an exception-handling step in the Transfer Family managed workflow to decrypt files. Configure PGP decryption parameters in the exception handler. Associate the workflow with the SFTP user.
Suggested answer: C

Explanation:

Store the PGP Private Key:

Step 1: In the AWS Management Console, navigate to AWS Secrets Manager.

Step 2: Store the PGP private key in Secrets Manager. Ensure the key is encrypted and properly secured.

Set Up the Transfer Family Managed Workflow:

Step 1: In the AWS Transfer Family console, create a new managed workflow.

Step 2: Add a nominal step to the workflow that includes the decryption of the files. Configure this step with the PGP decryption parameters, referencing the PGP private key stored in Secrets Manager.

Step 3: Associate this workflow with the Transfer Family SFTP server, ensuring that incoming files are automatically decrypted upon receipt.

This solution ensures that the data is securely decrypted as it is transferred from the SFTP server to the S3 bucket, automating the decryption process and leveraging AWS Secrets Manager for key management.
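
A boto3 sketch of the workflow setup under these assumptions: Transfer Family reads the PGP private key from a Secrets Manager secret named in the documented aws/transfer/server-id/@pgp-default form, and the server ID, bucket, and role ARN below are placeholders:

```python
import boto3

transfer = boto3.client("transfer")

# Nominal (not exception-handling) step of type DECRYPT.
workflow_id = transfer.create_workflow(
    Description="pgp-decrypt-on-upload",
    Steps=[{
        "Type": "DECRYPT",
        "DecryptStepDetails": {
            "Name": "pgp-decrypt",
            "Type": "PGP",
            "SourceFileLocation": "${original.file}",
            "DestinationFileLocation": {
                "S3FileLocation": {"Bucket": "supplier-updates", "Key": "decrypted/"}
            },
        },
    }],
)["WorkflowId"]

# Associate the workflow and the IAM service role with the SFTP server.
transfer.update_server(
    ServerId="s-1234567890abcdef0",
    WorkflowDetails={"OnUpload": [{
        "WorkflowId": workflow_id,
        "ExecutionRole": "arn:aws:iam::111122223333:role/transfer-workflow-role",
    }]},
)
```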

Reference

AWS Transfer Family Documentation

Using AWS Secrets Manager for Managing Secrets

AWS Transfer Family Managed Workflows

A software as a service (SaaS) company has developed a multi-tenant environment. The company uses Amazon DynamoDB tables that the tenants share for the storage layer. The company uses AWS Lambda functions for the application services.

The company wants to offer a tiered subscription model that is based on resource consumption by each tenant. Each tenant is identified by a unique tenant ID that is sent as part of each request to the Lambda functions. The company has created an AWS Cost and Usage Report (AWS CUR) in an AWS account. The company wants to allocate the DynamoDB costs to each tenant to match that tenant's resource consumption.

Which solution will provide a granular view of the DynamoDB cost for each tenant with the LEAST operational effort?

A. Associate a new tag that is named tenant ID with each table in DynamoDB. Activate the tag as a cost allocation tag in the AWS Billing and Cost Management console. Deploy new Lambda function code to log the tenant ID in Amazon CloudWatch Logs. Use the AWS CUR to separate DynamoDB consumption cost for each tenant ID.
B. Configure the Lambda functions to log the tenant ID and the number of RCUs and WCUs consumed from DynamoDB for each transaction to Amazon CloudWatch Logs. Deploy another Lambda function to calculate the tenant costs by using the logged capacity units and the overall DynamoDB cost from the AWS Cost Explorer API. Create an Amazon EventBridge rule to invoke the calculation Lambda function on a schedule.
C. Create a new partition key that associates DynamoDB items with individual tenants. Deploy a Lambda function to populate the new column as part of each transaction. Deploy another Lambda function to calculate the tenant costs by using Amazon Athena to calculate the number of tenant items from DynamoDB and the overall DynamoDB cost from the AWS CUR. Create an Amazon EventBridge rule to invoke the calculation Lambda function on a schedule.
D. Deploy a Lambda function to log the tenant ID, the size of each response, and the duration of the transaction call as custom metrics to Amazon CloudWatch Logs. Use CloudWatch Logs Insights to query the custom metrics for each tenant. Use AWS Pricing Calculator to obtain the overall DynamoDB costs and to calculate the tenant costs.
Suggested answer: B

Explanation:

Log Tenant ID and RCUs/WCUs:

Update the AWS Lambda functions to log the tenant ID and the number of Read Capacity Units (RCUs) and Write Capacity Units (WCUs) consumed from DynamoDB for each transaction. This data will be logged to Amazon CloudWatch Logs.

Calculate Tenant Costs:

Deploy an additional Lambda function that reads the logs from CloudWatch Logs, calculates the RCUs and WCUs used by each tenant, and then uses the AWS Cost Explorer API to retrieve the overall cost of DynamoDB usage. This function will then allocate the costs to each tenant based on their usage.

Scheduled Cost Calculation:

Create an Amazon EventBridge rule to trigger the cost calculation Lambda function at regular intervals (e.g., daily or hourly). This ensures that cost allocation is continuously updated and tenants are billed accurately based on their consumption.

This solution minimizes operational effort by automating the cost allocation process and ensuring that the company can accurately bill tenants based on their resource consumption.
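
A minimal sketch of the per-request logging half, assuming a hypothetical event shape, table name, and key schema:

```python
import boto3

dynamodb = boto3.client("dynamodb")

def handler(event, context):
    # Hypothetical request shape: the tenant ID arrives with every request.
    tenant_id = event["tenant_id"]

    # Ask DynamoDB to return the capacity consumed by this call.
    resp = dynamodb.get_item(
        TableName="SharedTenantTable",  # placeholder table name
        Key={"pk": {"S": f"TENANT#{tenant_id}"}},
        ReturnConsumedCapacity="TOTAL",
    )

    # One structured line per call in CloudWatch Logs; for a read operation
    # CapacityUnits is the RCUs consumed. The scheduled calculation function
    # aggregates these per tenant and prorates the overall DynamoDB cost
    # retrieved from the Cost Explorer API.
    consumed = resp["ConsumedCapacity"]["CapacityUnits"]
    print(f"TENANT_USAGE tenant_id={tenant_id} capacity_units={consumed}")

    return resp.get("Item")
```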

Reference

AWS Cost Explorer Documentation

Amazon CloudWatch Logs Documentation

AWS Lambda Documentation

A company runs an application in the cloud that consists of a database and a website. Users can post data to the website, have the data processed, and have the data sent back to them in an email. Data is stored in a MySQL database running on an Amazon EC2 instance. The database is running in a VPC with two private subnets. The website is running on Apache Tomcat in a single EC2 instance in a different VPC with one public subnet. There is a single VPC peering connection between the database and website VPCs.

The website has suffered several outages during the last month due to high traffic.

Which actions should a solutions architect take to increase the reliability of the application? (Select THREE.)

A. Place the Tomcat server in an Auto Scaling group with multiple EC2 instances behind an Application Load Balancer.
B. Provision an additional VPC peering connection.
C. Migrate the MySQL database to Amazon Aurora with one Aurora Replica.
D. Provision two NAT gateways in the database VPC.
E. Move the Tomcat server to the database VPC.
F. Create an additional public subnet in a different Availability Zone in the website VPC.
Suggested answer: A, C, F

Explanation:

Auto Scaling Group with Application Load Balancer:

Moving the Tomcat server to an Auto Scaling group ensures that the number of instances adjusts dynamically based on the traffic load. An Application Load Balancer (ALB) distributes incoming traffic across multiple instances, improving the application's reliability and availability.

Migrate to Amazon Aurora with Replica:

Migrating the MySQL database to Amazon Aurora and adding an Aurora Replica enhances the database's scalability and availability. Aurora is optimized for performance, and replicas help distribute read traffic, reducing the load on the primary instance.

Additional Public Subnet:

Creating an additional public subnet in a different Availability Zone enhances fault tolerance. This ensures that the website remains accessible even if one Availability Zone experiences issues.
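
A boto3 sketch of the Auto Scaling and load balancing pieces (options A and F), assuming placeholder VPC and subnet IDs and an existing launch template named tomcat-lt:

```python
import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

# Target group for the Tomcat instances; VPC and subnet IDs are placeholders.
tg_arn = elbv2.create_target_group(
    Name="tomcat-tg", Protocol="HTTP", Port=8080,
    VpcId="vpc-0website0000000000", HealthCheckPath="/healthz",
)["TargetGroups"][0]["TargetGroupArn"]

# ALB across two public subnets in different AZs (option F provides the
# second subnet). Listener wiring to the target group is omitted for brevity.
elbv2.create_load_balancer(
    Name="website-alb",
    Subnets=["subnet-0publicaz100000000", "subnet-0publicaz200000000"],
    Scheme="internet-facing",
)

# Option A: an Auto Scaling group keeps 2-6 Tomcat instances registered
# with the ALB target group.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="tomcat-asg",
    LaunchTemplate={"LaunchTemplateName": "tomcat-lt", "Version": "$Latest"},
    MinSize=2, MaxSize=6, DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0publicaz100000000,subnet-0publicaz200000000",
    TargetGroupARNs=[tg_arn],
)
```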

Reference

AWS Well-Architected Framework

Amazon Aurora Documentation

A company has implemented a new security requirement. According to the new requirement, the company must scan all traffic from corporate AWS instances in the company's VPC for violations of the company's security policies. As a result of these scans, the company can block access to and from specific IP addresses.

To meet the new requirement, the company deploys a set of Amazon EC2 instances in private subnets to serve as transparent proxies. The company installs approved proxy server software on these EC2 instances. The company modifies the route tables on all subnets to use the corresponding EC2 instances with proxy software as the default route. The company also creates security groups that are compliant with the security policies and assigns these security groups to the EC2 instances.

Despite these configurations, traffic from the EC2 instances in the private subnets is not being properly forwarded to the internet.

What should a solutions architect do to resolve this issue?

A. Disable source/destination checks on the EC2 instances that run the proxy software.
B. Add a rule to the security group that is assigned to the proxy EC2 instances to allow all traffic between instances that have this security group. Assign this security group to all EC2 instances in the VPC.
C. Change the VPC's DHCP options set. Set the DNS server options to point to the addresses of the proxy EC2 instances.
D. Assign one additional elastic network interface to each proxy EC2 instance. Ensure that one of these network interfaces has a route to the private subnets. Ensure that the other network interface has a route to the internet.
Suggested answer: A

Explanation:

Identify Proxy EC2 Instances:

Determine which EC2 instances in the private subnets are running the proxy server software.

Disable Source/Destination Checks:

For each of these EC2 instances, go to the AWS Management Console.

Navigate to the EC2 dashboard, select the instance, and choose 'Actions' > 'Networking' > 'Change Source/Dest. Check'.

Disable the source/destination check for these instances.

Disabling source/destination checks allows the EC2 instances to route traffic appropriately, enabling them to function as network appliances or proxies. This ensures that traffic from other instances in the private subnets can be routed through the proxy instances to the internet, meeting the company's security requirements.
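
The same change expressed as a short boto3 sketch; the instance IDs are placeholders for the proxy instances:

```python
import boto3

ec2 = boto3.client("ec2")

for instance_id in ["i-0proxy00000000001", "i-0proxy00000000002"]:
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        # Allow the instance to forward traffic that it neither sourced
        # nor is the final destination for.
        SourceDestCheck={"Value": False},
    )
```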

Reference

Amazon EC2 User Guide on Source/Destination Checks
