Amazon SAP-C02 Practice Test - Questions Answers, Page 47

Question list
Search
Search

List of questions

Search

Related questions











A company that develops consumer electronics with offices in Europe and Asia has 60 TB of software images stored on premises in Europe. The company wants to transfer the images to an Amazon S3 bucket in the ap-northeast-1 Region. New software images are created daily and must be encrypted in transit. The company needs a solution that does not require custom development to automatically transfer all existing and new software images to Amazon S3.

What is the next step in the transfer process?

A. Deploy an AWS DataSync agent and configure a task to transfer the images to the S3 bucket.

B. Configure Amazon Kinesis Data Firehose to transfer the images using S3 Transfer Acceleration.

C. Use an AWS Snowball device to transfer the images with the S3 bucket as the target.

D. Transfer the images over a Site-to-Site VPN connection using the S3 API with multipart upload.

Suggested answer: A

Explanation:

Deploy AWS DataSync Agent:

Install the DataSync agent on your on-premises environment. This can be done by downloading the agent as a virtual appliance and deploying it on VMware ESXi, Hyper-V, or KVM hypervisors.

Configure Source and Destination Locations:

Set up the source location pointing to your on-premises storage where the software images are currently stored.

Configure the destination location to point to your Amazon S3 bucket in the ap-northeast-1 Region.

Create and Schedule DataSync Tasks:

Create a DataSync task to automate the transfer process. This task will specify the source and destination locations and set options for how the data should be transferred.

Schedule the task to run at intervals that suit your data transfer requirements, ensuring new images are transferred as they are created.

Encryption in Transit:

AWS DataSync automatically encrypts data in transit using TLS, ensuring that your data is secure during the transfer process.

Monitoring and Management:

Use the DataSync console or the AWS CLI to monitor the progress of your data transfers and manage the tasks.

AWS DataSync is an efficient solution that automates and accelerates the process of transferring large amounts of data to AWS, handling encryption, data integrity checks, and optimizing network usage without requiring custom development.
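
As an illustration of these steps, the locations and the scheduled task can be created with the AWS SDK. The following boto3 sketch assumes the DataSync agent has already been activated and that the source is an NFS share; the hostname, bucket name, role, and ARNs are hypothetical placeholders.

```python
import boto3

datasync = boto3.client("datasync", region_name="ap-northeast-1")

# Hypothetical identifiers -- replace with values from your own account.
AGENT_ARN = "arn:aws:datasync:ap-northeast-1:111122223333:agent/agent-EXAMPLE"
BUCKET_ACCESS_ROLE_ARN = "arn:aws:iam::111122223333:role/DataSyncS3AccessRole"

# Source location: the on-premises share that holds the software images.
source = datasync.create_location_nfs(
    ServerHostname="onprem-file-server.example.com",
    Subdirectory="/software-images",
    OnPremConfig={"AgentArns": [AGENT_ARN]},
)

# Destination location: the S3 bucket in ap-northeast-1.
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-software-images",
    S3Config={"BucketAccessRoleArn": BUCKET_ACCESS_ROLE_ARN},
)

# Task that runs daily so new images are picked up automatically.
# DataSync encrypts data in transit with TLS by default.
task = datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="software-images-to-s3",
    Schedule={"ScheduleExpression": "rate(1 day)"},
)
print(task["TaskArn"])
```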

Reference

AWS Storage Blog on DataSync.

AWS DataSync Documentation.

An events company runs a ticketing platform on AWS. The company's customers configure and schedule their events on the platform. The events result in large increases of traffic to the platform. The company knows the date and time of each customer's events.

The company runs the platform on an Amazon Elastic Container Service (Amazon ECS) cluster. The ECS cluster consists of Amazon EC2 On-Demand Instances that are in an Auto Scaling group. The Auto Scaling group uses a predictive scaling policy.

The ECS cluster makes frequent requests to an Amazon S3 bucket to download ticket assets. The ECS cluster and the S3 bucket are in the same AWS Region and the same AWS account. Traffic between the ECS cluster and the S3 bucket flows across a NAT gateway.

The company needs to optimize the cost of the platform without decreasing the platform's availability.

Which combination of steps will meet these requirements? (Select TWO.)

A. Create a gateway VPC endpoint for the S3 bucket.

B. Add another ECS capacity provider that uses an Auto Scaling group of Spot Instances. Configure the new capacity provider strategy to have the same weight as the existing capacity provider strategy.

C. Create On-Demand Capacity Reservations for the applicable instance type for the time period of the scheduled scaling policies.

D. Enable S3 Transfer Acceleration on the S3 bucket.

E. Replace the predictive scaling policy with scheduled scaling policies for the scheduled events.

Suggested answer: A, B

Explanation:

Gateway VPC Endpoint for S3:

Create a gateway VPC endpoint for Amazon S3 in your VPC. This allows instances in your VPC to communicate with Amazon S3 without going through the internet, reducing data transfer costs and improving security.

Add Spot Instances to ECS Cluster:

Add another ECS capacity provider that uses an Auto Scaling group of Spot Instances. Configure this new capacity provider to share the load with the existing On-Demand Instances by setting an appropriate weight in the capacity provider strategy. Spot Instances offer significant cost savings compared to On-Demand Instances.

Configure Capacity Provider Strategy:

Adjust the ECS service's capacity provider strategy to utilize both On-Demand and Spot Instances effectively. This ensures a balanced distribution of tasks across both instance types, optimizing cost while maintaining availability.

By implementing a gateway VPC endpoint for S3 and incorporating Spot Instances into the ECS cluster, the company can significantly reduce operational costs without compromising on the availability or performance of the platform.
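
A minimal boto3 sketch of both steps, assuming hypothetical VPC, route table, Auto Scaling group, cluster, and capacity provider names:

```python
import boto3

ec2 = boto3.client("ec2")
ecs = boto3.client("ecs")

# Step A: gateway VPC endpoint so S3 traffic bypasses the NAT gateway.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                    # hypothetical VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",         # use your Region
    RouteTableIds=["rtb-0123456789abcdef0"],          # private route tables
)

# Step B: a Spot-backed capacity provider alongside the On-Demand one.
ecs.create_capacity_provider(
    name="spot-capacity-provider",
    autoScalingGroupProvider={
        # Hypothetical Auto Scaling group of Spot Instances.
        "autoScalingGroupArn": "arn:aws:autoscaling:us-east-1:111122223333:"
        "autoScalingGroup:uuid:autoScalingGroupName/spot-asg",
        "managedScaling": {"status": "ENABLED"},
    },
)

# Give both providers the same weight so tasks are spread across them.
ecs.put_cluster_capacity_providers(
    cluster="ticketing-cluster",
    capacityProviders=["on-demand-capacity-provider", "spot-capacity-provider"],
    defaultCapacityProviderStrategy=[
        {"capacityProvider": "on-demand-capacity-provider", "weight": 1},
        {"capacityProvider": "spot-capacity-provider", "weight": 1},
    ],
)
```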

Reference

AWS Cost Optimization Blog on VPC Endpoints

AWS ECS Documentation on Capacity Providers

A company uses AWS Organizations to manage its development environment. Each development team at the company has its own AWS account. Each account has a single VPC and CIDR blocks that do not overlap.

The company has an Amazon Aurora DB cluster in a shared services account. All the development teams need to work with live data from the DB cluster.

Which solution will provide the required connectivity to the DB cluster with the LEAST operational overhead?

A. Create an AWS Resource Access Manager (AWS RAM) resource share for the DB cluster. Share the DB cluster with all the development accounts.

B. Create a transit gateway in the shared services account. Create an AWS Resource Access Manager (AWS RAM) resource share for the transit gateway. Share the transit gateway with all the development accounts. Instruct the developers to accept the resource share. Configure networking.

C. Create an Application Load Balancer (ALB) that points to the IP address of the DB cluster. Create an AWS PrivateLink endpoint service that uses the ALB. Add permissions to allow each development account to connect to the endpoint service.

D. Create an AWS Site-to-Site VPN connection in the shared services account. Configure networking. Use AWS Marketplace VPN software in each development account to connect to the Site-to-Site VPN connection.

Suggested answer: B

Explanation:

Create a Transit Gateway:

In the shared services account, create a new AWS Transit Gateway. This serves as a central hub to connect multiple VPCs, simplifying the network topology and management.

Configure Transit Gateway Attachments:

Attach the VPC containing the Aurora DB cluster to the transit gateway. This allows the shared services VPC to communicate through the transit gateway.

Create Resource Share with AWS RAM:

Use AWS Resource Access Manager (AWS RAM) to create a resource share for the transit gateway. Share this resource with all development accounts. AWS RAM allows you to securely share your AWS resources across AWS accounts without needing to duplicate them.

Accept Resource Shares in Development Accounts:

Instruct each development team to log into their respective AWS accounts and accept the transit gateway resource share. This step is crucial for enabling cross-account access to the shared transit gateway.

Configure VPC Attachments in Development Accounts:

Each development account needs to attach their VPC to the shared transit gateway. This allows their VPCs to route traffic through the transit gateway to the Aurora DB cluster in the shared services account.

Update Route Tables:

Update the route tables in each VPC to direct traffic intended for the Aurora DB cluster through the transit gateway. This ensures that network traffic is properly routed between the development VPCs and the shared services VPC.

Using a transit gateway simplifies the network management and reduces operational overhead by providing a scalable and efficient way to interconnect multiple VPCs across different AWS accounts.
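
A brief boto3 sketch of the shared services account side (creating the transit gateway, attaching the shared VPC, and sharing the gateway through AWS RAM); the VPC, subnet, and organization identifiers are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

# Create the transit gateway in the shared services account.
tgw = ec2.create_transit_gateway(Description="Shared services hub")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]
tgw_arn = tgw["TransitGateway"]["TransitGatewayArn"]

# Attach the shared services VPC (the one that contains the Aurora DB cluster).
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0sharedservices",                  # hypothetical VPC ID
    SubnetIds=["subnet-0aaa", "subnet-0bbb"],     # hypothetical subnet IDs
)

# Share the transit gateway with the development accounts through AWS RAM.
ram.create_resource_share(
    name="shared-transit-gateway",
    resourceArns=[tgw_arn],
    principals=["arn:aws:organizations::111122223333:organization/o-example"],
    allowExternalPrincipals=False,
)
```

Each development account then accepts the share, attaches its own VPC to the transit gateway, and updates its route tables as described above.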

Reference

AWS Database Blog on RDS Proxy for Cross-Account Access.

AWS Architecture Blog on Cross-Account and Cross-Region Aurora Setup.

DEV Community on Managing Multiple AWS Accounts with Organizations.

A delivery company is running a serverless solution in the AWS Cloud. The solution manages user data, delivery information, and past purchase details. The solution consists of several microservices. The central user service stores sensitive data in an Amazon DynamoDB table. Several of the other microservices store a copy of parts of the sensitive data in different storage services.

The company needs the ability to delete user information upon request. As soon as the central user service deletes a user, every other microservice must also delete its copy of the data immediately.

Which solution will meet these requirements?

A. Activate DynamoDB Streams on the DynamoDB table. Create an AWS Lambda trigger for the DynamoDB stream that will post events about user deletion in an Amazon Simple Queue Service (Amazon SQS) queue. Configure each microservice to poll the queue and delete the user from the DynamoDB table.

B. Set up DynamoDB event notifications on the DynamoDB table. Create an Amazon Simple Notification Service (Amazon SNS) topic as a target for the DynamoDB event notification. Configure each microservice to subscribe to the SNS topic and to delete the user from the DynamoDB table.

C. Configure the central user service to post an event on a custom Amazon EventBridge event bus when the company deletes a user. Create an EventBridge rule for each microservice to match the user deletion event pattern and invoke logic in the microservice to delete the user from the DynamoDB table.

D. Configure the central user service to post a message on an Amazon Simple Queue Service (Amazon SQS) queue when the company deletes a user. Configure each microservice to create an event filter on the SQS queue and to delete the user from the DynamoDB table.

Suggested answer: C

Explanation:

Set Up EventBridge Event Bus:

Step 1: Open the Amazon EventBridge console and create a custom event bus. This bus will be used to handle user deletion events.

Step 2: Name the event bus appropriately (e.g., user-deletion-bus).

Post Events on User Deletion:

Step 1: Modify the central user service to post an event to the custom EventBridge event bus whenever a user is deleted.

Step 2: Ensure the event includes relevant details such as the user ID and any other necessary metadata.

Create EventBridge Rules for Microservices:

Step 1: For each microservice that needs to delete user data, create a new rule in EventBridge that triggers on the user deletion event.

Step 2: Define the event pattern to match the user deletion event. This pattern should include the event details posted by the central user service.

Invoke Microservice Logic:

Step 1: Configure the EventBridge rule to invoke a target, such as an AWS Lambda function, which contains the logic to delete the user data from the microservice's data store.

Step 2: Each microservice should have its Lambda function or equivalent logic to handle the deletion of user data upon receiving the event.

Using Amazon EventBridge ensures a scalable, reliable, and decoupled approach to handle the deletion of user data across multiple microservices. This setup allows each microservice to independently process user deletion events without direct dependencies on other services.
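
A minimal boto3 sketch of the publishing and subscribing sides, using the user-deletion-bus name from above; the microservice name, user ID, and Lambda function ARN are hypothetical:

```python
import json
import boto3

events = boto3.client("events")

BUS_NAME = "user-deletion-bus"  # the custom event bus created earlier

# Central user service: publish an event after the user record is deleted.
events.put_events(
    Entries=[{
        "EventBusName": BUS_NAME,
        "Source": "user-service",
        "DetailType": "UserDeleted",
        "Detail": json.dumps({"userId": "12345"}),
    }]
)

# Per microservice: a rule that matches the deletion event and invokes that
# microservice's deletion logic (a Lambda function in this sketch).
events.put_rule(
    Name="orders-service-user-deletion",
    EventBusName=BUS_NAME,
    EventPattern=json.dumps({
        "source": ["user-service"],
        "detail-type": ["UserDeleted"],
    }),
)
events.put_targets(
    Rule="orders-service-user-deletion",
    EventBusName=BUS_NAME,
    Targets=[{
        "Id": "orders-delete-user",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:orders-delete-user",
    }],
)
```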

Reference

AWS EventBridge Documentation

DynamoDB Streams and AWS Lambda Triggers

Implementing the Transactional Outbox Pattern with EventBridge Pipes (AWS Documentation).

To abide by industry regulations, a solutions architect must design a solution that will store a company's critical data in multiple public AWS Regions, including in the United States, where the company's headquarters is located. The solutions architect is required to provide access to the data stored in AWS to the company's global WAN network. The security team mandates that no traffic accessing this data should traverse the public internet.

How should the solutions architect design a highly available solution that meets the requirements and is cost-effective?

A. Establish AWS Direct Connect connections from the company headquarters to all AWS Regions in use. Use the company WAN to send traffic over to the headquarters and then to the respective DX connection to access the data.

B. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic over a DX connection. Use inter-Region VPC peering to access the data in other AWS Regions.

C. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic over a DX connection. Use an AWS transit VPC solution to access data in other AWS Regions.

D. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic over a DX connection. Use Direct Connect Gateway to access data in other AWS Regions.

Suggested answer: D

Explanation:

Establish AWS Direct Connect Connections:

Step 1: Set up two AWS Direct Connect (DX) connections from the company headquarters to a chosen AWS Region. This provides a redundant and high-availability setup to ensure continuous connectivity.

Step 2: Ensure that these DX connections terminate in a specific Direct Connect location associated with the chosen AWS Region.

Use Company WAN:

Step 1: Configure the company's global WAN to route traffic through the established Direct Connect connections.

Step 2: This setup ensures that all traffic between the company's headquarters and AWS does not traverse the public internet, maintaining compliance with security requirements.

Set Up Direct Connect Gateway:

Step 1: Create a Direct Connect Gateway in the AWS Management Console. This gateway allows you to connect your Direct Connect connections to multiple VPCs across different AWS Regions.

Step 2: Associate the Direct Connect Gateway with the VPCs in the various Regions where your critical data is stored. This enables access to data in multiple Regions through a single Direct Connect connection.

By using Direct Connect and Direct Connect Gateway, the company can achieve secure, reliable, and cost-effective access to data stored across multiple AWS Regions without using the public internet, ensuring compliance with industry regulations.
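
A short boto3 sketch of the Direct Connect Gateway setup, assuming hypothetical virtual private gateway IDs for the VPCs in each Region that stores the critical data:

```python
import boto3

dx = boto3.client("directconnect")

# Create the Direct Connect gateway (amazonSideAsn is the Amazon-side private ASN).
gateway = dx.create_direct_connect_gateway(
    directConnectGatewayName="global-data-access",
    amazonSideAsn=64512,
)
gateway_id = gateway["directConnectGateway"]["directConnectGatewayId"]

# Associate the gateway with the virtual private gateway of a VPC in each
# Region that stores the critical data (IDs are placeholders).
for vgw_id in ["vgw-0useast1example", "vgw-0euwest1example"]:
    dx.create_direct_connect_gateway_association(
        directConnectGatewayId=gateway_id,
        virtualGatewayId=vgw_id,
    )
```

The private virtual interfaces on the two DX connections are then attached to this gateway, so a single pair of connections reaches VPCs in multiple Regions.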

Reference

AWS Direct Connect Documentation

Building a Scalable and Secure Multi-VPC AWS Network Infrastructure (AWS Documentation).

A company has developed an application that is running Windows Server on VMware vSphere VMs that the company hosts on premises. The application data is stored in a proprietary format that must be read through the application. The company manually provisioned the servers and the application.

As part of its disaster recovery plan, the company wants the ability to host its application on AWS temporarily if the company's on-premises environment becomes unavailable. The company wants the application to return to on-premises hosting after a disaster recovery event is complete. The RPO is 5 minutes.

Which solution meets these requirements with the LEAST amount of operational overhead?

A. Configure AWS DataSync. Replicate the data to Amazon Elastic Block Store (Amazon EBS) volumes. When the on-premises environment is unavailable, use AWS CloudFormation templates to provision Amazon EC2 instances and attach the EBS volumes.

B. Configure AWS Elastic Disaster Recovery. Replicate the data to replication Amazon EC2 instances that are attached to Amazon Elastic Block Store (Amazon EBS) volumes. When the on-premises environment is unavailable, use Elastic Disaster Recovery to launch EC2 instances that use the replicated volumes.

C. Provision an AWS Storage Gateway file gateway. Replicate the data to an Amazon S3 bucket. When the on-premises environment is unavailable, use AWS Backup to restore the data to Amazon Elastic Block Store (Amazon EBS) volumes and launch Amazon EC2 instances from these EBS volumes.

D. Provision an Amazon FSx for Windows File Server file system on AWS. Replicate the data to the file system. When the on-premises environment is unavailable, use AWS CloudFormation templates to provision Amazon EC2 instances and use AWS CloudFormation Init commands to mount the Amazon FSx file shares.

Suggested answer: B

Explanation:

Set Up AWS Elastic Disaster Recovery:

Navigate to the AWS Elastic Disaster Recovery (DRS) console.

Configure the Elastic Disaster Recovery service to replicate your on-premises VMware vSphere VMs to Amazon EC2 instances. This involves installing the AWS Replication Agent on your VMs.

Configure Replication Settings:

Define the replication settings, including the Amazon EC2 instance type and the Amazon EBS volume configuration. Ensure that the replication frequency meets your Recovery Point Objective (RPO) of 5 minutes.

Monitor Data Replication:

Monitor the initial data replication process in the Elastic Disaster Recovery console. Once the initial sync is complete, the status should show as 'Healthy' indicating that the data replication is up-to-date and within the RPO requirements.

Disaster Recovery (Failover):

In the event of a disaster, initiate a failover from the Elastic Disaster Recovery console. This will launch the replicated Amazon EC2 instances using the Amazon EBS volumes with the latest data.

Failback Process:

Once the on-premises environment is restored, perform a failback operation to synchronize the data from AWS back to your on-premises VMware environment. Use the failback client provided by AWS Elastic Disaster Recovery to ensure data consistency and minimal downtime during the failback process.

Using AWS Elastic Disaster Recovery provides a low-overhead, automated solution for disaster recovery that ensures minimal data loss and meets the RPO requirement of 5 minutes.
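
For reference, recovery can also be initiated programmatically. The following boto3 sketch assumes the AWS Replication Agent is already installed and the source servers are replicating; it simply launches recovery instances for all source servers.

```python
import boto3

drs = boto3.client("drs")

# List the replicating source servers (the on-premises VMs running the agent).
servers = drs.describe_source_servers(filters={})["items"]
source_server_ids = [s["sourceServerID"] for s in servers]

# During a disaster, launch recovery instances from the latest snapshots.
# Setting isDrill=True instead would launch a non-disruptive DR drill.
drs.start_recovery(
    isDrill=False,
    sourceServers=[{"sourceServerID": sid} for sid in source_server_ids],
)
```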

A company needs to improve the security of its web-based application on AWS. The application uses Amazon CloudFront with two custom origins. The first custom origin routes requests to an Amazon API Gateway HTTP API. The second custom origin routes traffic to an Application Load Balancer (ALB). The application integrates with an OpenID Connect (OIDC) identity provider (IdP) for user management.

A security audit shows that a JSON Web Token (JWT) authorizer provides access to the API. The security audit also shows that the ALB accepts requests from unauthenticated users.

A solutions architect must design a solution to ensure that all backend services respond to only authenticated users.

Which solution will meet this requirement?

A. Configure the ALB to enforce authentication and authorization by integrating the ALB with the IdP. Allow only authenticated users to access the backend services.

B. Modify the CloudFront configuration to use signed URLs. Implement a permissive signing policy that allows any request to access the backend services.

C. Create an AWS WAF web ACL that filters out unauthenticated requests at the ALB level. Allow only authenticated traffic to reach the backend services.

D. Enable AWS CloudTrail to log all requests that come to the ALB. Create an AWS Lambda function to analyze the logs and block any requests that come from unauthenticated users.

Suggested answer: A

Explanation:

Integrate ALB with OIDC IdP:

In the AWS Management Console, navigate to the Application Load Balancer (ALB) settings.

Configure the ALB to use the OpenID Connect (OIDC) IdP for authentication. This ensures that all requests routed through the ALB are authenticated using the IdP.

Set Up Authentication Rules:

Create a listener rule on the ALB that requires authentication. This rule will forward requests to the IdP for user authentication before allowing access to the backend services.

Restrict Unauthenticated Access:

Ensure the ALB only forwards requests to backend services if the user is authenticated. Unauthenticated requests should be blocked or redirected to the IdP for authentication.

Update CloudFront Configuration:

Modify the CloudFront distribution to forward authenticated requests to the ALB. Ensure that the ALB and API Gateway accept only requests coming through the CloudFront distribution to enforce consistent authentication and security.

By enforcing authentication at the ALB level, you ensure that all backend services are accessed only by authenticated users, enhancing the overall security of the web application.
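
A minimal boto3 sketch of the listener configuration, assuming an HTTPS listener and hypothetical listener, target group, and IdP endpoint values:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical ARNs -- replace with the ALB's HTTPS listener and target group.
LISTENER_ARN = ("arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                "listener/app/web/abc123/def456")
TARGET_GROUP_ARN = ("arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                    "targetgroup/web/abc123")

# Require OIDC authentication before any request is forwarded to the backend.
elbv2.modify_listener(
    ListenerArn=LISTENER_ARN,
    DefaultActions=[
        {
            "Type": "authenticate-oidc",
            "Order": 1,
            "AuthenticateOidcConfig": {
                "Issuer": "https://idp.example.com",
                "AuthorizationEndpoint": "https://idp.example.com/authorize",
                "TokenEndpoint": "https://idp.example.com/token",
                "UserInfoEndpoint": "https://idp.example.com/userinfo",
                "ClientId": "example-client-id",
                "ClientSecret": "example-client-secret",
                # Redirect unauthenticated users to the IdP instead of denying.
                "OnUnauthenticatedRequest": "authenticate",
            },
        },
        {"Type": "forward", "Order": 2, "TargetGroupArn": TARGET_GROUP_ARN},
    ],
)
```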

A company is running its solution on AWS in a manually created VPC. The company is using AWS CloudFormation to provision other parts of the infrastructure. According to a new requirement, the company must manage all infrastructure in an automatic way.

What should the company do to meet this new requirement with the LEAST effort?

A. Create a new AWS Cloud Development Kit (AWS CDK) stack that strictly provisions the existing VPC resources and configuration. Use AWS CDK to import the VPC into the stack and to manage the VPC.

B. Create a CloudFormation stack set that creates the VPC. Use the stack set to import the VPC into the stack.

C. Create a new CloudFormation template that strictly provisions the existing VPC resources and configuration. From the CloudFormation console, create a new stack by importing the existing resources.

D. Create a new CloudFormation template that creates the VPC. Use the AWS Serverless Application Model (AWS SAM) CLI to import the VPC.

Suggested answer: C

Explanation:

Creating the Template:

Start by creating a CloudFormation template that includes all the VPC resources. This template should accurately reflect the current state and configuration of the VPC.

Using the CloudFormation Console:

Open the AWS Management Console and navigate to CloudFormation.

Choose 'Create stack' and then select 'With existing resources (import resources)'.

Specifying the Template:

Upload the previously created template or specify the Amazon S3 URL where the template is stored.

Identifying the Resources:

On the 'Identify resources' page, provide the identifiers for each VPC resource you wish to import. For example, for an AWS::EC2::VPC resource, use the VPC ID as the identifier.

Creating the Stack:

Complete the stack creation process by providing stack details and reviewing the changes. This will create a change set that includes the import operation.

Executing the Change Set:

Execute the change set to import the resources into the CloudFormation stack, making them managed by CloudFormation.

Verification and Drift Detection:

After the import is complete, run drift detection to ensure the actual configuration matches the template configuration.

This approach allows the company to manage their VPC and other resources via CloudFormation without the need to recreate resources, ensuring a smooth transition to automated infrastructure management.
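
The same import can be performed with the AWS SDK instead of the console. The following boto3 sketch uses a hypothetical stack name, VPC ID, and CIDR block; the template properties must match the actual configuration of the existing VPC.

```python
import boto3

cfn = boto3.client("cloudformation")

# Template that declares the existing VPC. DeletionPolicy: Retain is required
# for every resource that is imported.
TEMPLATE_BODY = """
Resources:
  ExistingVpc:
    Type: AWS::EC2::VPC
    DeletionPolicy: Retain
    Properties:
      CidrBlock: 10.0.0.0/16
"""

# Create an IMPORT change set that maps the logical resource to the real VPC.
cfn.create_change_set(
    StackName="network-stack",
    ChangeSetName="import-existing-vpc",
    ChangeSetType="IMPORT",
    TemplateBody=TEMPLATE_BODY,
    ResourcesToImport=[{
        "ResourceType": "AWS::EC2::VPC",
        "LogicalResourceId": "ExistingVpc",
        "ResourceIdentifier": {"VpcId": "vpc-0123456789abcdef0"},  # existing VPC
    }],
)

# Wait for the change set to be ready, review it, then execute the import.
cfn.get_waiter("change_set_create_complete").wait(
    StackName="network-stack", ChangeSetName="import-existing-vpc"
)
cfn.execute_change_set(
    StackName="network-stack", ChangeSetName="import-existing-vpc"
)
```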

Reference

Creating a stack from existing resources - AWS CloudFormation (AWS Documentation).

Generating templates for existing resources - AWS CloudFormation (AWS Documentation).

Bringing existing resources into CloudFormation management (AWS Documentation).

A company runs a software-as-a-service (SaaS) application on AWS. The application consists of AWS Lambda functions and an Amazon RDS for MySQL Multi-AZ database. During market events, the application has a much higher workload than normal. Users notice slow response times during the peak periods because of many database connections. The company needs to improve the scalable performance and availability of the database.

Which solution meets these requirements?

A. Create an Amazon CloudWatch alarm action that triggers a Lambda function to add an Amazon RDS for MySQL read replica when resource utilization hits a threshold.

B. Migrate the database to Amazon Aurora, and add a read replica. Add a database connection pool outside of the Lambda handler function.

C. Migrate the database to Amazon Aurora and add a read replica. Use Amazon Route 53 weighted records.

D. Migrate the database to Amazon Aurora and add an Aurora Replica. Configure Amazon RDS Proxy to manage database connection pools.

Suggested answer: D

Explanation:

Migrate to Amazon Aurora:

Amazon Aurora is a MySQL-compatible, high-performance database designed to provide higher throughput than standard MySQL. Migrating the database to Aurora will enhance the performance and scalability of the database, especially under heavy workloads.

Add Aurora Replica:

Aurora Replicas provide read scalability and improve availability. Adding an Aurora Replica allows read operations to be distributed, thereby reducing the load on the primary instance and improving response times during peak periods.

Configure Amazon RDS Proxy:

Amazon RDS Proxy acts as an intermediary between the application and the Aurora database, managing connection pools efficiently. RDS Proxy reduces the overhead of opening and closing database connections, thus maintaining fewer active connections to the database and handling surges in database connections from the Lambda functions more effectively.

This configuration reduces the database's resource usage and improves its ability to handle high volumes of concurrent connections.
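
A minimal boto3 sketch of the RDS Proxy setup, assuming hypothetical proxy, secret, role, subnet, and cluster identifiers; the Lambda functions would then connect to the proxy endpoint instead of the cluster endpoint.

```python
import boto3

rds = boto3.client("rds")

# Create an RDS Proxy in front of the Aurora MySQL cluster. The secret holds
# the database credentials; all ARNs and IDs here are placeholders.
rds.create_db_proxy(
    DBProxyName="saas-aurora-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:aurora-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secret-access",
    VpcSubnetIds=["subnet-0aaa", "subnet-0bbb"],
)

# Register the Aurora cluster as the proxy's target.
rds.register_db_proxy_targets(
    DBProxyName="saas-aurora-proxy",
    DBClusterIdentifiers=["saas-aurora-cluster"],
)
```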

Reference

AWS Database Blog on RDS Proxy.

AWS Compute Blog on Using RDS Proxy with Lambda.

A company provides a centralized Amazon EC2 application hosted in a single shared VPC. The centralized application must be accessible from client applications running in the VPCs of other business units. The centralized application front end is configured with a Network Load Balancer (NLB) for scalability.

Up to 10 business unit VPCs will need to be connected to the shared VPC. Some of the business unit VPC CIDR blocks overlap with the shared VPC, and some overlap with each other. Network connectivity to the centralized application in the shared VPC should be allowed from authorized business unit VPCs only.

Which network configuration should a solutions architect use to provide connectivity from the client applications in the business unit VPCs to the centralized application in the shared VPC?

A. Create an AWS Transit Gateway. Attach the shared VPC and the authorized business unit VPCs to the transit gateway. Create a single transit gateway route table and associate it with all of the attached VPCs. Allow automatic propagation of routes from the attachments into the route table. Configure VPC routing tables to send traffic to the transit gateway.

B. Create a VPC endpoint service using the centralized application NLB and enable the option to require endpoint acceptance. Create a VPC endpoint in each of the business unit VPCs using the service name of the endpoint service. Accept authorized endpoint requests from the endpoint service console.

C. Create a VPC peering connection from each business unit VPC to the shared VPC. Accept the VPC peering connections from the shared VPC console. Configure VPC routing tables to send traffic to the VPC peering connection.

D. Configure a virtual private gateway for the shared VPC and create customer gateways for each of the authorized business unit VPCs. Establish a Site-to-Site VPN connection from the business unit VPCs to the shared VPC. Configure VPC routing tables to send traffic to the VPN connection.

Suggested answer: B

Explanation:

Create VPC Endpoint Service:

In the shared VPC, create a VPC endpoint service using the Network Load Balancer (NLB) that fronts the centralized application.

Enable the option to require endpoint acceptance to control which business unit VPCs can connect to the service.

Set Up VPC Endpoints in Business Unit VPCs:

In each business unit VPC, create a VPC endpoint that points to the VPC endpoint service created in the shared VPC.

Use the service name of the endpoint service created in the shared VPC for configuration.

Accept Endpoint Requests:

From the VPC endpoint service console in the shared VPC, review and accept endpoint connection requests from authorized business unit VPCs. This ensures that only authorized VPCs can access the centralized application.

Configure Routing:

Update the route tables in each business unit VPC to direct traffic destined for the centralized application through the VPC endpoint.

This solution ensures secure, private connectivity between the business unit VPCs and the shared VPC, even if there are overlapping CIDR blocks. It leverages AWS PrivateLink and VPC endpoints to provide scalable and controlled access.
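
A minimal boto3 sketch of the PrivateLink configuration, assuming hypothetical NLB, VPC, subnet, security group, and endpoint identifiers (the interface endpoint would be created from each business unit account, not the shared one):

```python
import boto3

ec2 = boto3.client("ec2")

# Shared VPC: expose the NLB as an endpoint service that requires acceptance.
service = ec2.create_vpc_endpoint_service_configuration(
    AcceptanceRequired=True,
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/net/central-app/abc123"          # hypothetical NLB ARN
    ],
)
service_name = service["ServiceConfiguration"]["ServiceName"]
service_id = service["ServiceConfiguration"]["ServiceId"]

# Business unit VPC (run in that account): create an interface endpoint
# that targets the shared service by its service name.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0businessunit",                         # hypothetical VPC ID
    ServiceName=service_name,
    SubnetIds=["subnet-0ccc"],
    SecurityGroupIds=["sg-0ddd"],
)

# Shared VPC: accept only the connection requests from authorized accounts.
ec2.accept_vpc_endpoint_connections(
    ServiceId=service_id,
    VpcEndpointIds=["vpce-0eee"],                      # pending endpoint ID
)
```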
