Amazon SAP-C02 Practice Test - Questions Answers, Page 31

A company is planning a one-time migration of an on-premises MySQL database to Amazon Aurora MySQL in the us-east-1 Region. The company's current internet connection has limited bandwidth.

The on-premises MySQL database is 60 TB in size. The company estimates that it will take a month to transfer the data to AWS over the current internet connection.

The company needs a migration solution that will migrate the database more quickly. Which solution will migrate the database in the LEAST amount of time?

A.
Request a 1 Gbps AWS Direct Connect connection between the on-premises data center and AWS. Use AWS Database Migration Service (AWS DMS) to migrate the on-premises MySQL database to Aurora MySQL.
B.
Use AWS DataSync with the current internet connection to accelerate the data transfer between the on-premises data center and AWS. Use AWS Application Migration Service to migrate the on-premises MySQL database to Aurora MySQL.
C.
Order an AWS Snowball Edge device. Load the data into an Amazon S3 bucket by using the S3 interface. Use AWS Database Migration Service (AWS DMS) to migrate the data from Amazon S3 to Aurora MySQL.
D.
Order an AWS Snowball device. Load the data into an Amazon S3 bucket by using the S3 Adapter for Snowball. Use AWS Application Migration Service to migrate the data from Amazon S3 to Aurora MySQL.
Suggested answer: C
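
For intuition on why the Snowball Edge option wins, a back-of-the-envelope calculation helps (assuming the 1 Gbps link in option A could be fully saturated, which real-world overhead rarely allows). Also note that provisioning a new Direct Connect connection typically takes weeks, which by itself rules out option A for a one-time, time-sensitive migration.

# Rough transfer-time estimate for 60 TB over a 1 Gbps link.
# Assumes perfect link utilization; real throughput is usually lower.
data_bits = 60 * 10**12 * 8   # 60 TB (decimal) in bits
link_bps = 1 * 10**9          # 1 Gbps

days = data_bits / link_bps / 86400
print(f"~{days:.1f} days of saturated 1 Gbps transfer")  # ~5.6 days

# A Snowball Edge device is loaded at local network speeds and shipped,
# typically completing an end-to-end transfer of this size in about a
# week, with no wait for a Direct Connect circuit to be provisioned.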

A company uses AWS CloudFormation to deploy applications within multiple VPCs that are all attached to a transit gateway. Each VPC that sends traffic to the public internet must send the traffic through a shared services VPC. Each subnet within a VPC uses the default VPC route table, and the traffic is routed to the transit gateway. The transit gateway uses its default route table for any VPC attachment. A security audit reveals that an Amazon EC2 instance that is deployed within a VPC can communicate with an EC2 instance that is deployed in any of the company's other VPCs. A solutions architect needs to limit the traffic between the VPCs. Each VPC must be able to communicate only with a predefined, limited set of authorized VPCs.

What should the solutions architect do to meet these requirements?

A.
Update the network ACL of each subnet within a VPC to allow outbound traffic only to the authorized VPCs. Remove all deny rules except the default deny rule.
B.
Update all the security groups that are used within a VPC to deny outbound traffic to security groups that are used within the unauthorized VPCs.
C.
Create a dedicated transit gateway route table for each VPC attachment. Route traffic only to the authorized VPCs.
D.
Update the main route table of each VPC to route traffic only to the authorized VPCs through the transit gateway.
Suggested answer: C

Explanation:

You can segment your network by creating multiple route tables in an AWS Transit Gateway and associating Amazon VPCs and VPNs with them. This allows you to create isolated networks inside an AWS Transit Gateway, similar to virtual routing and forwarding (VRF) in traditional networks. The AWS Transit Gateway has a default route table; the use of multiple route tables is optional.
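
A minimal sketch of option C with boto3 (all identifiers are hypothetical): create a dedicated route table for an application VPC attachment, associate the attachment with it, and add routes only for authorized destinations such as the shared services VPC.

import boto3

ec2 = boto3.client("ec2")

# Hypothetical identifiers for illustration only.
TGW_ID = "tgw-0123456789abcdef0"
APP_VPC_ATTACHMENT = "tgw-attach-0aaa1111bbbb2222c"
SHARED_SVCS_ATTACHMENT = "tgw-attach-0ddd3333eeee4444f"
SHARED_SVCS_CIDR = "10.100.0.0/16"

# 1. Create a dedicated route table for the application VPC attachment.
rt = ec2.create_transit_gateway_route_table(TransitGatewayId=TGW_ID)
rt_id = rt["TransitGatewayRouteTable"]["TransitGatewayRouteTableId"]

# 2. Associate the attachment with its dedicated route table (after
#    disassociating it from the transit gateway's default route table;
#    an attachment can be associated with only one table at a time).
ec2.associate_transit_gateway_route_table(
    TransitGatewayRouteTableId=rt_id,
    TransitGatewayAttachmentId=APP_VPC_ATTACHMENT,
)

# 3. Add routes only for authorized destinations, e.g. the shared
#    services VPC. VPCs with no route here are now unreachable.
ec2.create_transit_gateway_route(
    DestinationCidrBlock=SHARED_SVCS_CIDR,
    TransitGatewayRouteTableId=rt_id,
    TransitGatewayAttachmentId=SHARED_SVCS_ATTACHMENT,
)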

A company has implemented an ordering system using an event-driven architecture. During initial testing, the system stopped processing orders. Further log analysis revealed that one order message in an Amazon Simple Queue Service (Amazon SQS) standard queue was causing an error on the backend and blocking all subsequent order messages. The visibility timeout of the queue is set to 30 seconds, and the backend processing timeout is set to 10 seconds. A solutions architect needs to analyze faulty order messages and ensure that the system continues to process subsequent messages.

Which step should the solutions architect take to meet these requirements?

A.
Increase the backend processing timeout to 30 seconds to match the visibility timeout.
B.
Reduce the visibility timeout of the queue to automatically remove the faulty message.
C.
Configure a new SQS FIFO queue as a dead-letter queue to isolate the faulty messages.
D.
Configure a new SQS standard queue as a dead-letter queue to isolate the faulty messages.
Suggested answer: D

Explanation:

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html
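
A minimal sketch of option D (queue names and account ID are hypothetical): attach a standard dead-letter queue via a redrive policy so that a message that repeatedly fails processing is moved aside for analysis instead of blocking the queue. The DLQ must be the same type as the source queue, which is why a standard queue (D) is correct rather than a FIFO queue (C).

import boto3, json

sqs = boto3.client("sqs")

# Hypothetical queue names for illustration.
dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# After 3 failed receives, SQS moves the message to the DLQ, where it
# can be inspected while new order messages keep flowing.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/orders",
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "3"}
        )
    },
)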

A company has an IoT platform that runs in an on-premises environment. The platform consists of a server that connects to IoT devices by using the MQTT protocol. The platform collects telemetry data from the devices at least once every 5 minutes. The platform also stores device metadata in a MongoDB cluster. An application that is installed on an on-premises machine runs periodic jobs to aggregate and transform the telemetry and device metadata. The application creates reports that users view by using another web application that runs on the same on-premises machine. The periodic jobs take 120-600 seconds to run. However, the web application is always running.

The company is moving the platform to AWS and must reduce the operational overhead of the stack.

Which combination of steps will meet these requirements with the LEAST operational overhead? (Select THREE.)

A.
Use AWS Lambda functions to connect to the IoT devices.
B.
Configure the IoT devices to publish to AWS IoT Core.
C.
Write the metadata to a self-managed MongoDB database on an Amazon EC2 instance.
D.
Write the metadata to Amazon DocumentDB (with MongoDB compatibility).
E.
Use AWS Step Functions state machines with AWS Lambda tasks to prepare the reports and to write the reports to Amazon S3. Use Amazon CloudFront with an S3 origin to serve the reports.
F.
Use an Amazon Elastic Kubernetes Service (Amazon EKS) cluster with Amazon EC2 instances to prepare the reports. Use an ingress controller in the EKS cluster to serve the reports.
Suggested answer: B, D, E

Explanation:

https://aws.amazon.com/step-functions/use-cases/
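
As a sketch of option E (function names, ARNs, and the state machine layout are hypothetical), the periodic 120-600 second jobs map naturally onto a Step Functions state machine whose Lambda tasks aggregate the data and write the finished report to S3; an EventBridge schedule can start an execution periodically, and CloudFront with an S3 origin serves the reports.

import boto3, json

sfn = boto3.client("stepfunctions")

# Hypothetical ARNs for illustration only.
definition = {
    "StartAt": "AggregateTelemetry",
    "States": {
        "AggregateTelemetry": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:aggregate",
            "Next": "WriteReportToS3",
        },
        "WriteReportToS3": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:write-report",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="report-pipeline",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/report-pipeline-role",
)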

A solutions architect has an operational workload deployed on Amazon EC2 instances in an Auto Scaling group. The VPC architecture spans two Availability Zones (AZs) with a subnet in each that the Auto Scaling group is targeting. The VPC is connected to an on-premises environment, and connectivity cannot be interrupted. The maximum size of the Auto Scaling group is 20 instances in service. The VPC IPv4 addressing is as follows:

VPC CIDR: 10.0.0.0/23
AZ1 subnet CIDR: 10.0.0.0/24
AZ2 subnet CIDR: 10.0.1.0/24

Since deployment, a third AZ has become available in the Region. The solutions architect wants to adopt the new AZ without adding additional IPv4 address space and without service downtime.

Which solution will meet these requirements?

A.
Update the Auto Scaling group to use the AZ2 subnet only. Delete and re-create the AZ1 subnet using half the previous address space. Adjust the Auto Scaling group to also use the new AZ1 subnet. When the instances are healthy, adjust the Auto Scaling group to use the AZ1 subnet only. Remove the current AZ2 subnet. Create a new AZ2 subnet using the second half of the address space from the original AZ1 subnet. Create a new AZ3 subnet using half the original AZ2 subnet address space, then update the Auto Scaling group to target all three new subnets.
B.
Terminate the EC2 instances in the AZ1 subnet. Delete and re-create the AZ1 subnet using half the address space. Update the Auto Scaling group to use this new subnet. Repeat this for the second AZ. Define a new subnet in AZ3, then update the Auto Scaling group to target all three new subnets.
C.
Create a new VPC with the same IPv4 address space and define three subnets, with one for each AZ. Update the existing Auto Scaling group to target the new subnets in the new VPC.
D.
Update the Auto Scaling group to use the AZ2 subnet only. Update the AZ1 subnet to have half the previous address space. Adjust the Auto Scaling group to also use the AZ1 subnet again. When the instances are healthy, adjust the Auto Scaling group to use the AZ1 subnet only. Update the current AZ2 subnet and assign the second half of the address space from the original AZ1 subnet. Create a new AZ3 subnet using half the original AZ2 subnet address space, then update the Auto Scaling group to target all three new subnets.
Suggested answer: A

Explanation:

https://repost.aws/knowledge-center/vpc-ip-address-range
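
Each phase of option A's subnet shuffle (a subnet's CIDR cannot be modified in place, which is why D is wrong) boils down to repointing the Auto Scaling group at a different subnet list. A sketch of the first and last steps, with hypothetical subnet IDs:

import boto3

autoscaling = boto3.client("autoscaling")

# Step 1: run from AZ2 only while the AZ1 subnet is deleted and
# re-created as 10.0.0.0/25; the group keeps serving, so no downtime.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="ops-workload-asg",
    VPCZoneIdentifier="subnet-0bbb2222cccc3333d",  # original AZ2 subnet
)

# ... intermediate phases from option A happen here ...

# Final state: three /25 subnets across AZ1, AZ2, and AZ3, all carved
# from the original /23 with no new address space.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="ops-workload-asg",
    VPCZoneIdentifier=(
        "subnet-0new1111aaaa2222b,"   # AZ1, 10.0.0.0/25
        "subnet-0new3333cccc4444d,"   # AZ2, 10.0.0.128/25
        "subnet-0new5555eeee6666f"    # AZ3, 10.0.1.0/25
    ),
)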

A company hosts a web application on AWS in the us-east-1 Region. The application servers are distributed across three Availability Zones behind an Application Load Balancer. The database is hosted in a MySQL database on an Amazon EC2 instance. A solutions architect needs to design a cross-Region data recovery solution using AWS services with an RTO of less than 5 minutes and an RPO of less than 1 minute. The solutions architect is deploying application servers in us-west-2 and has configured Amazon Route 53 health checks and DNS failover to us-west-2.

Which additional step should the solutions architect take?

A.
Migrate the database to an Amazon RDS for MySQL instance with a cross-Region read replica in us-west-2.
B.
Migrate the database to an Amazon Aurora global database with the primary in us-east-1 and the secondary in us-west-2.
C.
Migrate the database to an Amazon RDS for MySQL instance with a Multi-AZ deployment.
D.
Create a MySQL standby database on an Amazon EC2 instance in us-west-2.
Suggested answer: B

Explanation:

https://aws.amazon.com/rds/aurora/global-database/
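
A sketch of option B (cluster identifiers and ARNs are hypothetical): promote the regional Aurora cluster into a global database and add a secondary cluster in us-west-2. Aurora Global Database replication lag is typically under a second, which satisfies the sub-1-minute RPO, and a secondary can be promoted in minutes for the RTO.

import boto3

# Create the global database from the existing primary in us-east-1.
rds_east = boto3.client("rds", region_name="us-east-1")
rds_east.create_global_cluster(
    GlobalClusterIdentifier="webapp-global",
    SourceDBClusterIdentifier=(
        "arn:aws:rds:us-east-1:111122223333:cluster:webapp-primary"
    ),
)

# Add a secondary cluster in us-west-2 (engine and version must match
# the primary).
rds_west = boto3.client("rds", region_name="us-west-2")
rds_west.create_db_cluster(
    DBClusterIdentifier="webapp-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="webapp-global",
)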

A company implements a containerized application by using Amazon Elastic Container Service (Amazon ECS) and Amazon API Gateway. The application data is stored in Amazon Aurora databases and Amazon DynamoDB databases. The company automates infrastructure provisioning by using AWS CloudFormation. The company automates application deployment by using AWS CodePipeline.

A solutions architect needs to implement a disaster recovery (DR) strategy that meets an RPO of 2 hours and an RTO of 4 hours.

Which solution will meet these requirements MOST cost-effectively?

A.
Set up an Aurora global database and DynamoDB global tables to replicate the databases to a secondary AWS Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon CloudFront with origin failover to route traffic to the secondary Region during a DR scenario.
B.
Use AWS Database Migration Service (AWS DMS), Amazon EventBridge, and AWS Lambda to replicate the Aurora databases to a secondary AWS Region. Use DynamoDB Streams, EventBridge, and Lambda to replicate the DynamoDB databases to the secondary Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon Route 53 failover routing to switch traffic from the primary Region to the secondary Region.
C.
Use AWS Backup to create backups of the Aurora databases and the DynamoDB databases in a secondary AWS Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon Route 53 failover routing to switch traffic from the primary Region to the secondary Region.
D.
Set up an Aurora global database and DynamoDB global tables to replicate the databases to a secondary AWS Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon Route 53 failover routing to switch traffic from the primary Region to the secondary Region.
Suggested answer: C

Explanation:

https://aws.amazon.com/blogs/database/cost-effective-disaster-recovery-for-amazon-aurora-databases-using-aws-backup/
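
A sketch of option C (plan, vault names, and ARNs are hypothetical): one backup plan can cover both the Aurora clusters and the DynamoDB tables, with each rule copying recovery points to a vault in the secondary Region. Backing up every hour keeps the 2-hour RPO with margin, at far lower cost than the continuous cross-Region replication of a global database.

import boto3

backup = boto3.client("backup", region_name="us-east-1")

# Hypothetical plan and vault names for illustration.
backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "dr-plan",
        "Rules": [
            {
                "RuleName": "hourly-with-cross-region-copy",
                "TargetBackupVaultName": "primary-vault",
                "ScheduleExpression": "cron(0 * * * ? *)",  # hourly
                "Lifecycle": {"DeleteAfterDays": 7},
                "CopyActions": [
                    {
                        "DestinationBackupVaultArn": (
                            "arn:aws:backup:us-west-2:111122223333:"
                            "backup-vault:dr-vault"
                        )
                    }
                ],
            }
        ],
    }
)
# The Aurora clusters and DynamoDB tables are then attached to the
# plan with create_backup_selection, e.g. by tag.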

A company plans to deploy a new private intranet service on Amazon EC2 instances inside a VPC. An AWS Site-to-Site VPN connects the VPC to the company's on-premises network. The new service must communicate with existing on-premises services. The on-premises services are accessible through the use of hostnames that reside in the company.example DNS zone. This DNS zone is wholly hosted on premises and is available only on the company's private network.

A solutions architect must ensure that the new service can resolve hostnames on the company.example domain to integrate with existing services.

Which solution meets these requirements?

A.
Create an empty private zone in Amazon Route 53 for company.example. Add an additional NS record to the company's on-premises company.example zone that points to the authoritative name servers for the new private zone in Route 53.
B.
Turn on DNS hostnames for the VPC. Configure a new outbound endpoint with Amazon Route 53 Resolver. Create a Resolver rule to forward requests for company.example to the on-premises name servers.
C.
Turn on DNS hostnames for the VPC. Configure a new inbound resolver endpoint with Amazon Route 53 Resolver. Configure the on-premises DNS server to forward requests for company.example to the new resolver.
D.
Use AWS Systems Manager to configure a run document that will install a hosts file that contains any required hostnames. Use an Amazon EventBridge rule to run the document when an instance is entering the running state.
Suggested answer: B

Explanation:

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver.html
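
A sketch of option B (security group, subnet, VPC IDs, and the on-premises DNS IP are hypothetical): create an outbound Resolver endpoint in the VPC, then a forwarding rule that sends queries for company.example to the on-premises name servers over the Site-to-Site VPN, and associate the rule with the VPC.

import boto3

r53r = boto3.client("route53resolver")

# Hypothetical security group and subnets; an outbound endpoint needs
# IP addresses in at least two subnets.
endpoint = r53r.create_resolver_endpoint(
    CreatorRequestId="intranet-outbound-1",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    Direction="OUTBOUND",
    IpAddresses=[
        {"SubnetId": "subnet-0aaa1111bbbb2222c"},
        {"SubnetId": "subnet-0ddd3333eeee4444f"},
    ],
)

# Forward company.example queries to the on-premises name server.
rule = r53r.create_resolver_rule(
    CreatorRequestId="intranet-forward-1",
    RuleType="FORWARD",
    DomainName="company.example",
    TargetIps=[{"Ip": "10.200.0.10", "Port": 53}],
    ResolverEndpointId=endpoint["ResolverEndpoint"]["Id"],
)

# Associate the rule with the VPC so instances resolving
# *.company.example are forwarded on premises.
r53r.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    VPCId="vpc-0123456789abcdef0",
)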


A company is using Amazon API Gateway to deploy a private REST API that will provide access to sensitive data. The API must be accessible only from an application that is deployed in a VPC. The company deploys the API successfully. However, the API is not accessible from an Amazon EC2 instance that is deployed in the VPC.

Which solution will provide connectivity between the EC2 instance and the API?

A.
Create an interface VPC endpoint for API Gateway. Attach an endpoint policy that allows apigateway:* actions. Disable private DNS naming for the VPC endpoint. Configure an API resource policy that allows access from the VPC. Use the VPC endpoint's DNS name to access the API.
B.
Create an interface VPC endpoint for API Gateway. Attach an endpoint policy that allows the execute-api:Invoke action. Enable private DNS naming for the VPC endpoint. Configure an API resource policy that allows access from the VPC endpoint. Use the API endpoint's DNS names to access the API.
C.
Create a Network Load Balancer (NLB) and a VPC link. Configure private integration between API Gateway and the NLB. Use the API endpoint's DNS names to access the API.
D.
Create an Application Load Balancer (ALB) and a VPC link. Configure private integration between API Gateway and the ALB. Use the ALB endpoint's DNS name to access the API.
Suggested answer: B

Explanation:

According to the AWS documentation, to access a private API from a VPC, you need to do the following:

Create an interface VPC endpoint for API Gateway in your VPC. This creates a private connection between your VPC and API Gateway.

Attach an endpoint policy to the VPC endpoint that allows the execute-api:Invoke action for your private API. This grants permission to invoke your API from the VPC.

Enable private DNS naming for the VPC endpoint. This allows you to use the same DNS names for your private APIs as you would for public APIs.

Configure a resource policy for your private API that allows access from the VPC endpoint. This controls who can access your API and under what conditions.

Use the API endpoint's DNS names to access the API from your VPC. For example, https://api-id.execute-api.region.amazonaws.com/stage.
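
As an illustration of the resource-policy step (the VPC endpoint ID is hypothetical), a common pattern allows execute-api:Invoke broadly and then denies any request that did not arrive through the designated interface VPC endpoint:

import json

# Hypothetical VPC endpoint ID for illustration only.
VPCE_ID = "vpce-0123456789abcdef0"

resource_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
        },
        {
            # Reject any request that did not come through the
            # interface VPC endpoint.
            "Effect": "Deny",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
            "Condition": {"StringNotEquals": {"aws:SourceVpce": VPCE_ID}},
        },
    ],
}
print(json.dumps(resource_policy, indent=2))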

A company has developed a mobile game. The backend for the game runs on several virtual machines located in an on-premises data center. The business logic is exposed using a REST API with multiple functions. Player session data is stored in central file storage. Backend services use different API keys for throttling and to distinguish between live and test traffic.

The load on the game backend varies throughout the day. During peak hours, the server capacity is not sufficient. There are also latency issues when fetching player session data. Management has asked a solutions architect to present a cloud architecture that can handle the game's varying load and provide low-latency data access. The API model should not be changed.

Which solution meets these requirements?

A.
Implement the REST API using a Network Load Balancer (NLB). Run the business logic on an Amazon EC2 instance behind the NLB. Store player session data in Amazon Aurora Serverless.
B.
Implement the REST API using an Application Load Balancer (ALB). Run the business logic in AWS Lambda. Store player session data in Amazon DynamoDB with on-demand capacity.
C.
Implement the REST API using Amazon API Gateway. Run the business logic in AWS Lambda. Store player session data in Amazon DynamoDB with on-demand capacity.
D.
Implement the REST API using AWS AppSync. Run the business logic in AWS Lambda. Store player session data in Amazon Aurora Serverless.
Suggested answer: C
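
For the session-store half of option C, a sketch (table and key names are hypothetical): DynamoDB with on-demand capacity absorbs the game's load swings without capacity planning and serves low-latency reads for player sessions, while API Gateway preserves the REST model and supports API keys for throttling and for separating live and test traffic.

import boto3

dynamodb = boto3.client("dynamodb")

# Hypothetical table and key names. PAY_PER_REQUEST (on-demand) means
# no capacity planning for the highly variable daily load.
dynamodb.create_table(
    TableName="player-sessions",
    AttributeDefinitions=[
        {"AttributeName": "session_id", "AttributeType": "S"}
    ],
    KeySchema=[{"AttributeName": "session_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)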