Amazon SAP-C01 Practice Test - Questions Answers, Page 90

A company implements a containerized application by using Amazon Elastic Container Service (Amazon ECS) and Amazon API Gateway. The application data is stored in Amazon Aurora databases and Amazon DynamoDB databases. The company automates infrastructure provisioning by using AWS CloudFormation. The company automates application deployment by using AWS CodePipeline.

A solutions architect needs to implement a disaster recovery (DR) strategy that meets an RPO of 2 hours and an RTO of 4 hours. Which solution will meet these requirements MOST cost-effectively?

A.
Set up an Aurora global database and DynamoDB global tables to replicate the databases to a secondary AWS Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon CloudFront with origin failover to route traffic to the secondary Region during a DR scenario.
B.
Use AWS Database Migration Service (AWS DMS), Amazon EventBridge (Amazon CloudWatch Events), and AWS Lambda to replicate the Aurora databases to a secondary AWS Region. Use DynamoDB Streams, EventBridge (CloudWatch Events), and Lambda to replicate the DynamoDB databases to the secondary Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon Route 53 failover routing to switch traffic from the primary Region to the secondary Region.
C.
Use AWS Backup to create backups of the Aurora databases and the DynamoDB databases in a secondary AWS Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon Route 53 failover routing to switch traffic from the primary Region to the secondary Region.
D.
Set up an Aurora global database and DynamoDB global tables to replicate the databases to a secondary AWS Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon Route 53 failover routing to switch traffic from the primary Region to the secondary Region.
Suggested answer: C
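
The 2-hour RPO in option C maps naturally to an AWS Backup plan whose rule runs every 2 hours and copies each recovery point to a vault in the secondary Region. A hedged boto3 sketch; the vault names, role ARN, account IDs, and the dr=true resource tag are placeholders.

    import boto3

    backup = boto3.client("backup", region_name="us-east-1")

    # Backup plan: run every 2 hours and copy recovery points cross-Region.
    plan = backup.create_backup_plan(
        BackupPlan={
            "BackupPlanName": "dr-2h-rpo",
            "Rules": [
                {
                    "RuleName": "every-2-hours",
                    "TargetBackupVaultName": "primary-vault",
                    # Keeps the newest recovery point within the 2-hour RPO.
                    "ScheduleExpression": "cron(0 */2 * * ? *)",
                    "CopyActions": [
                        {
                            "DestinationBackupVaultArn": (
                                "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault"
                            )
                        }
                    ],
                }
            ],
        }
    )

    # Select the Aurora clusters and DynamoDB tables by a shared tag.
    backup.create_backup_selection(
        BackupPlanId=plan["BackupPlanId"],
        BackupSelection={
            "SelectionName": "dr-tagged-resources",
            "IamRoleArn": "arn:aws:iam::111122223333:role/aws-backup-service-role",
            "ListOfTags": [
                {"ConditionType": "STRINGEQUALS", "ConditionKey": "dr", "ConditionValue": "true"}
            ],
        },
    )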

A company is using a lift-and-shift strategy to migrate applications from several on-premises Windows servers to AWS. The Windows servers will be hosted on Amazon EC2 instances in the us-east-1 Region. The company's security policy allows the installation of migration tools on servers. The migration data must be encrypted in transit and encrypted at rest. The applications are business critical. The company needs to minimize the cutover window and minimize the downtime that results from the migration. The company wants to use Amazon CloudWatch and AWS CloudTrail for monitoring. Which solution will meet these requirements?

A.
Use AWS Application Migration Service (CloudEndure Migration) to migrate the Windows servers to AWS. Create a Replication Settings template. Install the AWS Replication Agent on the source servers.
B.
Use AWS DataSync to migrate the Windows servers to AWS. Install the DataSync agent on the source servers. Configure a blueprint for the target servers. Begin the replication process.
C.
Use AWS Server Migration Service (AWS SMS) to migrate the Windows servers to AWS. Install the SMS Connector on the source servers. Replicate the source servers to AWS. Convert the replicated volumes to AMIs to launch EC2 instances.
D.
Use AWS Migration Hub to migrate the Windows servers to AWS. Create a project in Migration Hub. Track the progress of server migration by using the built-in dashboard.
Suggested answer: A
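
CloudEndure Migration in option A has since been folded into AWS Application Migration Service (MGN). The replication settings template governs the staging area that receives continuous, block-level replication from the agents: setting ebsEncryption to DEFAULT keeps the staging volumes encrypted at rest, and agent traffic is encrypted in transit by the service. A minimal boto3 sketch with a placeholder subnet ID.

    import boto3

    mgn = boto3.client("mgn", region_name="us-east-1")

    # One-time initialization of the service in the target Region.
    mgn.initialize_service()

    # Template applied to every source server added afterward.
    mgn.create_replication_configuration_template(
        stagingAreaSubnetId="subnet-0abc1234def567890",  # placeholder subnet
        associateDefaultSecurityGroup=True,
        bandwidthThrottling=0,                           # 0 = no throttling
        createPublicIP=False,
        dataPlaneRouting="PRIVATE_IP",
        defaultLargeStagingDiskType="GP3",
        ebsEncryption="DEFAULT",                         # at-rest encryption
        replicationServerInstanceType="t3.small",
        replicationServersSecurityGroupsIDs=[],
        stagingAreaTags={"project": "windows-migration"},
        useDedicatedReplicationServer=False,
    )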

A company that uses AWS Organizations is creating several new AWS accounts. The company is setting up controls to properly allocate AWS costs to business units. The company must implement a solution to ensure that all resources include a tag that has a key of costcenter and a value from a predefined list of business units. The solution must send a notification each time a resource tag does not meet these criteria. The solution must not prevent the creation of resources.

Which solution will meet these requirements with the LEAST operational overhead?

A.
Create an IAM policy for all actions that create AWS resources. Add a condition to the policy that aws:RequestTag/costcenter must exist and must contain a valid business unit value. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that monitors IAM service events and Amazon EC2 service events for noncompliant tag policies. Configure the rule to send notifications through Amazon Simple Notification Service (Amazon SNS).
B.
Create an IAM policy for all actions that create AWS resources. Add a condition to the policy that aws:ResourceTag/costcenter must exist and must contain a valid business unit value. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that monitors IAM service events and Amazon EC2 service events for noncompliant tag policies. Configure the rule to send notifications through Amazon Simple Notification Service (Amazon SNS).
C.
Create an organization tag policy that ensures that all resources have the costcenter tag with a valid business unit value. Do not select the option to prevent operations when tags are noncompliant. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that monitors all events for noncompliant tag policies. Configure the rule to send notifications through Amazon Simple Notification Service (Amazon SNS).
D.
Create an organization tag policy that ensures that all resources have the costcenter tag with a valid business unit value. Select the option to prevent operations when tags are noncompliant. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that monitors all events for noncompliant tag policies. Configure the rule to send notifications through Amazon Simple Notification Service (Amazon SNS).
Suggested answer: C
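
An organization tag policy, as in the suggested answer, can describe the costcenter key and its allowed values without blocking resource creation. A hedged boto3 sketch; the business unit values and root ID are placeholders, and the organization must have all features enabled for tag policies to be available.

    import json
    import boto3

    org = boto3.client("organizations")

    # Tag policy content: costcenter must come from the approved list. With no
    # enforced_for section, noncompliance is reported but never blocks creation.
    tag_policy = {
        "tags": {
            "costcenter": {
                "tag_key": {"@@assign": "costcenter"},
                "tag_value": {"@@assign": ["finance", "marketing", "engineering"]},
            }
        }
    }

    response = org.create_policy(
        Name="costcenter-tagging",
        Description="Require a costcenter tag with an approved business unit",
        Type="TAG_POLICY",
        Content=json.dumps(tag_policy),
    )

    # Attach at the root so the policy reaches every account (placeholder ID).
    org.attach_policy(
        PolicyId=response["Policy"]["PolicySummary"]["Id"],
        TargetId="r-exampleroot1",
    )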

A flood monitoring agency has deployed more than 10,000 water-level monitoring sensors. Sensors send continuous data updates, and each update is less than 1 MB in size. The agency has a fleet of on-premises application servers. These servers receive updates from the sensors, convert the raw data into a human-readable format, and write the results to an on-premises relational database server. Data analysts then use simple SQL queries to monitor the data. The agency wants to increase overall application availability and reduce the effort that is required to perform maintenance tasks. These maintenance tasks, which include updates and patches to the application servers, cause downtime. While an application server is down, data is lost from sensors because the remaining servers cannot handle the entire workload. The agency wants a solution that optimizes operational overhead and costs. A solutions architect recommends the use of AWS IoT Core to collect the sensor data. What else should the solutions architect recommend to meet these requirements?

A.
Send the sensor data to Amazon Kinesis Data Firehose. Use an AWS Lambda function to read the Kinesis Data Firehose data, convert it to .csv format, and insert it into an Amazon Aurora MySQL DB instance. Instruct the data analysts to query the data directly from the DB instance.
B.
Send the sensor data to Amazon Kinesis Data Firehose. Use an AWS Lambda function to read the Kinesis Data Firehose data, convert it to Apache Parquet format, and save it to an Amazon S3 bucket. Instruct the data analysts to query the data by using Amazon Athena.
C.
Send the sensor data to an Amazon Kinesis Data Analytics application to convert the data to .csv format and store it in an Amazon S3 bucket. Import the data into an Amazon Aurora MySQL DB instance. Instruct the data analysts to query the data directly from the DB instance.
D.
Send the sensor data to an Amazon Kinesis Data Analytics application to convert the data to Apache Parquet format and store it in an Amazon S3 bucket. Instruct the data analysts to query the data by using Amazon Athena.
Suggested answer: B
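
With the Parquet objects in S3, the analysts keep writing plain SQL because Athena queries the bucket directly. A small boto3 sketch; the flood_monitoring database, sensor_readings table, column names, and results bucket are hypothetical.

    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    # Same monitoring-style SQL the analysts ran against the old database.
    query = athena.start_query_execution(
        QueryString="""
            SELECT sensor_id, max(water_level_cm) AS peak_level
            FROM sensor_readings
            WHERE reading_time > current_timestamp - interval '1' hour
            GROUP BY sensor_id
            ORDER BY peak_level DESC
            LIMIT 20
        """,
        QueryExecutionContext={"Database": "flood_monitoring"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    print(query["QueryExecutionId"])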

A company has IoT sensors that monitor traffic patterns throughout a large city. The company wants to read and collect data from the sensors and perform aggregations on the data. A solutions architect designs a solution in which the IoT devices are streaming to Amazon Kinesis Data Streams. Several applications are reading from the stream. However, several consumers are experiencing throttling and are periodically encountering a ReadProvisionedThroughputExceeded error.

Which actions should the solutions architect take to resolve this issue? (Select THREE.)

A.
Reshard the stream to increase the number of shards in the stream.
B.
Use the Kinesis Producer Library (KPL). Adjust the polling frequency.
C.
Use consumers with the enhanced fan-out feature.
D.
Reshard the stream to reduce the number of shards in the stream.
E.
Use an error retry and exponential backoff mechanism in the consumer logic.
F.
Configure the stream to use dynamic partitioning.
Suggested answer: A, C, E
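
A sketch of the three corrective actions in boto3, using a placeholder stream name: reshard upward (A), register an enhanced fan-out consumer, which receives its own dedicated throughput per shard (C), and retry throttled reads with exponential backoff (E).

    import time
    import boto3
    from botocore.exceptions import ClientError

    kinesis = boto3.client("kinesis", region_name="us-east-1")
    STREAM = "traffic-sensors"  # placeholder stream name

    # (A) Reshard upward so aggregate read capacity grows with the shard count.
    kinesis.update_shard_count(
        StreamName=STREAM, TargetShardCount=8, ScalingType="UNIFORM_SCALING"
    )

    # (C) Enhanced fan-out gives this consumer dedicated throughput instead of
    # sharing the 2 MB/s-per-shard limit with every other application.
    stream_arn = kinesis.describe_stream_summary(StreamName=STREAM)[
        "StreamDescriptionSummary"
    ]["StreamARN"]
    kinesis.register_stream_consumer(StreamARN=stream_arn, ConsumerName="aggregator-app")

    # (E) Exponential backoff for consumers that keep polling with GetRecords.
    def get_records_with_backoff(shard_iterator, max_retries=5):
        delay = 0.2
        for _ in range(max_retries):
            try:
                return kinesis.get_records(ShardIterator=shard_iterator, Limit=1000)
            except ClientError as err:
                if err.response["Error"]["Code"] != "ProvisionedThroughputExceededException":
                    raise
                time.sleep(delay)  # wait, then double the delay before retrying
                delay *= 2
        raise RuntimeError("still throttled after retries")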

A company is using multiple AWS accounts. The company has a shared services account and several other accounts for different projects. A team has a VPC in a project account. The team wants to connect this VPC to a corporate network through an AWS Direct Connect gateway that exists in the shared services account. The team wants to automatically perform a virtual private gateway association with the Direct Connect gateway by using an already-tested AWS Lambda function while deploying its VPC networking stack. The Lambda function code can assume a role by using AWS Security Token Service (AWS STS). The team is using AWS CloudFormation to deploy its infrastructure.

Which combination of steps will meet these requirements? (Select THREE.)

A.
Deploy the Lambda function to the project account. Update the Lambda function's IAM role with the directconnect:* permission.
B.
Create a cross-account IAM role in the shared services account that grants the Lambda function the directconnect:* permission. Add the sts:AssumeRole permission to the IAM role that is associated with the Lambda function in the shared services account.
C.
Add a custom resource to the CloudFormation networking stack that references the Lambda function in the project account.
D.
Deploy the Lambda function that is performing the association to the shared services account. Update the Lambda function's IAM role with the directconnect:* permission.
E.
Create a cross-account IAM role in the shared services account that grants the sts:AssumeRole permission to the Lambda function with the directconnect:* permission acting as a resource. Add the sts:AssumeRole permission with this cross-account IAM role as a resource to the IAM role that belongs to the Lambda function in the project account.
F.
Add a custom resource to the CloudFormation networking stack that references the Lambda function in the shared services account.
Suggested answer: B, C, E
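
Options C and E assume the Lambda function runs in the project account, assumes the cross-account role, and performs the association in the shared services account. A simplified sketch of that function body, with placeholder ARNs and IDs; a production CloudFormation custom resource must also post a SUCCESS or FAILED response to event["ResponseURL"], which is omitted here.

    import boto3

    # Cross-account role in the shared services account (placeholder ARN).
    SHARED_SERVICES_ROLE = "arn:aws:iam::999988887777:role/dx-association-role"

    def handler(event, context):
        # Assume the role that carries the Direct Connect permissions.
        creds = boto3.client("sts").assume_role(
            RoleArn=SHARED_SERVICES_ROLE,
            RoleSessionName="vgw-association",
        )["Credentials"]

        # Direct Connect client acting inside the shared services account.
        dx = boto3.client(
            "directconnect",
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds["SessionToken"],
        )

        # Associate the project VPC's virtual private gateway with the
        # Direct Connect gateway; both IDs arrive from the custom resource.
        dx.create_direct_connect_gateway_association(
            directConnectGatewayId=event["ResourceProperties"]["DxGatewayId"],
            virtualGatewayId=event["ResourceProperties"]["VpnGatewayId"],
        )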

A company is using a single AWS Region for its ecommerce website. The website includes a web application that runs on several Amazon EC2 instances behind an Application Load Balancer (ALB). The website also includes an Amazon DynamoDB table. A custom domain name in Amazon Route 53 is linked to the ALB. The company created an SSL/TLS certificate in AWS Certificate Manager (ACM) and attached the certificate to the ALB. The company is not using a content delivery network as part of its design.

The company wants to replicate its entire application stack in a second Region to provide disaster recovery, plan for future growth, and provide improved access time to users. A solutions architect needs to implement a solution that achieves these goals and minimizes administrative overhead.

Which combination of steps should the solutions architect take to meet these requirements? (Select THREE.)

A.
Create an AWS CloudFormation template for the current infrastructure design. Use parameters for important system values, including Region. Use the CloudFormation template to create the new infrastructure in the second Region.
B.
Use the AWS Management Console to document the existing infrastructure design in the first Region and to create the new infrastructure in the second Region.
C.
Update the Route 53 hosted zone record for the application to use weighted routing. Send 50% of the traffic to the ALB in each Region.
D.
Update the Route 53 hosted zone record for the application to use latency-based routing. Send traffic to the ALB in each Region.
E.
Update the configuration of the existing DynamoDB table by enabling DynamoDB Streams. Add the second Region to create a global table.
F.
Create a new DynamoDB table. Enable DynamoDB Streams for the new table. Add the second Region to create a global table. Copy the data from the existing DynamoDB table to the new table as a one-time operation.
Suggested answer: A, D, F
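
Option F follows the legacy (2017.11.29) global tables flow: each participating Region needs an empty table with the same key schema and DynamoDB Streams enabled with new-and-old images before the global table is created; the one-time copy from the existing table happens separately. A boto3 sketch with placeholder table and Region names, assuming the empty table already exists in both Regions.

    import boto3

    ddb = boto3.client("dynamodb", region_name="us-east-1")

    # Enable streams with both old and new images, a prerequisite for
    # legacy global table replication.
    ddb.update_table(
        TableName="ecommerce-orders-global",
        StreamSpecification={
            "StreamEnabled": True,
            "StreamViewType": "NEW_AND_OLD_IMAGES",
        },
    )

    # Join the regional tables into one global table.
    ddb.create_global_table(
        GlobalTableName="ecommerce-orders-global",
        ReplicationGroup=[
            {"RegionName": "us-east-1"},
            {"RegionName": "us-west-2"},
        ],
    )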

A company's solutions architect is reviewing a web application that runs on AWS. The application references static assets in an Amazon S3 bucket in the us-east-1 Region. The company needs resiliency across multiple AWS Regions. The company has already created an S3 bucket in a second Region.

Which solution will meet these requirements with the LEAST operational overhead?

A.
Configure the application to write each object to both S3 buckets. Set up an Amazon Route 53 public hosted zone with a record set by using a weighted routing policy for each S3 bucket. Configure the application to reference the objects by using the Route 53 DNS name.
B.
Create an AWS Lambda function to copy objects from the S3 bucket in us-east-1 to the S3 bucket in the second Region. Invoke the Lambda function each time an object is written to the S3 bucket in us-east-1. Set up an Amazon CloudFront distribution with an origin group that contains the two S3 buckets as origins.
C.
Configure replication on the S3 bucket in us-east-1 to replicate objects to the S3 bucket in the second Region. Set up an Amazon CloudFront distribution with an origin group that contains the two S3 buckets as origins.
D.
Configure replication on the S3 bucket in us-east-1 to replicate objects to the S3 bucket in the second Region. If failover is required, update the application code to load S3 objects from the S3 bucket in the second Region.
Suggested answer: C
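
Options C and D both start from a Cross-Region Replication rule on the us-east-1 bucket, which requires versioning on both buckets and an IAM role that S3 can assume. A hedged boto3 sketch with placeholder bucket names and role ARN.

    import boto3

    s3 = boto3.client("s3")

    # Replicate every new object to the standby bucket in the second Region.
    s3.put_bucket_replication(
        Bucket="example-assets-use1",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
            "Rules": [
                {
                    "ID": "replicate-all-assets",
                    "Status": "Enabled",
                    "Priority": 1,
                    "Filter": {},  # empty filter applies to the whole bucket
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {"Bucket": "arn:aws:s3:::example-assets-usw2"},
                }
            ],
        },
    )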

A company is running an application in the AWS Cloud. The application uses AWS Lambda functions and Amazon Elastic Container Service (Amazon ECS) containers that run with AWS Fargate technology as its primary compute. The load on the application is irregular. The application experiences long periods of no usage, followed by sudden and significant increases and decreases in traffic. The application is write-heavy and stores data in an Amazon Aurora MySQL database. The database runs on an Amazon RDS memory-optimized DB instance that is not able to handle the load. What is the MOST cost-effective way for the company to handle the sudden and significant changes in traffic?

A.
Add additional read replicas to the database. Purchase Instance Savings Plans and RDS Reserved Instances.
B.
Migrate the database to an Aurora multi-master DB cluster. Purchase Instance Savings Plans.
C.
Migrate the database to an Aurora global database. Purchase Compute Savings Plans and RDS Reserved Instances.
D.
Migrate the database to Aurora Serverless v1. Purchase Compute Savings Plans.
Suggested answer: D
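
The suggested answer maps to an Aurora Serverless v1 cluster, which scales capacity with load and can pause entirely during the long idle periods, so the company pays close to nothing between traffic spikes. A sketch with placeholder identifiers; the password belongs in AWS Secrets Manager in practice.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_cluster(
        DBClusterIdentifier="app-aurora-serverless",
        Engine="aurora-mysql",
        EngineMode="serverless",          # Aurora Serverless v1
        MasterUsername="admin",
        MasterUserPassword="REPLACE_ME",  # placeholder; use Secrets Manager
        ScalingConfiguration={
            "MinCapacity": 1,
            "MaxCapacity": 64,            # headroom for sudden write-heavy spikes
            "AutoPause": True,            # stop paying for compute when idle
            "SecondsUntilAutoPause": 600,
        },
    )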

A company is running its solution on AWS in a manually created VPC. The company is using AWS CloudFormation to provision other parts of the infrastructure. According to a new requirement, the company must manage all infrastructure in an automated way.

What should the company do to meet this new requirement with the LEAST effort?

A.
Create a new AWS Cloud Development Kit (AWS CDK) stack that strictly provisions the existing VPC resources and configuration. Use AWS CDK to import the VPC into the stack and to manage the VPC.
B.
Create a CloudFormation stack set that creates the VPC. Use the stack set to import the VPC into the stack.
C.
Create a new CloudFormation template that strictly provisions the existing VPC resources and configuration. From the CloudFormation console, create a new stack by importing the existing resources.
D.
Create a new CloudFormation template that creates the VPC. Use the AWS Serverless Application Model (AWS SAM) CLI to import the VPC.
Suggested answer: C
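
CloudFormation's resource import (option C) can be driven through the console or, as sketched below, through the API: a change set of type IMPORT maps each existing resource to a logical ID in a template that mirrors its live configuration, with DeletionPolicy: Retain on every imported resource. The VPC ID and the deliberately minimal template are placeholders.

    import textwrap
    import boto3

    cfn = boto3.client("cloudformation", region_name="us-east-1")

    # Template must describe the VPC exactly as it exists today.
    TEMPLATE = textwrap.dedent("""\
        AWSTemplateFormatVersion: '2010-09-09'
        Resources:
          ExistingVpc:
            Type: AWS::EC2::VPC
            DeletionPolicy: Retain
            Properties:
              CidrBlock: 10.0.0.0/16
        """)

    cfn.create_change_set(
        StackName="imported-network",
        ChangeSetName="import-existing-vpc",
        ChangeSetType="IMPORT",
        TemplateBody=TEMPLATE,
        ResourcesToImport=[
            {
                "ResourceType": "AWS::EC2::VPC",
                "LogicalResourceId": "ExistingVpc",
                "ResourceIdentifier": {"VpcId": "vpc-0abc1234def567890"},
            }
        ],
    )

    # Wait for the change set, then execute it to bring the VPC under management.
    cfn.get_waiter("change_set_create_complete").wait(
        ChangeSetName="import-existing-vpc", StackName="imported-network"
    )
    cfn.execute_change_set(
        ChangeSetName="import-existing-vpc", StackName="imported-network"
    )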