
Amazon SAP-C01 Practice Test - Questions Answers, Page 91

A company owns a chain of travel agencies and is running an application in the AWS Cloud. Company employees use the application to search for information about travel destinations. Destination content is updated four times each year. Two fixed Amazon EC2 instances serve the application. The company uses an Amazon Route 53 public hosted zone with a multivalue record of travel.example.com that returns the Elastic IP addresses for the EC2 instances. The application uses Amazon DynamoDB as its primary data store. The company uses a self-hosted Redis instance as a caching solution. During content updates, the load on the EC2 instances and the caching solution increases drastically. This increased load has led to downtime on several occasions. A solutions architect must update the application so that the application is highly available and can handle the load that is generated by the content updates. Which solution will meet these requirements?

A.
Set up DynamoDB Accelerator (DAX) as an in-memory cache. Update the application to use DAX. Create an Auto Scaling group for the EC2 instances. Create an Application Load Balancer (ALB). Set the Auto Scaling group as a target for the ALB. Update the Route 53 record to use a simple routing policy that targets the ALB's DNS alias. Configure scheduled scaling for the EC2 instances before the content updates.
B.
Set up Amazon ElastiCache for Redis. Update the application to use ElastiCache. Create an Auto Scaling group for the EC2 instances. Create an Amazon CloudFront distribution, and set the Auto Scaling group as an origin for the distribution. Update the Route 53 record to use a simple routing policy that targets the CloudFront distribution's DNS alias. Manually scale up EC2 instances before the content updates.
C.
Set up Amazon ElastiCache for Memcached. Update the application to use ElastiCache. Create an Auto Scaling group for the EC2 instances. Create an Application Load Balancer (ALB). Set the Auto Scaling group as a target for the ALB. Update the Route 53 record to use a simple routing policy that targets the ALB's DNS alias. Configure scheduled scaling for the application before the content updates.
D.
Set up DynamoDB Accelerator (DAX) as an in-memory cache. Update the application to use DAX. Create an Auto Scaling group for the EC2 instances. Create an Amazon CloudFront distribution, and set the Auto Scaling group as an origin for the distribution. Update the Route 53 record to use a simple routing policy that targets the CloudFront distribution's DNS alias. Manually scale up EC2 instances before the content updates.
Suggested answer: A
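Option A's scheduled scaling step can be sketched with boto3. This is a minimal sketch: the group name, sizes, and start time are hypothetical, and the dict mirrors the parameters of the Auto Scaling `put_scheduled_update_group_action` call.

```python
from datetime import datetime, timezone

def build_scheduled_action(group_name, start, min_size, max_size, desired):
    """Build parameters for autoscaling put_scheduled_update_group_action,
    pre-warming the Auto Scaling group before a known content update."""
    return {
        "AutoScalingGroupName": group_name,
        "ScheduledActionName": "pre-content-update",
        "StartTime": start,          # UTC time shortly before the update window
        "MinSize": min_size,
        "MaxSize": max_size,
        "DesiredCapacity": desired,
    }

# Hypothetical example: scale out to 6 instances ahead of a content update.
params = build_scheduled_action(
    "travel-app-asg",
    datetime(2025, 1, 15, 5, 0, tzinfo=timezone.utc),
    min_size=4, max_size=8, desired=6,
)
# boto3.client("autoscaling").put_scheduled_update_group_action(**params)
```

Because content updates happen on a known schedule, a scheduled action avoids the lag of reactive scaling during the load spike.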

A company plans to deploy a new private intranet service on Amazon EC2 instances inside a VPC. An AWS Site-to-Site VPN connects the VPC to the company's on-premises network. The new service must communicate with existing on-premises services. The on-premises services are accessible through the use of hostnames that reside in the company.example DNS zone. This DNS zone is wholly hosted on premises and is available only on the company's private network. A solutions architect must ensure that the new service can resolve hostnames on the company.example domain to integrate with existing services. Which solution meets these requirements?

A.
Create an empty private zone in Amazon Route 53 for company.example. Add an additional NS record to the company's on-premises company.example zone that points to the authoritative name servers for the new private zone in Route 53.
B.
Turn on DNS hostnames for the VPC. Configure a new outbound endpoint with Amazon Route 53 Resolver. Create a Resolver rule to forward requests for company.example to the on-premises name servers.
C.
Turn on DNS hostnames for the VPC. Configure a new inbound resolver endpoint with Amazon Route 53 Resolver. Configure the on-premises DNS server to forward requests for company.example to the new resolver.
D.
Use AWS Systems Manager to configure a run document that will install a hosts file that contains any required hostnames. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to run the document when an instance is entering the running state.
Suggested answer: B
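Option B's forwarding rule can be sketched as the input to Route 53 Resolver's `create_resolver_rule` API. The outbound endpoint ID and on-premises DNS server IPs below are hypothetical.

```python
def build_forward_rule(name, outbound_endpoint_id, on_prem_dns_ips):
    """Build parameters for route53resolver create_resolver_rule: a FORWARD
    rule that sends queries for company.example to on-premises name servers."""
    return {
        "CreatorRequestId": name,  # idempotency token
        "Name": name,
        "RuleType": "FORWARD",
        "DomainName": "company.example",
        "ResolverEndpointId": outbound_endpoint_id,
        "TargetIps": [{"Ip": ip, "Port": 53} for ip in on_prem_dns_ips],
    }

rule = build_forward_rule(
    "company-example-forward",
    "rslvr-out-0123456789abcdef",   # hypothetical outbound endpoint ID
    ["10.10.0.10", "10.10.0.11"],   # hypothetical on-premises DNS servers
)
# boto3.client("route53resolver").create_resolver_rule(**rule)
```

After the rule is associated with the VPC, queries for company.example from the new service flow out through the outbound endpoint to the on-premises zone; all other names still resolve normally inside the VPC.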

A company that develops consumer electronics with offices in Europe and Asia has 60 TB of software images stored on premises in Europe. The company wants to transfer the images to an Amazon S3 bucket in the ap-northeast-1 Region. New software images are created daily and must be encrypted in transit. The company needs a solution that does not require custom development to automatically transfer all existing and new software images to Amazon S3. Which solution meets these requirements?

A.
Deploy an AWS DataSync agent and configure a task to transfer the images to the S3 bucket.
B.
Configure Amazon Kinesis Data Firehose to transfer the images using S3 Transfer Acceleration.
C.
Use an AWS Snowball device to transfer the images with the S3 bucket as the target.
D.
Transfer the images over a Site-to-Site VPN connection using the S3 API with multipart upload.
Suggested answer: A
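The DataSync task in option A can be sketched as the input to `create_task`; both location ARNs below are hypothetical. DataSync encrypts data in transit with TLS, and the task can run on a schedule to pick up newly created images without custom development.

```python
def build_datasync_task(source_location_arn, dest_location_arn):
    """Build parameters for datasync create_task, moving on-premises
    software images to the S3 location in ap-northeast-1."""
    return {
        "SourceLocationArn": source_location_arn,
        "DestinationLocationArn": dest_location_arn,
        "Name": "software-images-to-ap-northeast-1",
        "Options": {
            "VerifyMode": "ONLY_FILES_TRANSFERRED",  # checksum-verify what moved
            "TransferMode": "CHANGED",               # copy only new/changed files
        },
    }

task = build_datasync_task(
    "arn:aws:datasync:eu-west-1:111122223333:location/loc-src-example",
    "arn:aws:datasync:ap-northeast-1:111122223333:location/loc-dst-example",
)
# boto3.client("datasync").create_task(**task)
```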

A company processes environmental data. The company has set up sensors to provide a continuous stream of data from different areas in a city. The data is available in JSON format. The company wants to use an AWS solution to send the data to a database that does not require fixed schemas for storage. The data must be sent in real time. Which solution will meet these requirements?

A.
Use Amazon Kinesis Data Firehose to send the data to Amazon Redshift.
B.
Use Amazon Kinesis Data Streams to send the data to Amazon DynamoDB.
C.
Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to send the data to Amazon Aurora.
D.
Use Amazon Kinesis Data Firehose to send the data to Amazon Keyspaces (for Apache Cassandra).
Suggested answer: B
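The schemaless fit of option B can be illustrated by turning one JSON sensor reading (as a stream consumer would receive it from a Kinesis record payload) into a DynamoDB item. The attribute names here are hypothetical.

```python
import json

def to_dynamodb_item(record_payload):
    """Parse one JSON sensor reading and return a DynamoDB item dict.
    Only the key attributes are required; any other fields ride along,
    since DynamoDB enforces no fixed schema beyond the table keys."""
    reading = json.loads(record_payload)
    return {
        "sensor_id": reading.pop("sensor_id"),  # partition key
        "ts": reading.pop("ts"),                # sort key
        **reading,                              # remaining free-form fields
    }

item = to_dynamodb_item('{"sensor_id": "air-17", "ts": 1700000000, "pm25": 12.4}')
# A different sensor can send entirely different fields with no table change.
```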

A company is running a three-tier web application in an on-premises data center. The frontend is served by an Apache web server, the middle tier is a monolithic Java application, and the storage tier is a PostgreSQL database. During a recent marketing promotion, customers could not place orders through the application because the application crashed. An analysis showed that all three tiers were overloaded. The application became unresponsive, and the database reached its capacity limit because of read operations. The company already has several similar promotions scheduled in the near future. A solutions architect must develop a plan for migration to AWS to resolve these issues. The solution must maximize scalability and must minimize operational effort. Which combination of steps will meet these requirements? (Select THREE.)

A.
Refactor the frontend so that static assets can be hosted on Amazon S3. Use Amazon CloudFront to serve the frontend to customers. Connect the frontend to the Java application.
B.
Rehost the Apache web server of the frontend on Amazon EC2 instances that are in an Auto Scaling group. Use a load balancer in front of the Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) to host the static assets that the Apache web server needs.
C.
Rehost the Java application in an AWS Elastic Beanstalk environment that includes auto scaling.
D.
Refactor the Java application. Develop a Docker container to run the Java application. Use AWS Fargate to host the container.
E.
Use AWS Database Migration Service (AWS DMS) to replatform the PostgreSQL database to an Amazon Aurora PostgreSQL database. Use Aurora Auto Scaling for read replicas.
F.
Rehost the PostgreSQL database on an Amazon EC2 instance that has twice as much memory as the on-premises server.
Suggested answer: A, C, E
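Option E's read replica auto scaling can be sketched as a target-tracking policy registered with Application Auto Scaling; the cluster identifier below is hypothetical.

```python
def build_replica_scaling_policy(cluster_id, target_cpu=70.0):
    """Build parameters for application-autoscaling put_scaling_policy
    that adds or removes Aurora read replicas to hold the average
    reader CPU utilization near target_cpu."""
    return {
        "PolicyName": "aurora-reader-cpu",
        "ServiceNamespace": "rds",
        "ResourceId": f"cluster:{cluster_id}",
        "ScalableDimension": "rds:cluster:ReadReplicaCount",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": target_cpu,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
            },
        },
    }

policy = build_replica_scaling_policy("orders-aurora-cluster")
# boto3.client("application-autoscaling").put_scaling_policy(**policy)
```

A read-heavy promotion then adds replicas automatically and removes them afterward, addressing the database tier's read-capacity limit without manual intervention.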

A company recently started hosting new application workloads in the AWS Cloud. The company is using Amazon EC2 instances, Amazon Elastic File System (Amazon EFS) file systems, and Amazon RDS DB instances. To meet regulatory and business requirements, the company must make the following changes for data backups:

• Backups must be retained based on custom daily, weekly, and monthly requirements.

• Backups must be replicated to at least one other AWS Region immediately after capture.

• The backup solution must provide a single source of backup status across the AWS environment.

• The backup solution must send immediate notifications upon failure of any resource backup.

Which combination of steps will meet these requirements with the LEAST amount of operational overhead? (Select THREE.)

A.
Create an AWS Backup plan with a backup rule for each of the retention requirements.
B.
Configure an AWS Backup plan to copy backups to another Region.
C.
Create an AWS Lambda function to replicate backups to another Region and send notification if a failure occurs.
D.
Add an Amazon Simple Notification Service (Amazon SNS) topic to the backup plan to send a notification for finished jobs that have any status except BACKUP_JOB_COMPLETED.
E.
Create an Amazon Data Lifecycle Manager (Amazon DLM) snapshot lifecycle policy for each of the retention requirements.
F.
Set up RDS snapshots on each database.
Suggested answer: A, B, D
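Options A and B together can be sketched as a single AWS Backup plan: one rule per retention requirement, each with a copy action to a vault in another Region. The vault names, ARN, schedules, and retention periods below are hypothetical.

```python
def build_backup_plan(vault_name, copy_vault_arn):
    """Build the input for backup create_backup_plan with daily, weekly,
    and monthly rules, each replicating to another Region via CopyActions."""
    def rule(name, cron, keep_days):
        return {
            "RuleName": name,
            "TargetBackupVaultName": vault_name,
            "ScheduleExpression": cron,                  # when to take the backup
            "Lifecycle": {"DeleteAfterDays": keep_days}, # retention requirement
            "CopyActions": [{"DestinationBackupVaultArn": copy_vault_arn}],
        }
    return {"BackupPlan": {
        "BackupPlanName": "regulatory-backups",
        "Rules": [
            rule("daily",   "cron(0 5 * * ? *)",  35),
            rule("weekly",  "cron(0 5 ? * 1 *)",  90),
            rule("monthly", "cron(0 5 1 * ? *)", 365),
        ],
    }}

plan = build_backup_plan(
    "primary-vault",
    "arn:aws:backup:us-west-2:111122223333:backup-vault:replica-vault",
)
# boto3.client("backup").create_backup_plan(**plan)
```

Because AWS Backup covers EC2, EFS, and RDS in one plan, it also provides the required single view of backup status, which separate DLM policies and manual RDS snapshots would not.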
