Amazon SAP-C01 Practice Test - Questions Answers, Page 66

A Solutions Architect is working with a company that operates a standard three-tier web application in AWS. The web and application tiers run on Amazon EC2 and the database tier runs on Amazon RDS. The company is redesigning the web and application tiers to use Amazon API Gateway and AWS Lambda, and the company intends to deploy the new application within 6 months. The IT Manager has asked the Solutions Architect to reduce costs in the interim. Which solution will be MOST cost-effective while maintaining reliability?

A. Use Spot Instances for the web tier, On-Demand Instances for the application tier, and Reserved Instances for the database tier.
B. Use On-Demand Instances for the web and application tiers, and Reserved Instances for the database tier.
C. Use Spot Instances for the web and application tiers, and Reserved Instances for the database tier.
D. Use Reserved Instances for the web, application, and database tiers.
Suggested answer: B

Amazon Elastic File System (EFS) provides information about the space used for an object through the space_used attribute of Network File System Version 4.1 (NFSv4.1). The attribute includes the object's current metered data size, not the metadata size. Which of the following utilities would you use to measure the disk usage of a file?

A. blkid utility
B. du utility
C. sfdisk utility
D. pydf utility
Suggested answer: B

Explanation:

Amazon EFS reports file system sizes and sizes of objects within a file system. When the NFSv4.1 space_used attribute is used to measure the space used for an object, it reports only the object's current metered data size, not the metadata size.

There are two utilities available for measuring the disk usage of a file: the du and stat utilities.

Reference:

https://docs.aws.amazon.com/efs/latest/ug/metered-sizes.html
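
As an illustration, du's number differs from the apparent file length because it counts allocated blocks. Here is a minimal Python sketch of the same measurement, assuming a hypothetical file path on an EFS mount:

```python
import os

def disk_usage_bytes(path: str) -> int:
    """Approximate what `du` reports: allocated size, not apparent length."""
    st = os.stat(path)
    # On Linux, st_blocks counts 512-byte blocks actually allocated to the
    # file, which is what du sums, as opposed to the logical length st_size.
    return st.st_blocks * 512

path = "/mnt/efs/report.csv"  # hypothetical file on an EFS mount
print("apparent size:", os.stat(path).st_size, "bytes")
print("disk usage:   ", disk_usage_bytes(path), "bytes")
```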

A company is launching a web-based application in multiple regions around the world. The application consists of both static content stored in a private Amazon S3 bucket and dynamic content hosted in Amazon ECS containers behind an Application Load Balancer (ALB). The company requires that the static and dynamic application content be accessible through Amazon CloudFront only. Which combination of steps should a solutions architect recommend so that the content can be accessed only through CloudFront? (Choose three.)

A. Create a web ACL in AWS WAF with a rule to validate the presence of a custom header and associate the web ACL with the ALB.
B. Create a web ACL in AWS WAF with a rule to validate the presence of a custom header and associate the web ACL with the CloudFront distribution.
C. Configure CloudFront to add a custom header to origin requests.
D. Configure the ALB to add a custom header to HTTP requests.
E. Update the S3 bucket ACL to allow access from the CloudFront distribution only.
F. Create a CloudFront Origin Access Identity (OAI) and add it to the CloudFront distribution. Update the S3 bucket policy to allow access to the OAI only.
Suggested answer: A, C, F
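
CloudFront injects a secret custom header on requests it forwards to the origin (C), the AWS WAF web ACL on the ALB rejects requests that lack that header (A), and an OAI restricts the S3 origin (F). The S3 side reduces to a bucket policy; a minimal boto3 sketch, with the bucket name and OAI ID as hypothetical placeholders:

```python
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "example-static-content"  # hypothetical bucket name
OAI_ID = "E2EXAMPLEOAIID"          # hypothetical Origin Access Identity ID

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {OAI_ID}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}

# Note: this replaces any existing bucket policy, so merge statements first
# if the bucket already has one.
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```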

Which of the following cannot be used to manage Amazon ElastiCache and perform administrative tasks?

A. AWS software development kits (SDKs)
B. Amazon S3
C. ElastiCache command line interface (CLI)
D. Amazon CloudWatch
Suggested answer: D

Explanation:

Amazon CloudWatch is a monitoring service; it does not give users the ability to manage Amazon ElastiCache.

Reference: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/WhatIs.Managing.html
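
The SDKs and the CLI, by contrast, expose the full set of administrative operations. A minimal boto3 sketch of one such task, listing clusters and their status:

```python
import boto3

# The SDKs (such as boto3) and the ElastiCache CLI can perform administrative
# tasks; CloudWatch only surfaces metrics about existing clusters.
elasticache = boto3.client("elasticache")

# List clusters, including per-node details.
response = elasticache.describe_cache_clusters(ShowCacheNodeInfo=True)
for cluster in response["CacheClusters"]:
    print(cluster["CacheClusterId"], cluster["CacheClusterStatus"])
```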

A company is serving files to its customers through an SFTP server that is accessible over the Internet. The SFTP server runs on a single Amazon EC2 instance with an Elastic IP address attached. Customers connect to the SFTP server through its Elastic IP address and use SSH for authentication. The EC2 instance also has an attached security group that allows access from all customer IP addresses. A solutions architect must implement a solution to improve availability, minimize the complexity of infrastructure management, and minimize the disruption to customers who access files. The solution must not change the way customers connect.

Which solution will meet these requirements?

A. Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS Transfer Family server. Configure the Transfer Family server with a publicly accessible endpoint. Associate the SFTP Elastic IP address with the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all files from the SFTP server to the S3 bucket.
B. Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS Transfer Family server. Configure the Transfer Family server with a VPC-hosted, Internet-facing endpoint. Associate the SFTP Elastic IP address with the new endpoint. Attach the security group with customer IP addresses to the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all files from the SFTP server to the S3 bucket.
C. Disassociate the Elastic IP address from the EC2 instance. Create a new Amazon Elastic File System (Amazon EFS) file system to be used for SFTP file hosting. Create an AWS Fargate task definition to run an SFTP server. Specify the EFS file system as a mount in the task definition. Create a Fargate service by using the task definition, and place a Network Load Balancer (NLB) in front of the service. When configuring the service, attach the security group with customer IP addresses to the tasks that run the SFTP server. Associate the Elastic IP address with the NLB. Sync all files from the SFTP server to the S3 bucket.
D. Disassociate the Elastic IP address from the EC2 instance. Create a multi-attach Amazon Elastic Block Store (Amazon EBS) volume to be used for SFTP file hosting. Create a Network Load Balancer (NLB) with the Elastic IP address attached. Create an Auto Scaling group with EC2 instances that run an SFTP server. Define in the Auto Scaling group that instances that are launched should attach the new multi-attach EBS volume. Configure the Auto Scaling group to automatically add instances behind the NLB. Configure the Auto Scaling group to use the security group that allows customer IP addresses for the EC2 instances that the Auto Scaling group launches. Sync all files from the SFTP server to the new multi-attach EBS volume.
Suggested answer: B
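
Option B maps to a single AWS Transfer Family API call; a minimal boto3 sketch, with every resource ID a hypothetical placeholder:

```python
import boto3

transfer = boto3.client("transfer")

# VPC-hosted, Internet-facing SFTP endpoint that reuses the existing
# Elastic IP and the customer-IP security group.
response = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",                       # serve files from Amazon S3
    IdentityProviderType="SERVICE_MANAGED",
    EndpointType="VPC",
    EndpointDetails={
        "VpcId": "vpc-0123456789abcdef0",
        "SubnetIds": ["subnet-0123456789abcdef0"],
        "AddressAllocationIds": ["eipalloc-0123456789abcdef0"],  # the SFTP Elastic IP
        "SecurityGroupIds": ["sg-0123456789abcdef0"],            # customer IP allow list
    },
)
print(response["ServerId"])
```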


ABC has three separate departments, and each department has its own AWS account. The HR department has created a file sharing site where data about all on-roll employees is uploaded. The Admin department uploads data about employee attendance in the office to its database hosted in its VPC. The Finance department needs to access data from the HR department to identify the on-roll employees, and data from the Admin department to calculate salaries based on the number of days each employee was present in the office.

How can ABC set up this scenario?

A. It is not possible to configure VPC peering since each department has a separate AWS account.
B. Set up VPC peering for the VPCs of Admin and Finance.
C. Set up VPC peering for the VPCs of Finance and HR, as well as between the VPCs of Finance and Admin.
D. Set up VPC peering for the VPCs of Admin and HR.
Suggested answer: C

Explanation:

A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. It enables the user to launch AWS resources into a virtual network that the user has defined. A VPC peering connection allows the user to route traffic between the peer VPCs using private IP addresses as if they are a part of the same network. This is helpful when one VPC from the same or different AWS account wants to connect with resources of the other VPC.

Reference: http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/peering-configurations-full-access.html#three-vpcs-fullaccess
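
In practice, the Finance account requests a peering connection to each peer VPC and the owning account accepts it. A minimal boto3 sketch for the Finance-HR pair, assuming hypothetical profile names, VPC IDs, and account ID:

```python
import boto3

# Assumes two hypothetical named profiles, one per department's account.
finance_ec2 = boto3.Session(profile_name="finance").client("ec2")
hr_ec2 = boto3.Session(profile_name="hr").client("ec2")

# Finance requests a peering connection to the HR VPC in the HR account.
peering = finance_ec2.create_vpc_peering_connection(
    VpcId="vpc-1111aaaa",        # Finance VPC (hypothetical)
    PeerVpcId="vpc-2222bbbb",    # HR VPC (hypothetical)
    PeerOwnerId="111122223333",  # HR account ID (hypothetical)
)["VpcPeeringConnection"]

# The owner of the peer VPC accepts the request from its own account.
hr_ec2.accept_vpc_peering_connection(
    VpcPeeringConnectionId=peering["VpcPeeringConnectionId"]
)
```

The same request-and-accept exchange is then repeated for the Finance-Admin pair.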

An organization has a VPC for the HR department and another VPC for the Admin department. The HR department requires access to all the instances running in the Admin VPC, while the Admin department requires access to all the resources in the HR VPC.

How can the organization set up this scenario?

A. Set up VPC peering between the VPCs of Admin and HR.
B. Set up an ACL in both VPCs that allows traffic from the CIDR of the other VPC.
C. Set up a security group in each VPC that allows traffic from the CIDR of the other VPC.
D. It is not possible to connect to resources of one VPC from another VPC.
Suggested answer: A

Explanation:

A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. It enables the user to launch AWS resources into a virtual network that the user has defined. A VPC peering connection allows the user to route traffic between the peer VPCs using private IP addresses as if they are a part of the same network. This is helpful when one VPC from the same or different AWS account wants to connect with resources of the other VPC.

Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html
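
Peering alone does not carry traffic: each VPC's route tables also need a route to the other VPC's CIDR block through the peering connection (and security groups must allow the traffic). A sketch with hypothetical route table IDs and CIDR blocks:

```python
import boto3

ec2 = boto3.client("ec2")

PEERING_ID = "pcx-0123456789abcdef0"  # hypothetical peering connection ID

# In the HR VPC's route table, send Admin-bound traffic over the peering link.
ec2.create_route(
    RouteTableId="rtb-1111aaaa",         # HR route table (hypothetical)
    DestinationCidrBlock="10.1.0.0/16",  # Admin VPC CIDR (hypothetical)
    VpcPeeringConnectionId=PEERING_ID,
)

# And the reverse route in the Admin VPC's route table.
ec2.create_route(
    RouteTableId="rtb-2222bbbb",         # Admin route table (hypothetical)
    DestinationCidrBlock="10.0.0.0/16",  # HR VPC CIDR (hypothetical)
    VpcPeeringConnectionId=PEERING_ID,
)
```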

A company wants to run a serverless application on AWS. The company plans to provision its application in Docker containers running in an Amazon ECS cluster. The application requires a MySQL database, and the company plans to use Amazon RDS. The company has documents that need to be accessed frequently for the first 3 months, and rarely after that. The documents must be retained for 7 years.

What is the MOST cost-effective solution to meet these requirements?

A. Create an ECS cluster using On-Demand Instances. Provision the database and its read replicas in Amazon RDS using Spot Instances. Store the documents in an encrypted EBS volume, and create a cron job to delete the documents after 7 years.
B. Create an ECS cluster using a fleet of Spot Instances, with Spot Instance draining enabled. Provision the database and its read replicas in Amazon RDS using Reserved Instances. Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move the documents that are older than 3 months to Amazon S3 Glacier, then delete the documents from Amazon S3 Glacier that are more than 7 years old.
C. Create an ECS cluster using On-Demand Instances. Provision the database and its read replicas in Amazon RDS using On-Demand Instances. Store the documents in Amazon EFS. Create a cron job to move the documents that are older than 3 months to Amazon S3 Glacier. Create an AWS Lambda function to delete the documents in S3 Glacier that are older than 7 years.
D. Create an ECS cluster using a fleet of Spot Instances with Spot Instance draining enabled. Provision the database and its read replicas in Amazon RDS using On-Demand Instances. Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move the documents that are older than 3 months to Amazon S3 Glacier, then delete the documents in Amazon S3 Glacier after 7 years.
Suggested answer: B
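
The document-retention part of option B reduces to one S3 lifecycle configuration; a minimal boto3 sketch, assuming a hypothetical bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Transition documents to S3 Glacier after 3 months and expire them
# after 7 years.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-document-archive",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 2555},  # roughly 7 years
        }],
    },
)
```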

A user is trying to send custom metrics to CloudWatch using the PutMetricData API. Which of the following points should the user take care of while sending the data to CloudWatch?

A. The size of a request is limited to 8KB for HTTP GET requests and 40KB for HTTP POST requests
B. The size of a request is limited to 16KB for HTTP GET requests and 80KB for HTTP POST requests
C. The size of a request is limited to 128KB for HTTP GET requests and 64KB for HTTP POST requests
D. The size of a request is limited to 40KB for HTTP GET requests and 8KB for HTTP POST requests
Suggested answer: A

Explanation:

With Amazon CloudWatch, the user can publish data points for a metric that share not only the same time stamp, but also the same namespace and dimensions. CloudWatch can accept multiple data points in the same PutMetricData call with the same time stamp. The only constraint the user needs to keep in mind is that the size of a PutMetricData request is limited to 8KB for HTTP GET requests and 40KB for HTTP POST requests.

Reference: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html
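
A minimal boto3 sketch of a PutMetricData call that batches two data points sharing one timestamp, with the namespace and metric names as hypothetical examples:

```python
import boto3
from datetime import datetime, timezone

cloudwatch = boto3.client("cloudwatch")

# Several data points can share one PutMetricData call (and one timestamp)
# as long as the whole request stays under the size limits.
now = datetime.now(timezone.utc)
cloudwatch.put_metric_data(
    Namespace="Custom/OrderService",
    MetricData=[
        {"MetricName": "OrdersProcessed", "Timestamp": now,
         "Value": 42, "Unit": "Count"},
        {"MetricName": "OrderLatency", "Timestamp": now,
         "Value": 118.0, "Unit": "Milliseconds"},
    ],
)
```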

A financial services company has an on-premises environment that ingests market data feeds from stock exchanges, transforms the data, and sends the data to an internal Apache Kafka cluster. Management wants to leverage AWS services to build a scalable and near-real-time solution with consistent network performance to provide stock market data to a web application. Which steps should a solutions architect take to build the solution? (Choose three.)

A. Establish an AWS Direct Connect connection from the on-premises data center to AWS.
B. Create an Amazon EC2 Auto Scaling group to pull the messages from the on-premises Kafka cluster and use the Amazon Consumer Library to put the data into an Amazon Kinesis data stream.
C. Create an Amazon EC2 Auto Scaling group to pull the messages from the on-premises Kafka cluster and use the Amazon Kinesis Producer Library to put the data into a Kinesis data stream.
D. Create a WebSocket API in Amazon API Gateway, create an AWS Lambda function to process an Amazon Kinesis data stream, and use the @connections command to send callback messages to connected clients.
E. Create a GraphQL API in AWS AppSync, create an AWS Lambda function to process the Amazon Kinesis data stream, and use the @connections command to send callback messages to connected clients.
F. Establish a Site-to-Site VPN from the on-premises data center to AWS.
Suggested answer: A, C, D
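
Direct Connect (A) provides the consistent network performance, the Kinesis Producer Library (C) moves Kafka messages into the data stream, and an API Gateway WebSocket API (D) pushes data to clients; the @connections callback belongs to API Gateway, not AppSync, which rules out E. A minimal sketch of that callback from the Lambda consumer, with the endpoint URL and connection ID as hypothetical placeholders:

```python
import boto3

# Inside the Lambda function that processes the Kinesis stream, callback
# messages go to connected WebSocket clients through the API Gateway
# Management API (the @connections endpoint).
apigw = boto3.client(
    "apigatewaymanagementapi",
    endpoint_url="https://abc123.execute-api.us-east-1.amazonaws.com/prod",
)

apigw.post_to_connection(
    ConnectionId="Jq7dEc0IAMCJrw=",  # hypothetical WebSocket connection ID
    Data=b'{"symbol": "AMZN", "price": 178.25}',
)
```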