
Amazon SAP-C01 Practice Test - Questions Answers, Page 17

A company has created an OU in AWS Organizations for each of its engineering teams. Each OU owns multiple AWS accounts. The organization has hundreds of AWS accounts. A solutions architect must design a solution so that each OU can view a breakdown of usage costs across its AWS accounts. Which solution meets these requirements?

A. Create an AWS Cost and Usage Report (CUR) for each OU by using AWS Resource Access Manager. Allow each team to visualize the CUR through an Amazon QuickSight dashboard.
B. Create an AWS Cost and Usage Report (CUR) from the AWS Organizations management account. Allow each team to visualize the CUR through an Amazon QuickSight dashboard.
C. Create an AWS Cost and Usage Report (CUR) in each AWS Organizations member account. Allow each team to visualize the CUR through an Amazon QuickSight dashboard.
D. Create an AWS Cost and Usage Report (CUR) by using AWS Systems Manager. Allow each team to visualize the CUR through Systems Manager OpsCenter dashboards.
Suggested answer: B

Explanation:

Reference: https://aws.amazon.com/premiumsupport/knowledge-center/quicksight-cost-usage-report/
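For illustration, a minimal boto3 sketch of how the CUR could be defined once from the management account so that it covers every member account (the report name, bucket, and prefix are hypothetical placeholders, and the target bucket must already grant write access to the billing reports service):

```python
import boto3

# The Cost and Usage Report API is served from us-east-1 and must be
# called from the Organizations management (payer) account, which sees
# usage for every member account.
cur = boto3.client("cur", region_name="us-east-1")

cur.put_report_definition(
    ReportDefinition={
        "ReportName": "org-wide-cur",        # hypothetical name
        "TimeUnit": "DAILY",
        "Format": "Parquet",
        "Compression": "Parquet",
        "AdditionalSchemaElements": ["RESOURCES"],
        "S3Bucket": "example-cur-bucket",    # hypothetical bucket
        "S3Prefix": "cur/",
        "S3Region": "us-east-1",
        "AdditionalArtifacts": ["ATHENA"],
        "RefreshClosedReports": True,
        "ReportVersioning": "OVERWRITE_REPORT",
    }
)
```

Each OU can then slice the single report by linked account ID (for example, in a QuickSight dashboard) rather than maintaining a report per account.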

An organization has set up RDS within a VPC. The organization wants RDS to be accessible from the internet. Which of the below mentioned configurations is not required in this scenario?

A. The organization must enable the parameter in the console which makes the RDS instance publicly accessible.
B. The organization must allow access from the internet in the RDS VPC security group.
C. The organization must set up RDS with a subnet group that has an external IP.
D. The organization must enable the VPC attributes DNS hostnames and DNS resolution.
Suggested answer: C

Explanation:

A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. It enables the user to launch AWS resources, such as RDS, into a virtual network that the user has defined. Subnets are segments of a VPC's IP address range that the user can designate to a group of VPC resources based on security and operational needs. A DB subnet group is a collection of subnets (generally private) that the user can create in a VPC and assign to RDS DB instances. A DB subnet group allows the user to specify a particular VPC when creating DB instances. If the RDS instance is required to be accessible from the internet:

The organization must enable the VPC attributes DNS hostnames and DNS resolution.
The organization must enable the parameter in the console which makes the RDS instance publicly accessible.
The organization must allow access from the internet in the RDS VPC security group.

A subnet group with an external IP is not a valid configuration, so option C is the one that is not required.

Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html
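As an illustration of the three settings that are required (all resource identifiers below are hypothetical), a boto3 sketch might look like this:

```python
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# 1. Enable the VPC attributes DNS resolution and DNS hostnames
#    (modify_vpc_attribute accepts one attribute per call).
ec2.modify_vpc_attribute(VpcId="vpc-0example", EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId="vpc-0example", EnableDnsHostnames={"Value": True})

# 2. Make the DB instance publicly accessible.
rds.modify_db_instance(
    DBInstanceIdentifier="example-db",
    PubliclyAccessible=True,
    ApplyImmediately=True,
)

# 3. Allow inbound traffic from the internet in the DB security group
#    (MySQL's port 3306 is used here purely as an example).
ec2.authorize_security_group_ingress(
    GroupId="sg-0example",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```

Note that nothing in this sequence involves giving the subnet group an external IP, which is why option C is the configuration that is not required.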

A user is creating a snapshot of an EBS volume. Which of the below statements is incorrect in relation to the creation of an EBS snapshot?

A. It is incremental
B. It is a point in time backup of the EBS volume
C. It can be used to create an AMI
D. It is stored in the same AZ as the volume
Suggested answer: D

Explanation:

An EBS snapshot is a point-in-time backup of the EBS volume. Snapshots are incremental and are stored at the Region level (in Amazon S3), never in a single AZ. Hence the statement "It is stored in the same AZ as the volume" is incorrect.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html
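A short boto3 sketch of the lifecycle described above (the volume ID and AMI details are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a point-in-time, incremental snapshot; it is stored at the
# Region level, not in the volume's Availability Zone.
snap = ec2.create_snapshot(
    VolumeId="vol-0example",
    Description="example point-in-time backup",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# The snapshot can then back the root device of a new AMI.
ec2.register_image(
    Name="example-ami-from-snapshot",
    RootDeviceName="/dev/xvda",
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"SnapshotId": snap["SnapshotId"]},
    }],
    VirtualizationType="hvm",
)
```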

You want to use Amazon Redshift and you are planning to deploy dw1.8xlarge nodes. What is the minimum number of nodes that you need to deploy with this kind of configuration?

A. 1
B. 4
C. 3
D. 2
Suggested answer: D

Explanation:

In Amazon Redshift, a single-node configuration is available only for the smaller node sizes. The 8XL extra-large (dw1.8xlarge) nodes are available only in a multi-node configuration, which requires a minimum of two nodes.

Reference: http://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html
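A hedged boto3 sketch of the smallest valid cluster for this node type (dw1 is a legacy dense-storage family, so current accounts may need a newer node type; the identifier and credentials are placeholders):

```python
import boto3

redshift = boto3.client("redshift")

# 8XL nodes are only offered in multi-node clusters, so NumberOfNodes
# must be at least 2.
redshift.create_cluster(
    ClusterIdentifier="example-cluster",
    NodeType="dw1.8xlarge",
    ClusterType="multi-node",
    NumberOfNodes=2,
    MasterUsername="admin",
    MasterUserPassword="ExamplePassw0rd!",  # placeholder; store real secrets securely
    DBName="dev",
)
```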

A company is operating a large customer service call center, and stores and processes call recordings with a custom application. Approximately 2% of the call recordings are transcribed by an offshore team for quality assurance purposes. These recordings take up to 72 hours to be transcribed. The recordings are stored on an NFS share before they are archived to an offsite location after 90 days. The company uses Linux servers for processing the call recordings and managing the transcription queue. There is also a web application for the quality assurance staff to review and score call recordings. The company plans to migrate the system to AWS to reduce storage costs and the time required to transcribe calls. Which set of actions should be taken to meet the company’s objectives?

A. Upload the call recordings to Amazon S3 from the call center. Set up an S3 lifecycle policy to move the call recordings to Amazon S3 Glacier after 90 days. Use an AWS Lambda trigger to transcribe the call recordings with Amazon Transcribe. Use Amazon S3, Amazon API Gateway, and Lambda to host the review and scoring application.
B. Upload the call recordings to Amazon S3 from the call center. Set up an S3 lifecycle policy to move the call recordings to Amazon S3 Glacier after 90 days. Use an AWS Lambda trigger to transcribe the call recordings with Amazon Mechanical Turk. Use Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer to host the review and scoring application.
C. Use Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer to host the review and scoring application. Upload the call recordings to this application from the call center and store them on an Amazon EFS mount point. Use AWS Backup to archive the call recordings after 90 days. Transcribe the call recordings with Amazon Transcribe.
D. Upload the call recordings to Amazon S3 from the call center and put the object key in an Amazon SQS queue. Set up an S3 lifecycle policy to move the call recordings to Amazon S3 Glacier after 90 days. Use Amazon EC2 instances in an Auto Scaling group to send the recordings to Amazon Mechanical Turk for transcription. Use the number of objects in the queue as the scaling metric. Use Amazon S3, Amazon API Gateway, and AWS Lambda to host the review and scoring application.
Suggested answer: A
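No explanation is given for this question, but the two pieces that distinguish answer A, a lifecycle transition to S3 Glacier and an S3-triggered Lambda function that calls Amazon Transcribe, can be sketched with boto3 as follows (the bucket, prefix, and media format are assumptions):

```python
import boto3

s3 = boto3.client("s3")

# Lifecycle rule: archive call recordings to S3 Glacier after 90 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-call-recordings",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-90-days",
            "Status": "Enabled",
            "Filter": {"Prefix": "recordings/"},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }]
    },
)

# Handler for an S3-triggered Lambda function: start an Amazon
# Transcribe job for each newly uploaded recording.
def handler(event, context):
    transcribe = boto3.client("transcribe")
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        transcribe.start_transcription_job(
            TranscriptionJobName=key.replace("/", "-"),
            Media={"MediaFileUri": f"s3://{bucket}/{key}"},
            MediaFormat="wav",  # assumption about the recording format
            LanguageCode="en-US",
        )
```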

A company is developing a messaging application that is based on a microservices architecture. A separate team develops each microservice by using Amazon Elastic Container Service (Amazon ECS). The teams deploy the microservices multiple times daily by using AWS CloudFormation and AWS CodePipeline.

The application recently grew in size and complexity. Each service operates correctly on its own during development, but each service produces error messages when it has to interact with other services in production. A solutions architect must improve the application’s availability.

Which solution will meet these requirements with the LEAST amount of operational overhead?

A. Add an extra stage to CodePipeline for each service. Use the extra stage to deploy each service to a test environment. Test each service after deployment to make sure that no error messages occur.
B. Add an AWS::CodeDeployBlueGreen Transform section and Hook section to the template to enable blue/green deployments by using AWS CodeDeploy in CloudFormation. Configure the template to perform ECS blue/green deployments in production.
C. Add an extra stage to CodePipeline for each service. Use the extra stage to deploy each service to a test environment. Write integration tests for each service. Run the tests automatically after deployment.
D. Use an ECS DeploymentConfiguration parameter in the template to configure AWS CodeDeploy to perform a rolling update of the service. Use a CircuitBreaker property to roll back the deployment if any error occurs during deployment.
Suggested answer: A

Explanation:

Reference: https://aws.amazon.com/blogs/devops/using-aws-codepipeline-for-deploying-container-images-to-microservices-architecture-involving-aws-lambda-functions/
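As a rough illustration of the suggested approach (answer A), a test-environment stage can be spliced into an existing pipeline; all stage, stack, artifact, and role names below are hypothetical:

```python
import boto3

cp = boto3.client("codepipeline")

# Fetch the current pipeline definition, insert a test-deployment stage
# before the production stage, and push the update back.
pipeline = cp.get_pipeline(name="example-service-pipeline")["pipeline"]

test_stage = {
    "name": "DeployToTest",
    "actions": [{
        "name": "DeployTestStack",
        "actionTypeId": {
            "category": "Deploy",
            "owner": "AWS",
            "provider": "CloudFormation",
            "version": "1",
        },
        "configuration": {
            "ActionMode": "CREATE_UPDATE",
            "StackName": "example-service-test",
            "TemplatePath": "BuildOutput::template.yaml",
            "RoleArn": "arn:aws:iam::123456789012:role/example-cfn-role",
        },
        "inputArtifacts": [{"name": "BuildOutput"}],
        "runOrder": 1,
    }],
}

prod_index = next(
    i for i, s in enumerate(pipeline["stages"]) if s["name"] == "DeployToProd"
)
pipeline["stages"].insert(prod_index, test_stage)
cp.update_pipeline(pipeline=pipeline)
```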

What types of identities do Amazon Cognito identity pools support?

A. They support both authenticated and unauthenticated identities.
B. They support only unauthenticated identities.
C. They support neither authenticated nor unauthenticated identities.
D. They support only authenticated identities.
Suggested answer: A

Explanation:

Amazon Cognito identity pools support both authenticated and unauthenticated identities. Authenticated identities belong to users who are authenticated by a public login provider or your own backend authentication process. Unauthenticated identities typically belong to guest users.

Reference: http://docs.aws.amazon.com/cognito/devguide/identity/identity-pools/
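A small boto3 sketch of both identity types (the pool ID and user pool provider are hypothetical):

```python
import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")
POOL_ID = "us-east-1:00000000-0000-0000-0000-000000000000"  # placeholder

# Unauthenticated (guest) identity: no Logins map is supplied.
guest = cognito.get_id(IdentityPoolId=POOL_ID)

# Authenticated identity: a token from a login provider (here a
# hypothetical Cognito user pool) is passed in the Logins map.
user = cognito.get_id(
    IdentityPoolId=POOL_ID,
    Logins={"cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE": "id-token"},
)

# Either identity can then be exchanged for temporary AWS credentials.
creds = cognito.get_credentials_for_identity(IdentityId=guest["IdentityId"])
```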


A user has created a VPC with public and private subnets using the VPC wizard. The VPC has CIDR 20.0.0.0/16. The private subnet uses CIDR 20.0.0.0/24. The NAT instance ID is i-a12345. Which of the below mentioned entries are required in the main route table attached to the private subnet to allow instances to connect to the internet?

A. Destination: 20.0.0.0/0 and Target: 80
B. Destination: 20.0.0.0/0 and Target: i-a12345
C. Destination: 20.0.0.0/24 and Target: i-a12345
D. Destination: 0.0.0.0/0 and Target: i-a12345
Suggested answer: D

Explanation:

A user can create a subnet within a VPC and launch instances inside that subnet. If the user has created a public and a private subnet, the instances in the public subnet can receive inbound traffic directly from the internet, whereas the instances in the private subnet cannot. If these subnets are created with the wizard, AWS will create two route tables and attach them to the subnets. The main route table attached to the private subnet will have the entry "Destination: 0.0.0.0/0 and Target: i-a12345", which allows all the instances in the private subnet to connect to the internet through the NAT instance.

Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
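The corresponding route can be created with a single boto3 call (the route table ID is hypothetical; the NAT instance ID is the one from the question):

```python
import boto3

ec2 = boto3.client("ec2")

# Default route for the private subnet: send all internet-bound traffic
# to the NAT instance.
ec2.create_route(
    RouteTableId="rtb-0example",
    DestinationCidrBlock="0.0.0.0/0",
    InstanceId="i-a12345",
)
```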

A company is migrating its applications to AWS. The applications will be deployed to AWS accounts owned by business units. The company has several teams of developers who are responsible for the development and maintenance of all applications. The company is expecting rapid growth in the number of users.

The company's chief technology officer has the following requirements:

Developers must launch the AWS infrastructure using AWS CloudFormation.

Developers must not be able to create resources outside of CloudFormation.

The solution must be able to scale to hundreds of AWS accounts.

Which of the following would meet these requirements? (Choose two.)

A. Using CloudFormation, create an IAM role that can be assumed by CloudFormation that has permissions to create all the resources the company needs. Use CloudFormation StackSets to deploy this template to each AWS account.
B. In a central account, create an IAM role that can be assumed by developers, and attach a policy that allows interaction with CloudFormation. Modify the AssumeRolePolicyDocument action to allow the IAM role to be passed to CloudFormation.
C. Using CloudFormation, create an IAM role that can be assumed by developers, and attach policies that allow interaction with and passing a role to CloudFormation. Attach an inline policy to deny access to all other AWS services. Use CloudFormation StackSets to deploy this template to each AWS account.
D. Using CloudFormation, create an IAM role for each developer, and attach policies that allow interaction with CloudFormation. Use CloudFormation StackSets to deploy this template to each AWS account.
E. In a central AWS account, create an IAM role that can be assumed by CloudFormation that has permissions to create the resources the company requires. Create a CloudFormation stack policy that allows the IAM role to manage resources. Use CloudFormation StackSets to deploy the CloudFormation stack policy to each AWS account.
Suggested answer: A, C

Explanation:

Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html
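A sketch of how the roles could be rolled out to hundreds of accounts with StackSets, assuming a service-managed stack set targeting Organizations OUs (the template file, stack set name, and OU ID are hypothetical):

```python
import boto3

cfn = boto3.client("cloudformation")

# Hypothetical template containing the developer role (allowed only to
# interact with CloudFormation and pass it a role) and the CloudFormation
# service role that actually creates resources.
with open("developer-roles.yaml") as f:
    template = f.read()

cfn.create_stack_set(
    StackSetName="developer-cfn-roles",
    TemplateBody=template,
    Capabilities=["CAPABILITY_NAMED_IAM"],
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
)

# Deploy to every account under the target OUs; accounts added to the
# OUs later are covered automatically by auto-deployment.
cfn.create_stack_instances(
    StackSetName="developer-cfn-roles",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-exam-ple1111"]},
    Regions=["us-east-1"],
)
```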

A manufacturing company is growing exponentially and has secured funding to improve its IT infrastructure and ecommerce presence. The company’s ecommerce platform consists of:

Static assets primarily comprised of product images stored in Amazon S3.

Amazon DynamoDB tables that store product information, user information, and order information.

Web servers containing the application's front-end behind Elastic Load Balancers.

The company wants to set up a disaster recovery site in a separate Region.

Which combination of actions should the solutions architect take to implement the new design while meeting all the requirements? (Choose three.)

A. Enable Amazon Route 53 health checks to determine if the primary site is down, and route traffic to the disaster recovery site if there is an issue.
B. Enable Amazon S3 cross-Region replication on the buckets that contain static assets.
C. Enable multi-Region targets on the Elastic Load Balancer and target Amazon EC2 instances in both Regions.
D. Enable DynamoDB global tables to achieve a multi-Region table replication.
E. Enable Amazon CloudWatch and create CloudWatch alarms that route traffic to the disaster recovery site when application latency exceeds the desired threshold.
F. Enable Amazon S3 versioning on the source and destination buckets containing static assets to ensure there is a rollback version available in the event of data corruption.
Suggested answer: A, B, D
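The two replication pieces of this design, S3 cross-Region replication for the static assets and a DynamoDB global table replica, can be sketched with boto3 as follows (bucket names, the replication role ARN, table name, and Regions are hypothetical; both buckets must already have versioning enabled):

```python
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Replicate the static-asset bucket to the disaster recovery Region.
s3.put_bucket_replication(
    Bucket="example-assets-primary",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/example-replication-role",
        "Rules": [{
            "ID": "replicate-to-dr",
            "Prefix": "",
            "Status": "Enabled",
            "Destination": {"Bucket": "arn:aws:s3:::example-assets-dr"},
        }],
    },
)

# Add a replica Region to an existing table (2019.11.21 global tables).
dynamodb.update_table(
    TableName="example-orders",
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
)
```

Route 53 health checks and failover records (option A) then direct traffic to the disaster recovery site when the primary Region fails its checks.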