Amazon SAP-C01 Practice Test - Questions Answers, Page 34


A large mobile gaming company has successfully migrated all of its on-premises infrastructure to the AWS Cloud. A solutions architect is reviewing the environment to ensure that it was built according to the design and that it is running in alignment with the Well-Architected Framework.

While reviewing previous monthly costs in Cost Explorer, the solutions architect notices that the creation and subsequent termination of several large instance types account for a high proportion of the costs. The solutions architect finds out that the company’s developers are launching new Amazon EC2 instances as part of their testing and that the developers are not using the appropriate instance types. The solutions architect must implement a control mechanism to limit the instance types that the developers can launch. Which solution will meet these requirements?

A.
Create a desired-instance-type managed rule in AWS Config. Configure the rule with the instance types that are allowed. Attach the rule to an event to run each time a new EC2 instance is launched.
B.
In the EC2 console, create a launch template that specifies the instance types that are allowed. Assign the launch template to the developers’ IAM accounts.
C.
Create a new IAM policy. Specify the instance types that are allowed. Attach the policy to an IAM group that contains the IAM accounts for the developers.
D.
Use EC2 Image Builder to create an image pipeline for the developers and assist them in the creation of a golden image.
Suggested answer: A

Explanation:

Reference: https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules_getting-started.html
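The suggested answer relies on the AWS Config managed rule desired-instance-type (rule identifier DESIRED_INSTANCE_TYPE), which takes a comma-separated instanceType parameter. A minimal sketch of the rule payload, assuming t3.micro and t3.small are the approved types (the type list is hypothetical):

```python
import json

# Approved instance types -- hypothetical values for this sketch.
allowed_types = ["t3.micro", "t3.small"]

# Payload for the DESIRED_INSTANCE_TYPE managed rule; the rule evaluates
# EC2 instances on configuration changes such as a new launch.
config_rule = {
    "ConfigRuleName": "desired-instance-type",
    "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
    "Source": {"Owner": "AWS", "SourceIdentifier": "DESIRED_INSTANCE_TYPE"},
    "InputParameters": json.dumps({"instanceType": ",".join(allowed_types)}),
}

# To deploy: boto3.client("config").put_config_rule(ConfigRule=config_rule)
print(config_rule["InputParameters"])
```

Note that an AWS Config rule reports compliance after the fact; on its own it does not block a noncompliant launch, so remediation or an additional preventive control is typically paired with it.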

A company is using an Amazon CloudFront distribution to distribute both static and dynamic content from a web application running behind an Application Load Balancer. The web application requires user authorization and session tracking for dynamic content. The CloudFront distribution has a single cache behavior configured to forward the whitelisted Authorization, Host, and User-Agent HTTP headers and a session cookie to the origin. All other cache behavior settings are set to their default values.

A valid ACM certificate is applied to the CloudFront distribution with a matching CNAME in the distribution settings. The ACM certificate is also applied to the HTTPS listener for the Application Load Balancer. The CloudFront origin protocol policy is set to HTTPS only. Analysis of the cache statistics report shows that the miss rate for this distribution is very high. What can the Solutions Architect do to improve the cache hit rate for this distribution without causing the SSL/TLS handshake between CloudFront and the Application Load Balancer to fail?

A.
Create two cache behaviors for static and dynamic content. Remove the User-Agent and Host HTTP headers from the whitelist headers section on both of the cache behaviors. Remove the session cookie from the whitelist cookies section and the Authorization HTTP header from the whitelist headers section for the cache behavior configured for static content.
B.
Remove the User-Agent and Authorization HTTP headers from the whitelist headers section of the cache behavior. Then update the cache behavior to use presigned cookies for authorization.
C.
Remove the Host HTTP header from the whitelist headers section and remove the session cookie from the whitelist cookies section for the default cache behavior. Enable automatic object compression and use Lambda@Edge viewer request events for user authorization.
D.
Create two cache behaviors for static and dynamic content. Remove the User-Agent HTTP header from the whitelist headers section on both of the cache behaviors. Remove the session cookie from the whitelist cookies section and the Authorization HTTP header from the whitelist headers section for the cache behavior configured for static content.
Suggested answer: D

Explanation:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/understanding-the-cachekey.html

Removing the Host header would cause the TLS handshake between CloudFront and the ALB to fail, because both use the same certificate, matched against the forwarded hostname.
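The split described in option D can be sketched as two cache-behavior configurations, expressed here as plain data: the Host header stays whitelisted on both (the certificate match between CloudFront and the ALB depends on it), the User-Agent header is dropped everywhere, and only the dynamic behavior forwards the Authorization header and the session cookie. Path patterns and the cookie name are hypothetical:

```python
# Static content: cacheable, so no per-user headers or cookies beyond Host.
static_behavior = {
    "PathPattern": "/static/*",          # hypothetical path pattern
    "WhitelistedHeaders": ["Host"],
    "WhitelistedCookies": [],
}

# Dynamic content: keeps the authorization header and session cookie.
dynamic_behavior = {
    "PathPattern": "*",                  # default cache behavior
    "WhitelistedHeaders": ["Host", "Authorization"],
    "WhitelistedCookies": ["session-id"],  # hypothetical cookie name
}

# User-Agent is removed from both behaviors, since it fragments the cache key.
for behavior in (static_behavior, dynamic_behavior):
    assert "User-Agent" not in behavior["WhitelistedHeaders"]
```

Every whitelisted header or cookie becomes part of the cache key, so trimming them on the static behavior is what raises the hit rate.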

An AWS partner company is building a service in AWS Organizations using its organization named org1. This service requires the partner company to have access to AWS resources in a customer account, which is in a separate organization named org2. The company must establish least privilege security access using an API or command line tool to the customer account. What is the MOST secure way to allow org1 to access resources in org2?

A.
The customer should provide the partner company with their AWS account access keys to log in and perform the required tasks.
B.
The customer should create an IAM user and assign the required permissions to the IAM user. The customer should then provide the credentials to the partner company to log in and perform the required tasks.
C.
The customer should create an IAM role and assign the required permissions to the IAM role. The partner company should then use the IAM role’s Amazon Resource Name (ARN) when requesting access to perform the required tasks.
D.
The customer should create an IAM role and assign the required permissions to the IAM role. The partner company should then use the IAM role’s Amazon Resource Name (ARN), including the external ID in the IAM role’s trust policy, when requesting access to perform the required tasks.
Suggested answer: B
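Options C and D describe cross-account access through role assumption; for reference, the sts:AssumeRole request they imply can be sketched as below. The account ID, role name, and external ID are hypothetical placeholders, and the external ID (option D) must match the condition in the role's trust policy:

```python
# Cross-account role assumption request, as the partner (org1) would issue it.
assume_role_request = {
    "RoleArn": "arn:aws:iam::111122223333:role/PartnerAccessRole",  # customer's role
    "RoleSessionName": "org1-partner-session",
    "ExternalId": "org1-unique-external-id",  # checked against the role's trust policy
}

# To execute: boto3.client("sts").assume_role(**assume_role_request)
# The response contains temporary credentials scoped to the role's permissions.
```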

A Solutions Architect is working with a company that is extremely sensitive to its IT costs and wishes to implement controls that will result in a predictable AWS spend each month. Which combination of steps can help the company control and monitor its monthly AWS usage to achieve a cost that is as close as possible to the target amount? (Choose three.)

A.
Implement an IAM policy that requires users to specify a ‘workload’ tag for cost allocation when launching Amazon EC2 instances.
B.
Contact AWS Support and ask that they apply limits to the account so that users are not able to launch more than a certain number of instance types.
C.
Purchase all upfront Reserved Instances that cover 100% of the account’s expected Amazon EC2 usage.
D.
Place conditions in the users’ IAM policies that limit the number of instances they are able to launch.
E.
Define ‘workload’ as a cost allocation tag in the AWS Billing and Cost Management console.
F.
Set up AWS Budgets to alert and notify when a given workload is expected to exceed a defined cost.
Suggested answer: A, E, F
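The tag requirement in option A is commonly enforced with a deny statement using the aws:RequestTag condition key. A minimal sketch of such a policy document, built as plain data (the tag key 'workload' comes from the question; the Sid is hypothetical):

```python
import json

# Deny ec2:RunInstances on the instance resource unless the request
# supplies a 'workload' tag ("Null": "true" means the tag is absent).
require_workload_tag = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUntaggedLaunch",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"Null": {"aws:RequestTag/workload": "true"}},
        }
    ],
}

print(json.dumps(require_workload_tag, indent=2))
```

With the tag guaranteed on every launch, activating 'workload' as a cost allocation tag (option E) makes the spend attributable, and AWS Budgets (option F) can then alert per workload.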

A car rental company has built a serverless REST API to provide data to its mobile app. The app consists of an Amazon API Gateway API with a Regional endpoint, AWS Lambda functions, and an Amazon Aurora MySQL Serverless DB cluster. The company recently opened the API to mobile apps of partners. A significant increase in the number of requests resulted, causing sporadic database memory errors. Analysis of the API traffic indicates that clients are making multiple HTTP GET requests for the same queries in a short period of time. Traffic is concentrated during business hours, with spikes around holidays and other events. The company needs to improve its ability to support the additional usage while minimizing the increase in costs associated with the solution. Which strategy meets these requirements?

A.
Convert the API Gateway Regional endpoint to an edge-optimized endpoint. Enable caching in the production stage.
B.
Implement an Amazon ElastiCache for Redis cache to store the results of the database calls. Modify the Lambda functions to use the cache.
C.
Modify the Aurora Serverless DB cluster configuration to increase the maximum amount of available memory.
D.
Enable throttling in the API Gateway production stage. Set the rate and burst values to limit the incoming calls.
Suggested answer: A

Explanation:

Reference: https://aws.amazon.com/getting-started/projects/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/module-4/
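Stage caching, the second half of the suggested answer, is enabled on an existing stage with a patch-style update. A sketch of the patch operations, assuming a 0.5 GB cache cluster (the API ID and stage name in the comment are placeholders):

```python
# Patch operations that turn on the API Gateway stage cache so that
# repeated GET requests are served without hitting Lambda or Aurora.
patch_operations = [
    {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
    {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},  # cache size in GB
]

# To apply: boto3.client("apigateway").update_stage(
#     restApiId="abc123", stageName="prod", patchOperations=patch_operations)
```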

Out of the striping options available for the EBS volumes, which one has the following disadvantage:

'Doubles the amount of I/O required from the instance to EBS compared to RAID 0, because you're mirroring all writes to a pair of volumes, limiting how much you can stripe.'?

A.
RAID 1
B.
RAID 0
C.
RAID 1+0 (RAID 10)
D.
RAID 2
Suggested answer: C

Explanation:

RAID 1+0 (RAID 10) doubles the amount of I/O required from the instance to EBS compared to RAID 0, because you're mirroring all writes to a pair of volumes, limiting how much you can stripe.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html
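The doubling described above is simple arithmetic: every application write must land on both members of a mirrored pair. A worked example with a hypothetical workload of 4,000 write IOPS:

```python
# Application-level write load (hypothetical figure for illustration).
app_write_iops = 4000

# RAID 0: each write goes to exactly one stripe member.
raid0_ebs_iops = app_write_iops

# RAID 1+0: each write is mirrored to a second volume, doubling EBS I/O.
raid10_ebs_iops = app_write_iops * 2

print(raid0_ebs_iops, raid10_ebs_iops)  # 4000 8000
```

The instance's EBS bandwidth and IOPS limits therefore effectively halve under RAID 10, which is the "limiting how much you can stripe" part of the quoted disadvantage.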

Which of the following cannot be done using AWS Data Pipeline?

A.
Create complex data processing workloads that are fault tolerant, repeatable, and highly available.
B.
Regularly access your data where it's stored, transform and process it at scale, and efficiently transfer the results to another AWS service.
C.
Generate reports over data that has been stored.
D.
Move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals.
Suggested answer: C

Explanation:

AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals. With AWS Data Pipeline, you can regularly access your data where it's stored, transform and process it at scale, and efficiently transfer the results to another AWS service. AWS Data Pipeline helps you easily create complex data processing workloads that are fault tolerant, repeatable, and highly available. AWS Data Pipeline also allows you to move and process data that was previously locked up in on-premises data silos.

Reference: http://aws.amazon.com/datapipeline/

The following AWS Identity and Access Management (IAM) customer managed policy has been attached to an IAM user:

Which statement describes the access that this policy provides to the user?

A.
The policy grants access to all Amazon S3 actions, including all actions in the prod-data S3 bucket
B.
This policy denies access to all Amazon S3 actions, excluding all actions in the prod-data S3 bucket
C.
This policy denies access to the Amazon S3 bucket and objects not having prod-data in the bucket name
D.
This policy grants access to all Amazon S3 actions in the prod-data S3 bucket, but explicitly denies access to all other AWS services
Suggested answer: D

Your company has recently extended its datacenter into a VPC on AWS to add burst computing capacity as needed. Members of your Network Operations Center need to be able to go to the AWS Management Console and administer Amazon EC2 instances as necessary. You don't want to create new IAM users for each NOC member and make those users sign in again to the AWS Management Console. Which option below will meet the needs of your NOC members?

A.
Use OAuth 2.0 to retrieve temporary AWS security credentials to enable your NOC members to sign in to the AWS Management Console.
B.
Use Web Identity Federation to retrieve AWS temporary security credentials to enable your NOC members to sign in to the AWS Management Console.
C.
Use your on-premises SAML 2.0-compliant identity provider (IdP) to grant the NOC members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint.
D.
Use your on-premises SAML 2.0-compliant identity provider (IdP) to retrieve temporary security credentials to enable NOC members to sign in to the AWS Management Console.
Suggested answer: C

Explanation:

Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-saml.html
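Behind the SAML console federation flow sits an sts:AssumeRoleWithSAML call: the IdP's signed assertion is exchanged for temporary credentials tied to an IAM role. A sketch of that request, with placeholder ARNs and assertion:

```python
# Request payload for exchanging a SAML assertion for temporary credentials.
# All values here are hypothetical placeholders.
saml_request = {
    "RoleArn": "arn:aws:iam::111122223333:role/NOC-Operators",
    "PrincipalArn": "arn:aws:iam::111122223333:saml-provider/CorporateIdP",
    "SAMLAssertion": "<base64-encoded assertion from the on-premises IdP>",
}

# To execute: boto3.client("sts").assume_role_with_saml(**saml_request)
# No IAM user is created: NOC members keep their existing corporate identity.
```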

A company is currently using AWS CodeCommit for its source control and AWS CodePipeline for continuous integration. The pipeline has a build stage for building the artifacts, which is then staged in an Amazon S3 bucket. The company has identified various improvement opportunities in the existing process, and a Solutions Architect has been given the following requirements:

Create a new pipeline to support feature development

Support feature development without impacting production applications

Incorporate continuous testing with unit tests

Isolate development and production artifacts

Support the capability to merge tested code into production code

How should the Solutions Architect achieve these requirements?

A.
Trigger a separate pipeline from CodeCommit feature branches. Use AWS CodeBuild for running unit tests. Use CodeBuild to stage the artifacts within an S3 bucket in a separate testing account.
B.
Trigger a separate pipeline from CodeCommit feature branches. Use AWS Lambda for running unit tests. Use AWS CodeDeploy to stage the artifacts within an S3 bucket in a separate testing account.
C.
Trigger a separate pipeline from CodeCommit tags. Use Jenkins for running unit tests. Create a stage in the pipeline with S3 as the target for staging the artifacts with an S3 bucket in a separate testing account.
D.
Create a separate CodeCommit repository for feature development and use it to trigger the pipeline. Use AWS Lambda for running unit tests. Use AWS CodeBuild to stage the artifacts within different S3 buckets in the same production account.
Suggested answer: A

Explanation:

Reference: https://docs.aws.amazon.com/codebuild/latest/userguide/how-to-create-pipeline.html
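The unit-test stage in option A is driven by a CodeBuild buildspec. A sketch of one, expressed here as a Python dict for clarity; the runtime version and test command are hypothetical choices:

```python
# CodeBuild buildspec for the feature-branch pipeline's unit-test stage.
# In practice this would be committed to the repository as buildspec.yml.
buildspec = {
    "version": 0.2,
    "phases": {
        "install": {"runtime-versions": {"python": 3.11}},  # assumed runtime
        "build": {
            # Hypothetical test command; a failing test fails the stage,
            # which keeps untested code out of the production artifacts.
            "commands": ["python -m pytest tests/ --junitxml=report.xml"],
        },
    },
    "artifacts": {"files": ["**/*"]},
}
```

CodeBuild then writes the resulting artifacts to the S3 bucket in the separate testing account, keeping development and production artifacts isolated as required.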

Total 906 questions