Amazon SAP-C01 Practice Test - Questions Answers, Page 87

A company is in the process of implementing AWS Organizations to constrain its developers to use only Amazon EC2, Amazon S3, and Amazon DynamoDB. The Developers account resides in a dedicated organizational unit (OU). The Solutions Architect has implemented the following SCP on the Developers account:

When this policy is deployed, IAM users in the Developers account are still able to use AWS services that are not listed in the policy. What should the Solutions Architect do to eliminate the Developers’ ability to use services outside the scope of this policy?

A. Create an explicit deny statement for each AWS service that should be constrained.
B. Remove the FullAWSAccess SCP from the Developer account’s OU.
C. Modify the FullAWSAccess SCP to explicitly deny all services.
D. Add an explicit deny statement using a wildcard to the end of the SCP.
Suggested answer: B
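
Removing the AWS managed FullAWSAccess SCP from the OU leaves only the allow-list SCP in effect, so any action the SCP does not explicitly allow is implicitly denied. The sketch below is a hypothetical version of the kind of allow-list SCP the question describes (the original policy exhibit is not reproduced here), expressed as a Python dict and applied with boto3; the policy name and OU ID are placeholder values.

import json
import boto3

# Hypothetical allow-list SCP: only EC2, S3, and DynamoDB actions are allowed.
# Once the default FullAWSAccess SCP is detached from the OU, every other
# action is implicitly denied, which is the behavior the question asks for.
allow_list_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:*", "s3:*", "dynamodb:*"],
            "Resource": "*",
        }
    ],
}

org = boto3.client("organizations")

# "p-FullAWSAccess" is the ID of the AWS managed default SCP; the OU ID is a placeholder.
org.detach_policy(PolicyId="p-FullAWSAccess", TargetId="ou-exam-developers")

policy = org.create_policy(
    Content=json.dumps(allow_list_scp),
    Description="Allow only EC2, S3, and DynamoDB",
    Name="DeveloperAllowList",
    Type="SERVICE_CONTROL_POLICY",
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-exam-developers",
)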

A company runs a software-as-a-service (SaaS) application on AWS. The application consists of AWS Lambda functions and an Amazon RDS for MySQL Multi-AZ database. During market events, the application has a much higher workload than normal. Users notice slow response times during the peak periods because of many database connections. The company needs to improve the scalability and availability of the database. Which solution meets these requirements?

A. Create an Amazon CloudWatch alarm action that triggers a Lambda function to add an Amazon RDS for MySQL read replica when resource utilization hits a threshold.
B. Migrate the database to Amazon Aurora, and add a read replica. Add a database connection pool outside of the Lambda handler function.
C. Migrate the database to Amazon Aurora, and add a read replica. Use Amazon Route 53 weighted records.
D. Migrate the database to Amazon Aurora, and add an Aurora Replica. Configure Amazon RDS Proxy to manage database connection pools.
Suggested answer: A

Explanation:

Reference: https://aws.amazon.com/blogs/database/tag/aws-lambda/feed/
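
For completeness, the connection-handling pattern referenced in options B and D can be sketched as follows: the database connection is created outside the Lambda handler so that warm invocations reuse it, and the host points at a proxy endpoint that pools connections for the Aurora cluster. The endpoint name, environment variables, and the use of the pymysql driver are illustrative assumptions, not part of the original question.

import os
import pymysql  # assumes the pymysql package is bundled with the function

# Creating the connection at module level lets warm Lambda invocations reuse
# it instead of opening a new connection per request. The host is a
# hypothetical RDS Proxy endpoint that pools connections to the cluster.
connection = pymysql.connect(
    host=os.environ.get("DB_HOST", "app-proxy.proxy-abc123.us-east-1.rds.amazonaws.com"),
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database=os.environ.get("DB_NAME", "app"),
    connect_timeout=5,
)

def handler(event, context):
    # Reuse the module-level connection; reconnect only if it has gone stale.
    connection.ping(reconnect=True)
    with connection.cursor() as cursor:
        cursor.execute("SELECT 1")
        return {"ok": cursor.fetchone()[0] == 1}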

How does AWS Data Pipeline execute activities on on-premise resources or AWS resources that you manage?

A. By supplying a Task Runner package that can be installed on your on-premise hosts
B. None of these
C. By supplying a Task Runner file that the resources can access for execution
D. By supplying a Task Runner json script that can be installed on your on-premise hosts
Suggested answer: A

Explanation:

To enable running activities using on-premise resources, AWS Data Pipeline supplies a Task Runner package that can be installed on your on-premise hosts. This package continuously polls the AWS Data Pipeline service for work to perform. When it is time to run a particular activity on your on-premise resources, the service issues the appropriate command to the Task Runner.

Reference:

https://aws.amazon.com/datapipeline/faqs/
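
A rough sketch of the polling model described above, using the public Data Pipeline task APIs from boto3. The worker group name and the 15-second sleep are placeholder choices; the real Task Runner package also handles credentials, retries, and actually executing the activity.

import time
import boto3

client = boto3.client("datapipeline", region_name="us-east-1")

while True:
    # Poll the service for work assigned to this (placeholder) worker group.
    response = client.poll_for_task(workerGroup="my-on-premise-worker-group")
    task = response.get("taskObject")
    if not task:
        time.sleep(15)  # no work available yet; poll again
        continue
    task_id = task["taskId"]
    try:
        # ... run the activity described by task["objects"] here ...
        client.set_task_status(taskId=task_id, taskStatus="FINISHED")
    except Exception as error:
        client.set_task_status(taskId=task_id, taskStatus="FAILED",
                               errorMessage=str(error))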

A Provisioned IOPS volume must be at least __________ GB in size:

A. 20
B. 10
C. 50
D. 1
Suggested answer: B

Explanation:

A Provisioned IOPS volume must be at least 10 GB in size.

Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/Storage.html
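
A minimal sketch of creating a Provisioned IOPS volume at that size with boto3; the Availability Zone and IOPS value are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a Provisioned IOPS (io1) volume at the 10 GB minimum the question
# refers to; Availability Zone and IOPS are placeholder values.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    VolumeType="io1",
    Size=10,     # size in GiB
    Iops=500,    # provisioned IOPS for the volume
)
print(volume["VolumeId"])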

A company is planning to deploy a new business analytics application that requires 10,000 hours of compute time each month. The compute resources can have flexible availability, but must be as cost-effective as possible. The company will also provide a reporting service to distribute analytics reports, which needs to run at all times.

How should the Solutions Architect design a solution that meets these requirements?

A. Deploy the reporting service on a Spot Fleet. Deploy the analytics application as a container in Amazon ECS with AWS Fargate as the compute option. Set the analytics application to use a custom metric with Service Auto Scaling.
B. Deploy the reporting service on an On-Demand Instance. Deploy the analytics application as a container in AWS Batch with AWS Fargate as the compute option. Set the analytics application to use a custom metric with Service Auto Scaling.
C. Deploy the reporting service as a container in Amazon ECS with AWS Fargate as the compute option. Deploy the analytics application on a Spot Fleet. Set the analytics application to use a custom metric with Amazon EC2 Auto Scaling applied to the Spot Fleet.
D. Deploy the reporting service as a container in Amazon ECS with AWS Fargate as the compute option. Deploy the analytics application on an On-Demand Instance and purchase a Reserved Instance with a 3-year term. Set the analytics application to use a custom metric with Amazon EC2 Auto Scaling applied to the On-Demand Instance.
Suggested answer: C
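
As a sketch of how option C's custom-metric scaling could be wired up, the Spot Fleet running the analytics application can be registered with Application Auto Scaling and given a target tracking policy on a custom CloudWatch metric. The Spot Fleet request ID, metric name, namespace, and target value below are all assumed placeholders.

import boto3

autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")

# Placeholder Spot Fleet request that runs the analytics application.
fleet_resource = "spot-fleet-request/sfr-0123456789abcdef0"

# Make the Spot Fleet's target capacity scalable.
autoscaling.register_scalable_target(
    ServiceNamespace="ec2",
    ResourceId=fleet_resource,
    ScalableDimension="ec2:spot-fleet-request:TargetCapacity",
    MinCapacity=1,
    MaxCapacity=20,
)

# Track a hypothetical custom metric published by the application.
autoscaling.put_scaling_policy(
    PolicyName="analytics-queue-depth-tracking",
    ServiceNamespace="ec2",
    ResourceId=fleet_resource,
    ScalableDimension="ec2:spot-fleet-request:TargetCapacity",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 100.0,
        "CustomizedMetricSpecification": {
            "MetricName": "PendingAnalyticsJobs",  # hypothetical custom metric
            "Namespace": "Custom/Analytics",
            "Statistic": "Average",
        },
    },
)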

A Solutions Architect is migrating a 10 TB PostgreSQL database to Amazon RDS for PostgreSQL. The company’s internet link is 50 MB with a VPN in the Amazon VPC, and the Solutions Architect needs to migrate the data and synchronize the changes before the cutover. The cutover must take place within an 8-day period. What is the LEAST complex method of migrating the database securely and reliably?

A. Order an AWS Snowball device and copy the database using AWS DMS. When the database is available in Amazon S3, use AWS DMS to load it to Amazon RDS, and configure a job to synchronize changes before the cutover.
B. Create an AWS DMS job to continuously replicate the data from on premises to AWS. Cut over to Amazon RDS after the data is synchronized.
C. Order an AWS Snowball device and copy a database dump to the device. After the data has been copied to Amazon S3, import it to the Amazon RDS instance. Set up log shipping over a VPN to synchronize changes before the cutover.
D. Order an AWS Snowball device and copy the database by using the AWS Schema Conversion Tool. When the data is available in Amazon S3, use AWS DMS to load it to Amazon RDS, and configure a job to synchronize changes before the cutover.
Suggested answer: B
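
The continuous replication in option B corresponds to an AWS DMS task with the full-load-and-cdc migration type, which performs the initial copy and then keeps applying changes until cutover. The sketch below uses placeholder ARNs and a catch-all table mapping.

import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Full load plus change data capture (CDC): the target keeps receiving
# changes after the initial copy, until cutover. ARNs are placeholders.
task = dms.create_replication_task(
    ReplicationTaskIdentifier="postgres-migration-task",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings='{"rules": [{"rule-type": "selection", "rule-id": "1", '
                  '"rule-name": "1", "object-locator": {"schema-name": "%", '
                  '"table-name": "%"}, "rule-action": "include"}]}',
)

dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)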

A company is using multiple AWS accounts. The company has a shared service account and several other accounts for different projects. A team has a VPC in a project account. The team wants to connect this VPC to a corporate network through an AWS Direct Connect gateway that exists in the shared services account. The team wants to automatically perform a virtual private gateway association with the Direct Connect gateway by using an already-tested AWS Lambda function while deploying its VPC networking stack. The Lambda function code can assume a role by using AWS Security Token Service (AWS STS).

The team is using AWS CloudFormation to deploy its infrastructure.

Which combination of steps will meet these requirements? (Choose three.)

A. Deploy the Lambda function to the project account. Update the Lambda function’s IAM role with the directconnect:* permission.
B. Create a cross-account IAM role in the shared services account that grants the Lambda function the directconnect:* permission. Add the sts:AssumeRole permission to the IAM role that is associated with the Lambda function in the shared services account.
C. Add a custom resource to the CloudFormation networking stack that references the Lambda function in the project account.
D. Deploy the Lambda function that is performing the association to the shared services account. Update the Lambda function’s IAM role with the directconnect:* permission.
E. Create a cross-account IAM role in the shared services account that grants the sts:AssumeRole permission to the Lambda function with the directconnect:* permission acting as a resource. Add the sts:AssumeRole permission with this cross-account IAM role as a resource to the IAM role that belongs to the Lambda function in the project account.
F. Add a custom resource to the CloudFormation networking stack that references the Lambda function in the shared services account.
Suggested answer: C, E, F
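
A sketch of the cross-account call that the custom resource's Lambda function would make under options C and E: assume the cross-account role in the shared services account with STS, then create the Direct Connect gateway association. The role ARN, account number, and gateway IDs are placeholders.

import boto3

# Assume the cross-account role that grants directconnect:* in the shared
# services account (placeholder ARN).
sts = boto3.client("sts")
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::999988887777:role/SharedServicesDxGatewayRole",
    RoleSessionName="vpc-stack-dx-association",
)
credentials = assumed["Credentials"]

# Use the temporary credentials to call Direct Connect in the other account.
directconnect = boto3.client(
    "directconnect",
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)

# Associate the project account's virtual private gateway with the Direct
# Connect gateway (placeholder IDs).
directconnect.create_direct_connect_gateway_association(
    directConnectGatewayId="dx-gateway-id-placeholder",
    virtualGatewayId="vgw-0123456789abcdef0",
)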

Which of the following are characteristics of Amazon VPC subnets? (Choose two.)

A. Each subnet spans at least 2 Availability Zones to provide a high-availability environment.
B. Each subnet maps to a single Availability Zone.
C. CIDR block mask of /25 is the smallest range supported.
D. By default, all subnets can route between each other, whether they are private or public.
E. Instances in a private subnet can communicate with the Internet only if they have an Elastic IP.
Suggested answer: B, D
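
Because each subnet maps to exactly one Availability Zone (option B), the zone is fixed when the subnet is created, as in this minimal boto3 sketch with placeholder VPC ID and CIDR block.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A subnet lives in exactly one Availability Zone, so the AZ is chosen at
# creation time. VPC ID and CIDR are placeholders.
subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.0.1.0/24",
    AvailabilityZone="us-east-1a",
)
print(subnet["Subnet"]["SubnetId"], subnet["Subnet"]["AvailabilityZone"])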

Which of the following IAM policy elements lets you specify an exception to a list of actions?

A. NotException
B. ExceptionAction
C. Exception
D. NotAction
Suggested answer: D

Explanation:

The NotAction element lets you specify an exception to a list of actions.
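
A minimal illustrative policy using NotAction: it allows every action except IAM actions, without having to enumerate all other services.

import json

# Illustrative policy: everything except IAM actions is allowed.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "NotAction": "iam:*",
            "Resource": "*",
        }
    ],
}
print(json.dumps(policy, indent=2))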

A company has a platform that contains an Amazon S3 bucket for user content. The S3 bucket has thousands of terabytes of objects, all in the S3 Standard storage class. The company has an RTO of 6 hours. The company must replicate the data from its primary AWS Region to a replication S3 bucket in another Region. The user content S3 bucket contains user-uploaded files such as videos and photos. The user content S3 bucket has an unpredictable access pattern. The number of users is increasing quickly, and the company wants to create an S3 Lifecycle policy to reduce storage costs. Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)

A. Move the objects in the user content S3 bucket to S3 Intelligent-Tiering immediately.
B. Move the objects in the user content S3 bucket to S3 Intelligent-Tiering after 30 days.
C. Move the objects in the replication S3 bucket to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days and to S3 Glacier after 90 days.
D. Move the objects in the replication S3 bucket to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days and to S3 Glacier Deep Archive after 90 days.
E. Move the objects in the replication S3 bucket to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days and to S3 Glacier Deep Archive after 180 days.
Suggested answer: A, D
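
The two selected options translate into lifecycle rules like the following sketch: the user content bucket transitions objects to S3 Intelligent-Tiering immediately, and the replication bucket transitions objects to S3 One Zone-IA after 30 days and to S3 Glacier Deep Archive after 90 days. Bucket names and rule IDs are placeholders.

import boto3

s3 = boto3.client("s3")

# User content bucket: move objects to Intelligent-Tiering right away
# (Days=0), matching option A. Bucket name is a placeholder.
s3.put_bucket_lifecycle_configuration(
    Bucket="user-content-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "user-content-to-intelligent-tiering",
                "Status": "Enabled",
                "Filter": {},
                "Transitions": [
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"},
                ],
            },
        ]
    },
)

# Replication bucket: One Zone-IA after 30 days, Deep Archive after 90 days,
# matching option D. Bucket name is a placeholder.
s3.put_bucket_lifecycle_configuration(
    Bucket="replication-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "replica-to-onezone-ia-then-deep-archive",
                "Status": "Enabled",
                "Filter": {},
                "Transitions": [
                    {"Days": 30, "StorageClass": "ONEZONE_IA"},
                    {"Days": 90, "StorageClass": "DEEP_ARCHIVE"},
                ],
            },
        ]
    },
)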