Amazon SAP-C01 Practice Test - Questions Answers, Page 14

A solutions architect is implementing infrastructure as code for a two-tier web application in an AWS CloudFormation template. The web frontend application will be deployed on Amazon EC2 instances in an Auto Scaling group. The backend database will be an Amazon RDS for MySQL DB instance. The database password will be rotated every 60 days. How can the solutions architect MOST securely manage the configuration of the application's database credentials?

A.
Provide the database password as a parameter in the CloudFormation template. Create an initialization script in the Auto Scaling group's launch configuration UserData property to reference the password parameter using the Ref intrinsic function.Store the password on the EC2 instances. Reference the parameter for the value of the MasterUserPassword property in the AWS::RDS::DBInstance resource using the Ref intrinsic function.
B.
Create a new AWS Secrets Manager secret resource in the CloudFormation template to be used as the database password. Configure the application to retrieve the password from Secrets Manager when needed. Reference the secret resource for the value of the MasterUserPassword property in the AWS::RDS::DBInstance resource using a dynamic reference.
C.
Create a new AWS Secrets Manager secret resource in the CloudFormation template to be used as the database password. Create an initialization script in the Auto Scaling group's launch configuration UserData property to reference the secret resource using the Ref intrinsic function. Reference the secret resource for the value of the MasterUserPassword property in the AWS::RDS::DBInstance resource using the Ref intrinsic function.
D.
Create a new AWS Systems Manager Parameter Store parameter in the CloudFormation template to be used as the database password. Create an initialization script in the Auto Scaling group's launch configuration UserData property to reference the parameter. Reference the parameter for the value of the MasterUserPassword property in the AWS::RDS::DBInstance resource using the Fn::GetAtt intrinsic function.
Suggested answer: B

Explanation:

A Secrets Manager secret generated in the template never appears in plaintext, supports automatic rotation (covering the 60-day requirement), and can be supplied to the DB instance through a dynamic reference. Ref on a secret returns its ARN rather than the password, so option C does not work, and Fn::GetAtt cannot read a Parameter Store value as option D suggests. Storing the password on the EC2 instances, as in option A, is the least secure choice.
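A minimal CloudFormation sketch of the Secrets Manager approach described in option B (resource names and instance sizes are illustrative, not from the question):

```yaml
Resources:
  # Generated secret: the password never appears in the template or its parameters
  DBSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      GenerateSecretString:
        SecretStringTemplate: '{"username": "admin"}'
        GenerateStringKey: password
        PasswordLength: 32
        ExcludeCharacters: '"@/\'

  Database:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.medium
      AllocatedStorage: '20'
      # Dynamic references resolve at deploy time; Ref on the secret returns its ARN
      MasterUsername: !Sub '{{resolve:secretsmanager:${DBSecret}:SecretString:username}}'
      MasterUserPassword: !Sub '{{resolve:secretsmanager:${DBSecret}:SecretString:password}}'
```

The 60-day rotation would be attached separately with AWS::SecretsManager::SecretTargetAttachment and AWS::SecretsManager::RotationSchedule resources; the application then fetches the current password from Secrets Manager at connection time rather than caching it.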

A company is developing a web application that runs on Amazon EC2 instances in an Auto Scaling group behind a public-facing Application Load Balancer (ALB). Only users from a specific country are allowed to access the application. The company needs the ability to log the access requests that have been blocked. The solution should require the least possible maintenance. Which solution meets these requirements?

A.
Create an IPSet containing a list of IP ranges that belong to the specified country. Create an AWS WAF web ACL. Configure a rule to block any requests that do not originate from an IP range in the IPSet. Associate the rule with the web ACL. Associate the web ACL with the ALB.
B.
Create an AWS WAF web ACL. Configure a rule to block any requests that do not originate from the specified country. Associate the rule with the web ACL. Associate the web ACL with the ALB.
C.
Configure AWS Shield to block any requests that do not originate from the specified country. Associate AWS Shield with the ALB.
D.
Create a security group rule that allows ports 80 and 443 from IP ranges that belong to the specified country. Associate the security group with the ALB.
Suggested answer: B

Explanation:

A geographic match rule in AWS WAF blocks by country directly, requires no IP range upkeep, and blocked requests can be captured through WAF logging and sampled requests, so it needs the least maintenance. Option A works but requires continually maintaining the country's IP ranges in the IPSet, AWS Shield (option C) protects against DDoS and has no geographic rules, and security groups (option D) cannot practically hold a country's IP ranges and do not log blocked requests.
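A sketch of option B using the current WAFv2 resource types (this SAP-C01-era question predates WAFv2, where the classic geo match condition played the same role; US is a stand-in country code and ALBArn is an assumed input parameter):

```yaml
Parameters:
  ALBArn:
    Type: String   # ARN of the existing Application Load Balancer (assumed input)

Resources:
  GeoWebACL:
    Type: AWS::WAFv2::WebACL
    Properties:
      Name: country-allow-list
      Scope: REGIONAL                  # REGIONAL scope is required to associate with an ALB
      DefaultAction:
        Block: {}                      # anything not matching the allow rule is blocked
      VisibilityConfig:
        SampledRequestsEnabled: true   # sampled requests show what was blocked
        CloudWatchMetricsEnabled: true
        MetricName: country-allow-list
      Rules:
        - Name: allow-specified-country
          Priority: 0
          Statement:
            GeoMatchStatement:
              CountryCodes: [US]
          Action:
            Allow: {}
          VisibilityConfig:
            SampledRequestsEnabled: true
            CloudWatchMetricsEnabled: true
            MetricName: allow-specified-country

  WebACLAssociation:
    Type: AWS::WAFv2::WebACLAssociation
    Properties:
      ResourceArn: !Ref ALBArn
      WebACLArn: !GetAtt GeoWebACL.Arn
```

Full request logging for the web ACL can additionally be enabled with an AWS::WAFv2::LoggingConfiguration resource pointing at a supported log destination.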

A user is creating a Provisioned IOPS volume. What is the maximum ratio the user should configure between Provisioned IOPS and the volume size?

A.
30 to 1
B.
50 to 1
C.
10 to 1
D.
20 to 1
Suggested answer: B

Explanation:

Provisioned IOPS SSD (io1) volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads, that are sensitive to storage performance and consistency. An io1 volume can range in size from 4 GiB to 16 TiB, and you can provision from 100 up to 20,000 IOPS per volume. The maximum ratio of provisioned IOPS to requested volume size (in GiB) is 50:1. For example, a 100 GiB volume can be provisioned with up to 5,000 IOPS. Any volume 400 GiB in size or greater allows provisioning up to the 20,000 IOPS maximum.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
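The 50:1 ratio and the 20,000 IOPS cap quoted above reduce to one line of arithmetic; a throwaway helper (not an AWS API) makes the worked example concrete:

```python
def max_piops(volume_gib: int, ratio: int = 50, cap: int = 20000) -> int:
    """Maximum IOPS provisionable for an io1 volume of this size (GiB),
    per the 50:1 size ratio and the 20,000 IOPS per-volume cap cited above."""
    return min(volume_gib * ratio, cap)

print(max_piops(100))   # 100 GiB * 50 = 5,000 IOPS
print(max_piops(400))   # 400 GiB * 50 hits the 20,000 per-volume cap exactly
print(max_piops(1024))  # larger volumes stay capped at 20,000
```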

Your company is storing millions of sensitive transactions across thousands of 100-GB files that must be encrypted in transit and at rest. Analysts concurrently depend on subsets of files, which can consume up to 5 TB of space, to generate simulations that can be used to steer business decisions.

You are required to design an AWS solution that can cost effectively accommodate the long-term storage and in-flight subsets of data. Which approach can satisfy these objectives?

A.
Use Amazon Simple Storage Service (S3) with server-side encryption, and run simulations on subsets in ephemeral drives on Amazon EC2.
B.
Use Amazon S3 with server-side encryption, and run simulations on subsets in-memory on Amazon EC2.
C.
Use HDFS on Amazon EMR, and run simulations on subsets in ephemeral drives on Amazon EC2.
D.
Use HDFS on Amazon Elastic MapReduce (EMR), and run simulations on subsets in-memory on Amazon Elastic Compute Cloud (EC2).
E.
Store the full data set in encrypted Amazon Elastic Block Store (EBS) volumes, and regularly capture snapshots that can be cloned to EC2 workstations.
Suggested answer: A

Explanation:

Amazon S3 with server-side encryption provides durable, encrypted, low-cost long-term storage for the full data set, and HTTPS covers encryption in transit. A 5 TB working subset will not fit in memory on cost-effective instances, so copying subsets to ephemeral instance storage is the economical way to run the simulations. Keeping the data in HDFS on EMR requires a long-running cluster, which is not cost effective for long-term storage, and EBS snapshots (option E) make concurrent analyst access to subsets cumbersome.

A user has suspended the scaling process on the Auto Scaling group. A scaling activity to increase the instance count was already in progress. What effect will the suspension have on that activity?

A.
No effect. The scaling activity continues
B.
Pauses the instance launch and launches it only after Auto Scaling is resumed
C.
Terminates the instance
D.
Stops the instance temporarily
Suggested answer: A

Explanation:

The user may want to stop the automated scaling processes on an Auto Scaling group, either to perform manual operations or during an emergency. To do this, the user can suspend one or more scaling processes at any time. While a process is suspended, Auto Scaling creates no new scaling activities for that group, but scaling activities that were already in progress before the suspension continue until they are complete.

Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AS_Concepts.html

An ecommerce company has an order processing application it wants to migrate to AWS. The application has inconsistent data volume patterns but needs to be available at all times. Orders must be processed as they occur and in the order that they are received.

Which set of steps should a solutions architect take to meet these requirements?

A.
Use AWS Transfer for SFTP and upload orders as they occur. Use On-Demand Instances in multiple Availability Zones for processing.
B.
Use Amazon SNS with FIFO and send orders as they occur. Use a single large Reserved Instance for processing.
C.
Use Amazon SQS with FIFO and send orders as they occur. Use Reserved Instances in multiple Availability Zones for processing.
D.
Use Amazon SQS with FIFO and send orders as they occur. Use Spot Instances in multiple Availability Zones for processing.
Suggested answer: C
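The SQS side of option C is a one-resource CloudFormation fragment (the queue name is illustrative); FIFO queues preserve arrival order and deliver exactly once within a message group, which covers the "processed in the order received" requirement:

```yaml
Resources:
  OrderQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: orders.fifo           # FIFO queue names must end in .fifo
      FifoQueue: true
      ContentBasedDeduplication: true  # deduplicates identical order messages within 5 minutes
```

Producers send every order with the same MessageGroupId to enforce strict ordering across the whole stream, and the Reserved Instances in multiple Availability Zones poll the queue for processing.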

You deployed your company website using Elastic Beanstalk and enabled log file rotation to S3. An Elastic MapReduce job periodically analyzes the logs on S3 to build a usage dashboard that you share with your CIO. You recently improved overall performance of the website by using CloudFront for dynamic content delivery, with your website as the origin. After this architectural change, the usage dashboard shows that the traffic on your website dropped by an order of magnitude.

How do you fix your usage dashboard?

A.
Enable CloudFront to deliver access logs to S3 and use them as input to the Elastic MapReduce job.
B.
Turn on CloudTrail and use trail log files on S3 as input to the Elastic MapReduce job.
C.
Change your log collection process to use CloudWatch ELB metrics as input to the Elastic MapReduce job.
D.
Use Elastic Beanstalk "Rebuild Environment" option to update log delivery to the Elastic Map Reduce job.
E.
Use Elastic Beanstalk "Restart App server(s)" option to update log delivery to the Elastic Map Reduce job.
Suggested answer: A

Explanation:

With CloudFront in front of the website, most requests are served from edge caches and never reach the origin, so the origin's rotated logs no longer reflect total traffic. Enabling CloudFront access logging to S3 and using those logs as input to the Elastic MapReduce job restores an accurate dashboard; rebuilding or restarting the Elastic Beanstalk environment changes nothing about what CloudFront absorbs.

Reference: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html

A media storage application uploads user photos to Amazon S3 for processing. End users are reporting that some uploaded photos are not being processed properly. The Application Developers trace the logs and find that AWS Lambda is experiencing execution issues when thousands of users are on the system simultaneously. Issues are caused by:

Limits around concurrent executions.

The performance of Amazon DynamoDB when saving data.

Which actions can be taken to increase the performance and reliability of the application? (Choose two.)

A.
Evaluate and adjust the read capacity units (RCUs) for the DynamoDB tables.
B.
Evaluate and adjust the write capacity units (WCUs) for the DynamoDB tables.
C.
Add an Amazon ElastiCache layer to increase the performance of Lambda functions.
D.
Configure a dead letter queue that will reprocess failed or timed-out Lambda functions.
E.
Use S3 Transfer Acceleration to provide lower-latency access to end users.
Suggested answer: B, D

Explanation:

Reference:

B: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html

D: https://aws.amazon.com/blogs/compute/robust-serverless-application-design-with-aws-lambda-dlq/
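Options B and D correspond to two small template changes; a sketch under assumed names (the table key, role ARN parameter, and runtime are illustrative, not from the question):

```yaml
Parameters:
  LambdaRoleArn:
    Type: String   # ARN of an existing execution role (assumed); it also needs
                   # sqs:SendMessage permission on the dead letter queue below

Resources:
  PhotosTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - AttributeName: photoId
          AttributeType: S
      KeySchema:
        - AttributeName: photoId
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 5
        WriteCapacityUnits: 100   # raised WCUs for the write-heavy save path (option B)

  FailedInvocations:
    Type: AWS::SQS::Queue         # holds events from failed or timed-out invocations (option D)

  ProcessPhoto:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.12
      Handler: index.handler
      Role: !Ref LambdaRoleArn
      Code:
        ZipFile: |
          def handler(event, context):
              # placeholder for the photo-processing logic
              return event
      DeadLetterConfig:
        TargetArn: !GetAtt FailedInvocations.Arn
```

Note that the dead letter queue captures events from failed asynchronous invocations for later reprocessing; it does not retry them by itself.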

A company is creating a centralized logging service running on Amazon EC2 that will receive and analyze logs from hundreds of AWS accounts. AWS PrivateLink is being used to provide connectivity between the client services and the logging service.

In each AWS account with a client, an interface endpoint has been created for the logging service and is available. The logging service runs on EC2 instances behind a Network Load Balancer (NLB), deployed in different subnets. The clients are unable to submit logs using the VPC endpoint.

Which combination of steps should a solutions architect take to resolve this issue? (Choose two.)

A.
Check that the NACL is attached to the logging service subnet to allow communications to and from the NLB subnets. Check that the NACL is attached to the NLB subnet to allow communications to and from the logging service subnets running on EC2 instances.
B.
Check that the NACL is attached to the logging service subnets to allow communications to and from the interface endpoint subnets. Check that the NACL is attached to the interface endpoint subnet to allow communications to and from the logging service subnets running on EC2 instances.
C.
Check the security group for the logging service running on the EC2 instances to ensure it allows ingress from the NLB subnets.
D.
Check the security group for the logging service running on the EC2 instances to ensure it allows ingress from the clients.
E.
Check the security group for the NLB to ensure it allows ingress from the interface endpoint subnets.
Suggested answer: D, E

A company has asked a Solutions Architect to design a secure content management solution that can be accessed through API calls by external customer applications. The company requires that a customer administrator must be able to submit an API call and roll back changes to existing files sent to the content management solution, as needed. What is the MOST secure deployment design that meets all solution requirements?

A.
Use Amazon S3 for object storage with versioning and bucket access logging enabled, and an IAM role and access policy for each customer application. Encrypt objects using SSE-KMS. Develop the content management application to use a separate AWS KMS key for each customer.
B.
Use Amazon WorkDocs for object storage. Leverage WorkDocs encryption, user access management, and version control. Use AWS CloudTrail to log all SDK actions and create reports of hourly access by using the Amazon CloudWatch dashboard. Enable a revert function in the SDK based on a static Amazon S3 webpage that shows the output of the CloudWatch dashboard.
C.
Use Amazon EFS for object storage, using encryption at rest for the Amazon EFS volume and a customer managed key stored in AWS KMS. Use IAM roles and Amazon EFS access policies to specify separate encryption keys for each customer application. Deploy the content management application to store all new versions as new files in Amazon EFS and use a control API to revert a specific file to a previous version.
D.
Use Amazon S3 for object storage with versioning and enable S3 bucket access logging. Use an IAM role and access policy for each customer application. Encrypt objects using client-side encryption, and distribute an encryption key to all customers when accessing the content management application.
Suggested answer: A
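A sketch of the S3 portion of option A (bucket and key names are illustrative; per-customer keys would be created the same way, one AWS::KMS::Key per customer):

```yaml
Resources:
  CustomerAKey:
    Type: AWS::KMS::Key
    Properties:
      Description: SSE-KMS key dedicated to customer-a

  LogBucket:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: LogDeliveryWrite   # classic ACL that lets S3 deliver access logs

  ContentBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled                 # versioning enables per-object rollback via the API
      LoggingConfiguration:
        DestinationBucketName: !Ref LogBucket
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: aws:kms
              KMSMasterKeyID: !Ref CustomerAKey
```

Rollback then amounts to copying a prior version back over the current object (or deleting a specific version ID), and per-customer IAM policies scope both the bucket prefixes and the kms:Decrypt permission on each customer's key.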