Amazon DOP-C01 Practice Test - Questions Answers, Page 43

You have a high-traffic application running behind a load balancer with clients that are very sensitive to latency. How should you determine which back-end Amazon Elastic Compute Cloud application instances are causing increased latency so that they can be replaced?

A. By using the Elastic Load Balancing Latency CloudWatch metric.
B. By using the HTTP X-Forwarded-For header for requests from the load balancer.
C. By running a distributed load test to the load balancer.
D. By using the load balancer access logs.
Suggested answer: D
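
As an illustration of answer D, the sketch below parses Classic Load Balancer access log lines and averages the backend processing time per back-end instance, which is what exposes a slow instance. The field positions assume the documented Classic ELB access log layout, and the log file name is a placeholder.

```python
from collections import defaultdict

# Minimal sketch: aggregate backend_processing_time per back-end instance
# from Classic Load Balancer access logs. Field positions assume the
# documented Classic ELB log layout:
#   0: timestamp  1: elb name  2: client:port  3: backend:port
#   4: request_processing_time  5: backend_processing_time  ...
# The file name below is a placeholder.
latency_sum = defaultdict(float)
request_count = defaultdict(int)

with open("elb-access.log") as log:
    for line in log:
        fields = line.split()
        if len(fields) < 6:
            continue  # skip malformed lines
        backend = fields[3].split(":")[0]   # back-end instance IP address
        backend_time = float(fields[5])     # seconds spent in the back end
        if backend_time < 0:
            continue  # -1 means the connection to the back end failed
        latency_sum[backend] += backend_time
        request_count[backend] += 1

# Print back ends ordered by average back-end latency, slowest first.
for backend in sorted(latency_sum,
                      key=lambda b: latency_sum[b] / request_count[b],
                      reverse=True):
    avg = latency_sum[backend] / request_count[backend]
    print(f"{backend}: {avg * 1000:.1f} ms average over {request_count[backend]} requests")
```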

You would like to run automated, continuous application level integration tests on all members of an Auto Scaling group. Which two options should you use? (Choose two.)

A. Use the AWS SDK to run the DescribeInstances API call on the Auto Scaling group, and then iterate over the members and remotely connect to each Amazon EC2 instance and run the integration tests.
B. Use the AWS SDK to run the DescribeAutoScalingInstances API call on the Auto Scaling group, iterate over the members using the DescribeInstances API call, remotely connect to each Amazon EC2 instance, and then run the integration tests.
C. Set up a custom CloudWatch metric with the output of your integration tests that are run by a scheduled process on each instance, and then set up a CloudWatch alert for any failures.
D. Using an Auto Scaling group lifecycle policy, define a scheduled task to run integration tests when a new Amazon EC2 instance enters the InService state.
E. Set up a custom CloudWatch metric that uses the output of the DescribeAutoScalingInstances API call to determine the HealthCheck status of the Amazon EC2 instances.
F. Using the Auto Scaling group lifecycle policy, define a scheduled task to run integration tests on individual instances using the Amazon EC2 user data to export test data to CloudWatch Logs.
Suggested answer: B, C
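
For answer C, a scheduled process on each instance could publish its integration test result as a custom CloudWatch metric and an alarm could watch for failures. The boto3 sketch below is one possible shape; the namespace, metric name, alarm settings, and instance ID are illustrative, not prescribed by the question.

```python
import boto3

# Minimal sketch for answer C: publish an integration test result as a custom
# CloudWatch metric (1 = pass, 0 = fail) and alarm on failures. Namespace,
# metric name, and thresholds are illustrative choices, not AWS defaults.
cloudwatch = boto3.client("cloudwatch")
INSTANCE_ID = "i-0123456789abcdef0"   # placeholder; normally read from instance metadata

def publish_test_result(passed: bool) -> None:
    cloudwatch.put_metric_data(
        Namespace="MyApp/IntegrationTests",
        MetricData=[{
            "MetricName": "TestsPassing",
            "Dimensions": [{"Name": "InstanceId", "Value": INSTANCE_ID}],
            "Value": 1.0 if passed else 0.0,
        }],
    )

def create_failure_alarm() -> None:
    # Alarm when the most recent test run on this instance reported a failure.
    cloudwatch.put_metric_alarm(
        AlarmName=f"integration-tests-failed-{INSTANCE_ID}",
        Namespace="MyApp/IntegrationTests",
        MetricName="TestsPassing",
        Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
        Statistic="Minimum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=1.0,
        ComparisonOperator="LessThanThreshold",
        TreatMissingData="breaching",   # a missing run also counts as a failure
    )
```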

You have an application running on Amazon EC2 in an Auto Scaling group. Instances are being bootstrapped dynamically, and the bootstrapping takes over 15 minutes to complete. You find that instances are reported by Auto Scaling as being In Service before bootstrapping has completed. You are receiving application alarms related to new instances before they have completed bootstrapping, which is causing confusion. You find the cause: your application monitoring tool is polling the Auto Scaling Service API for instances that are In Service, and creating alarms for new previously unknown instances. Which of the following will ensure that new instances are not added to your application monitoring tool before bootstrapping is completed?

A. Create an Auto Scaling group lifecycle hook to hold the instance in a Pending:Wait state until your bootstrapping is complete. Once bootstrapping is complete, notify Auto Scaling to complete the lifecycle hook and move the instance into a Pending:Complete state.
B. Use the default Amazon CloudWatch application metrics to monitor your application's health. Configure an Amazon SNS topic to send these CloudWatch alarms to the correct recipients.
C. Tag all instances on launch to identify that they are in a pending state. Change your application monitoring tool to look for this tag before adding new instances, and then use the Amazon API to set the instance state to 'pending' until bootstrapping is complete.
D. Increase the desired number of instances in your Auto Scaling group configuration to reduce the time it takes to bootstrap future instances.
Suggested answer: A
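
To make answer A concrete: the last step of the bootstrap script can tell Auto Scaling that the lifecycle hook is finished, which releases the instance from the Pending:Wait state. The boto3 call below uses the standard CompleteLifecycleAction API; the hook name, group name, and instance ID are placeholders.

```python
import boto3

# Minimal sketch for answer A: at the end of the bootstrap script, signal
# Auto Scaling that the launch lifecycle hook can complete so the instance
# leaves the Pending:Wait state. Hook/group names and the instance ID are
# placeholders; the instance ID would normally come from instance metadata.
autoscaling = boto3.client("autoscaling")

autoscaling.complete_lifecycle_action(
    LifecycleHookName="bootstrap-hook",
    AutoScalingGroupName="web-asg",
    InstanceId="i-0123456789abcdef0",
    LifecycleActionResult="CONTINUE",   # or "ABANDON" if bootstrapping failed
)
```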

If designing a single playbook to run across multiple Linux distributions that have distribution-specific commands, what would be the best method to allow a successful run?

A. Enable fact gathering and use the 'when' conditional to match the distribution to the task.
B. This is not possible; a separate playbook for each target Linux distribution is required.
C. Use 'ignore_errors: true' in the tasks.
D. Use the 'shell' module to write your own checks for each command that is run.
Suggested answer: A

Explanation:

Ansible provides a method to run a task only when a condition is met, using the 'when' declarative. With fact gathering enabled, the play has access to the distribution name of the Linux system, so tasks can be tailored to a specific distribution and run only when the condition is met, e.g. '- when: ansible_os_family == "Debian"'.

Reference: http://docs.ansible.com/ansible/playbooks_conditionals.html

A company is using AWS CodePipeline to automate its release pipeline. AWS CodeDeploy is being used in the pipeline to deploy an application to Amazon ECS using the blue/green deployment model. The company wants to implement scripts to test the green version of the application before shifting traffic. These scripts will complete in 5 minutes or less. If errors are discovered during these tests, the application must be rolled back. Which strategy will meet these requirements?

A. Add a stage to the CodePipeline pipeline between the source and deploy stages. Use AWS CodeBuild to create an execution environment and build commands in the buildspec file to invoke test scripts. If errors are found, use the aws deploy stop-deployment command to stop the deployment.
B. Add a stage to the CodePipeline pipeline between the source and deploy stages. Use this stage to execute an AWS Lambda function that will run the test scripts. If errors are found, use the aws deploy stop-deployment command to stop the deployment.
C. Add a hooks section to the CodeDeploy AppSpec file. Use the AfterAllowTestTraffic lifecycle event to invoke an AWS Lambda function to run the test scripts. If errors are found, exit the Lambda function with an error to trigger rollback.
D. Add a hooks section to the CodeDeploy AppSpec file. Use the AfterAllowTraffic lifecycle event to invoke the test scripts. If errors are found, use the aws deploy stop-deployment CLI command to stop the deployment.
Suggested answer: C

Explanation:

Reference: https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html
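
A rough sketch of the Lambda hook behind answer C: CodeDeploy invokes the function during the AfterAllowTestTraffic event, the function runs its checks against the green task set, and it then reports Succeeded or Failed back to CodeDeploy; a Failed status triggers the rollback before production traffic is shifted. The run_smoke_tests helper is hypothetical.

```python
import boto3

codedeploy = boto3.client("codedeploy")

def run_smoke_tests() -> bool:
    """Hypothetical placeholder for the test scripts run against the green tasks."""
    return True

def handler(event, context):
    # CodeDeploy passes the deployment ID and the hook execution ID in the event.
    deployment_id = event["DeploymentId"]
    hook_execution_id = event["LifecycleEventHookExecutionId"]

    status = "Succeeded" if run_smoke_tests() else "Failed"

    # Reporting "Failed" for AfterAllowTestTraffic causes CodeDeploy to roll back
    # the blue/green deployment before production traffic is shifted.
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=deployment_id,
        lifecycleEventHookExecutionId=hook_execution_id,
        status=status,
    )
```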

You run a SIP-based telephony application that uses Amazon EC2 for its web tier and uses MySQL on Amazon RDS as its database. The application stores only the authentication profile data for its existing users in the database and therefore is read-intensive. Your monitoring system shows that your web instances and the database have high CPU utilization. Which of the following steps should you take in order to ensure the continual availability of your application? (Choose two.)

A. Use a CloudFront RTMP download distribution with the application tier as the origin for the distribution.
B. Set up an Auto Scaling group for the application tier and a policy that scales based on the Amazon EC2 CloudWatch CPU utilization metric.
C. Vertically scale up the Amazon EC2 instances manually.
D. Set up an Auto Scaling group for the application tier and a policy that scales based on the Amazon RDS CloudWatch CPU utilization metric.
E. Switch to General Purpose (SSD) Storage from Provisioned IOPS Storage (PIOPS) for the Amazon RDS database.
F. Use multiple Amazon RDS read replicas.
Suggested answer: B, F
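
For answers B and F, the boto3 sketch below shows the two pieces: a target-tracking scaling policy keyed to the Auto Scaling group's average CPU for the web tier, and a read replica for the read-heavy RDS MySQL database. All resource names and the 50% target are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")
rds = boto3.client("rds")

# Answer B: scale the web tier on the Auto Scaling group's average CPU utilization.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="sip-web-asg",              # placeholder group name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,                         # illustrative target
    },
)

# Answer F: offload the read-heavy authentication lookups to a read replica.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="sip-auth-db-replica-1",    # placeholder replica identifier
    SourceDBInstanceIdentifier="sip-auth-db",        # placeholder source instance
)
```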

A DevOps Engineer has been asked by the Security team to ensure that AWS CloudTrail files are not tampered with after being created. Currently, there is a process with multiple trails, using AWS IAM to restrict access to specific trails. The Security team wants to ensure they can trace the integrity of each file and make sure there has been no tampering. Which option will require the LEAST effort to implement and ensure the legitimacy of the file while allowing the Security team to prove the authenticity of the logs?

A. Create an Amazon CloudWatch Events rule that triggers an AWS Lambda function when a new file is delivered. Configure the Lambda function to perform an MD5 hash check on the file, store the name and location of the file, and post the returned hash to an Amazon DynamoDB table. The Security team can use the values stored in DynamoDB to verify the file authenticity.
B. Enable the CloudTrail file integrity feature on an Amazon S3 bucket. Create an IAM policy that grants the Security team access to the file integrity logs stored in the S3 bucket.
C. Enable the CloudTrail file integrity feature on the trail. Use the digest file created by CloudTrail to verify the integrity of the delivered CloudTrail files.
D. Create an AWS Lambda function that is triggered each time a new file is delivered to the CloudTrail bucket. Configure the Lambda function to execute an MD5 hash check on the file, and store the result on a tag in an Amazon S3 object. The Security team can use the information on the tag to verify the integrity of the file.
Suggested answer: C
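
Answer C relies on CloudTrail log file integrity validation, which delivers signed digest files alongside the log files. A minimal boto3 sketch for turning it on is below; the trail name is a placeholder, and verification of the delivered files is typically done with the aws cloudtrail validate-logs CLI command against those digest files.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")
TRAIL_NAME = "security-trail"   # placeholder trail name

# Turn on log file integrity validation; CloudTrail then delivers hourly,
# digitally signed digest files referencing the hash of each log file.
cloudtrail.update_trail(Name=TRAIL_NAME, EnableLogFileValidation=True)

# Confirm the setting on the trail.
for trail in cloudtrail.describe_trails(trailNameList=[TRAIL_NAME])["trailList"]:
    print(trail["Name"], "log file validation enabled:", trail["LogFileValidationEnabled"])

# Verification of the delivered files is usually done from the CLI, e.g.:
#   aws cloudtrail validate-logs --trail-arn <trail-arn> --start-time <timestamp>
```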

You are building an AWS CloudFormation template for a multi-tier web application. The user data of your Linux web server resource contains a complex script that can take a long time to run. Which techniques could you use to ensure that these servers are fully configured and running before attaching them to the load balancer? (Choose two.)

A. Launch your Linux servers from a nested stack that is called from within the load balancer resource in your AWS CloudFormation template.
B. Add an AWS CloudFormation Wait Condition that depends on the web server resource. When the UserData script finishes on the web servers, use curl to send a signal to the Wait Condition at http://169.254.169.254/waithandle/.
C. Add an AWS CloudFormation Wait Condition that depends on the web server resource. When the UserData script finishes on the web servers, use curl to signal to the Wait Condition pre-signed URL that they are ready.
D. In your AWS CloudFormation template, position the load balancer resource JSON block directly below your Linux server resource.
E. Add an AWS CloudFormation Wait Condition that depends on the web server resource. When the UserData script finishes on the web servers, use the command "cfn-signal" to signal that they are ready.
Suggested answer: C, E
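
To make answers C and E concrete, the fragment below (expressed as a Python dictionary and printed as the resource JSON the question refers to) sketches the wait condition pieces; the resource names are hypothetical, and the UserData script would end by signalling the handle's pre-signed URL with either curl or cfn-signal.

```python
import json

# Sketch of the wait condition resources behind answers C and E. Resource names
# (WebServer, WebServerWaitHandle, WebServerWaitCondition) are hypothetical.
wait_resources = {
    "WebServerWaitHandle": {
        "Type": "AWS::CloudFormation::WaitConditionHandle"
    },
    "WebServerWaitCondition": {
        "Type": "AWS::CloudFormation::WaitCondition",
        "DependsOn": "WebServer",
        "Properties": {
            "Handle": {"Ref": "WebServerWaitHandle"},
            "Timeout": "3600"   # seconds to wait for the UserData signal
        }
    }
}

# The UserData script on the WebServer resource would finish with one of:
#   cfn-signal -e $? '<pre-signed wait handle URL>'                   (answer E)
#   curl -X PUT -d '<success JSON>' '<pre-signed wait handle URL>'    (answer C)
# where the pre-signed URL comes from {"Ref": "WebServerWaitHandle"}.
print(json.dumps(wait_resources, indent=2))
```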

You have Amazon CloudTrail logging to an Amazon S3 bucket and are looking at your most recent log file. You notice that the entries include the ListThings and CreateThings actions and wonder if your devices have been hacked. Based on these entries, what service would you be concerned may have been hacked?

A. Amazon Inspector
B. AWS IoT
C. AWS CodePipeline
D. Amazon Glacier
Suggested answer: B

Explanation:

AWS IoT (Internet of Things) is integrated with CloudTrail to capture API calls from the AWS IoT console or from your code to the AWS IoT APIs. AWS IoT provides secure, bi-directional communication between Internet-connected things (such as sensors, actuators, embedded devices, or smart appliances) and the AWS cloud. Using the information collected by CloudTrail, you can determine the request that was made to AWS IoT, the source IP address from which the request was made, who made the request, when it was made, and so on.

Reference: http://docs.aws.amazon.com/iot/latest/developerguide/monitoring_overview.html#iot-usingcloudtrail
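
As a small illustration of the explanation above, the sketch below downloads one delivered CloudTrail log file (a gzipped JSON object with a Records array) and prints any AWS IoT API calls, such as ListThings, along with who made them and from where. The bucket and key are placeholders.

```python
import gzip
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "my-cloudtrail-bucket"            # placeholder bucket name
KEY = "AWSLogs/.../CloudTrail/...json.gz"  # placeholder key of one delivered log file

# CloudTrail delivers gzipped JSON files containing a top-level "Records" list.
body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()
records = json.loads(gzip.decompress(body))["Records"]

# Print who called the AWS IoT APIs (eventSource iot.amazonaws.com), and from where.
for record in records:
    if record.get("eventSource") == "iot.amazonaws.com":
        print(record["eventTime"],
              record["eventName"],
              record.get("sourceIPAddress"),
              record.get("userIdentity", {}).get("arn"))
```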

When specifying more than one conditional requirement for a task, what is the proper method?

A. - when: foo == "hello" and bar == "world"
B. - when: foo == "hello" - when: bar == "world"
C. - when: foo == "hello" && bar == "world"
D. - when: foo is "hello" and bar is "world"
Suggested answer: A

Explanation:

Ansible allows you to combine conditionals using 'and' and 'or'. The conditions must appear in the same 'when' statement, comparisons must use '==' for equals or '!=' for not equals, and the operators must be written as 'and'/'or', not '&&'/'||'.

Reference: http://docs.ansible.com/ansible/playbooks_conditionals.html#the-when-statement
