Amazon DOP-C01 Practice Test - Questions Answers, Page 28

Your application's Auto Scaling Group scales up too quickly, too much, and stays scaled when traffic decreases. What should you do to fix this?

A.
Set a longer cooldown period on the Group, so the system stops overshooting the target capacity. The issue is that the scaling system does not allow enough time for new instances to begin servicing requests before measuring aggregate load again.
B.
Calculate the bottleneck or constraint on the compute layer, then select that as the new metric, and set the metric thresholds to the bounding values that begin to affect response latency.
C.
Raise the CloudWatch Alarms threshold associated with your autoscaling group, so the scaling takes more of an increase in demand before beginning.
D.
Use larger instances instead of many smaller ones, so the Group stops scaling out so much and wasting resources at the OS level, since the OS uses a higher proportion of resources on smaller instances.
Suggested answer: B

Explanation:

Systems will always over-scale unless you choose the metric that runs out first and becomes constrained first. You also need to set the thresholds of the metric based on whether or not latency is affected by the change, to justify adding capacity instead of wasting money.

Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/policy_creating.html
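The idea behind answer B can be sketched as a toy decision function. This is not an AWS API call; the metric value and both thresholds are illustrative placeholders for whatever compute-layer constraint (CPU, connections, queue depth) actually bounds latency in a given application:

```python
def scaling_decision(metric_value, scale_out_threshold, scale_in_threshold):
    """Return a scaling action for the constrained metric.

    scale_out_threshold is the hypothetical bounding value at which
    response latency begins to degrade; scale_in_threshold is the value
    below which capacity is clearly idle. Picking the metric that becomes
    constrained first keeps the group from over-scaling.
    """
    if metric_value >= scale_out_threshold:
        return "scale_out"
    if metric_value <= scale_in_threshold:
        return "scale_in"
    return "no_action"
```

With thresholds of 80 and 30, a reading of 50 produces no action, so capacity is only added when latency would actually be affected.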

A DevOps Engineer is building a multi-stage pipeline with AWS CodePipeline to build, verify, stage, test, and deploy an application. There is a manual approval stage required between the test and deploy stages. The Development team uses a team chat tool with webhook support.

How can the Engineer configure status updates for pipeline activity and approval requests to post to the chat tool?

A.
Create an AWS CloudWatch Logs subscription that filters on “detail-type”: “CodePipeline Pipeline Execution State Change.” Forward that to an Amazon SNS topic. Add the chat webhook URL to the SNS topic as a subscriber and complete the subscription validation.
B.
Create an AWS Lambda function that is triggered by the updating of AWS CloudTrail events. When a “CodePipeline Pipeline Execution State Change” event is detected in the updated events, send the event details to the chat webhook URL.
C.
Create an AWS CloudWatch Events rule that filters on “CodePipeline Pipeline Execution State Change.” Forward that to an Amazon SNS topic. Subscribe an AWS Lambda function to the Amazon SNS topic and have it forward the event to the chat webhook URL.
D.
Modify the pipeline code to send event details to the chat webhook URL at the end of each stage. Parameterize the URL so each pipeline can send to a different URL based on the pipeline environment.
Suggested answer: C
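The Lambda function at the end of the chain in answer C could look roughly like this minimal sketch. The webhook URL is a hypothetical placeholder, and the actual HTTP POST is omitted; the parsing reflects how SNS wraps the CloudWatch Events JSON in the `Message` field of each record:

```python
import json

# Hypothetical webhook URL; in a real function this would come from an
# environment variable and be POSTed to with urllib.request or similar.
WEBHOOK_URL = "https://chat.example.com/hooks/abc123"

def build_chat_message(sns_event):
    """Extract the CodePipeline state change from an SNS-delivered event
    and format a short status line for the chat tool."""
    record = sns_event["Records"][0]["Sns"]
    detail = json.loads(record["Message"])["detail"]
    return "Pipeline {} is now {}".format(detail["pipeline"], detail["state"])

def handler(event, context):
    message = build_chat_message(event)
    # POST {"text": message} to WEBHOOK_URL here (omitted in this sketch).
    return message
```

Because the same event pattern also fires on manual approval state changes, this one rule covers both pipeline activity and approval requests.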

Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user Images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? (Choose two.)

A.
Create an Amazon S3 bucket per user, and use your application to generate the S3 URI for the appropriate content.
B.
Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
C.
Authenticate your users at the application level, and use AWS Security Token Service (STS) to grant token-based authorization to S3 objects.
D.
Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user's objects to that bucket.
E.
Use a key-based naming scheme comprised from the user IDs for all user objects in a single Amazon S3 bucket.
Suggested answer: C, E
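The key-naming scheme from answer E can be sketched in a few lines. The `users/` prefix is an illustrative assumption; the point is that a single bucket holds all objects, and an STS-issued policy (answer C) can then scope each authenticated user to their own prefix:

```python
def user_object_key(user_id, filename):
    """Build the S3 object key for a user's photo.

    Prefixing every object with the owning user's ID lets one bucket
    hold all users' images, while a temporary STS credential can carry
    a policy that only allows access to "users/<id>/*".
    """
    return "users/{}/{}".format(user_id, filename)
```

This avoids the per-user-bucket approach of answer A, which would hit the S3 bucket limit long before tens of thousands of users.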

An application runs on Amazon EC2 instances behind an Application Load Balancer. Amazon RDS MySQL is used on the backend. The instances run in an Auto Scaling group across multiple Availability Zones. The Application Load Balancer health check ensures the web servers are operating and able to make read/write SQL connections. Amazon Route 53 provides DNS functionality with a record pointing to the Application Load Balancer. A new policy requires a geographically isolated disaster recovery site with an RTO of 4 hours and an RPO of 15 minutes. Which disaster recovery strategy will require the LEAST amount of changes to the application stack?

A.
Launch a replica stack of everything except RDS in a different Availability Zone. Create an RDS read-only replica in a new Availability Zone and configure the new stack to point to the local RDS instance. Add the new stack to the Route 53 record set with a failover routing policy.
B.
Launch a replica stack of everything except RDS in a different region. Create an RDS read-only replica in a new region and configure the new stack to point to the local RDS instance. Add the new stack to the Route 53 record set with a latency routing policy.
C.
Launch a replica stack of everything except RDS in a different region. Upon failure, copy the snapshot over from the primary region to the disaster recovery region. Adjust the Amazon Route 53 record set to point to the disaster recovery region's Application Load Balancer.
D.
Launch a replica stack of everything except RDS in a different region. Create an RDS read-only replica in a new region and configure the new stack to point to the local RDS instance. Add the new stack to the Amazon Route 53 record set with a failover routing policy.
Suggested answer: D
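The failover routing policy from answer D maps onto a Route 53 change batch like the sketch below. The domain name, ALB DNS names, and hosted zone IDs are illustrative placeholders, but the field names follow the ChangeResourceRecordSets alias-record structure:

```python
def failover_record(name, alb_dns, alb_zone_id, role):
    """Build one Route 53 failover alias record for a change batch.

    role is "PRIMARY" or "SECONDARY"; alb_zone_id is the load
    balancer's canonical hosted zone ID (values here are placeholders).
    """
    assert role in ("PRIMARY", "SECONDARY")
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "A",
            "SetIdentifier": "app-" + role.lower(),
            "Failover": role,
            "AliasTarget": {
                "HostedZoneId": alb_zone_id,
                "DNSName": alb_dns,
                "EvaluateTargetHealth": True,
            },
        },
    }

change_batch = {"Changes": [
    failover_record("app.example.com.",
                    "primary-alb.us-east-1.elb.amazonaws.com.",
                    "ZEXAMPLE1", "PRIMARY"),
    failover_record("app.example.com.",
                    "dr-alb.us-west-2.elb.amazonaws.com.",
                    "ZEXAMPLE2", "SECONDARY"),
]}
```

Because the existing ALB health check already verifies read/write SQL connectivity, Route 53 fails over automatically when the primary region's stack stops passing it; only the replica promotion is a manual step, which fits within the 4-hour RTO.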

A DevOps engineer wants to deploy a serverless web application that is based on AWS Lambda. The deployment must meet the following requirements:

Provide staging and production environments.

Restrict developers from accessing the production environment.

Avoid hardcoding passwords in the Lambda functions.

Store source code in AWS CodeCommit.

Use AWS CodePipeline to automate the deployment.

What is the MOST operationally efficient solution that meets these requirements?

A.
Create separate staging and production accounts to segregate deployment targets. Use AWS Key Management Service (AWS KMS) to store environment-specific values. Use CodePipeline to automate deployments with AWS CodeDeploy.
B.
Create separate staging and production accounts to segregate deployment targets. Use Lambda environment variables to store environment-specific values. Use CodePipeline to automate deployments with AWS CodeDeploy.
C.
Define tagging conventions for staging and production environments to segregate deployment targets. Use AWS Key Management Service (AWS KMS) to store environment-specific values. Use CodePipeline to automate deployments with AWS CodeDeploy.
D.
Define tagging conventions for staging and production environments to segregate deployment targets. Use Lambda environment variables to store environment-specific values. Use CodePipeline to automate deployments with AWS CodeDeploy.
Suggested answer: A


An application is deployed on Amazon EC2 instances running in an Auto Scaling group. During the bootstrapping process, the instances register their private IP addresses with a monitoring system. The monitoring system performs health checks frequently by sending ping requests to those IP addresses and sending alerts if an instance becomes non-responsive. The existing deployment strategy replaces the current EC2 instances with new ones. A DevOps Engineer has noticed that the monitoring system is sending false alarms during a deployment, and is tasked with stopping these false alarms. Which solution will meet these requirements without affecting the current deployment method?

A.
Define an Amazon CloudWatch Events target, an AWS Lambda function, and a lifecycle hook attached to the Auto Scaling group. Configure CloudWatch Events to invoke Amazon SNS to send a message to the Systems Administrator group for remediation.
B.
Define an AWS Lambda function and a lifecycle hook attached to the Auto Scaling group. Configure the lifecycle hook to invoke the Lambda function, which removes the entry of the private IP from the monitoring system upon instance termination.
C.
Define an Amazon CloudWatch Events target, an AWS Lambda function, and a lifecycle hook attached to the Auto Scaling group. Configure CloudWatch Events to invoke the Lambda function, which removes the entry of the private IP from the monitoring system upon instance termination.
D.
Define an AWS Lambda function that will run a script when instance termination occurs in an Auto Scaling group. The script will remove the entry of the private IP from the monitoring system.
Suggested answer: C

Explanation:

Reference: https://aws.amazon.com/blogs/compute/using-aws-lambda-with-auto-scaling-lifecycle-hooks/
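A minimal sketch of the Lambda function from answer C is below. The event shape follows the "EC2 Instance-terminate Lifecycle Action" CloudWatch Events detail; the monitoring-system deregistration and the `complete_lifecycle_action` call are described in comments rather than implemented, since both depend on APIs outside this question:

```python
def parse_lifecycle_event(event):
    """Pull the fields needed to deregister the instance and complete
    the lifecycle action from an EC2 Instance-terminate lifecycle event."""
    detail = event["detail"]
    return {
        "instance_id": detail["EC2InstanceId"],
        "hook_name": detail["LifecycleHookName"],
        "asg_name": detail["AutoScalingGroupName"],
        "token": detail["LifecycleActionToken"],
    }

def handler(event, context):
    info = parse_lifecycle_event(event)
    # 1. Look up the instance's private IP and remove it from the
    #    monitoring system (API call omitted in this sketch).
    # 2. Call the Auto Scaling complete_lifecycle_action API with the
    #    hook name, group name, and token so termination can proceed.
    return info["instance_id"]
```

The lifecycle hook holds the instance in `Terminating:Wait` until the function completes the action, so the IP is removed from monitoring before the instance actually disappears and no false alarm fires.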

When specifying multiple variable names and values for a playbook on the command line, which of the following is the correct syntax?

A.
ansible-playbook playbook.yml -e 'host="foo" pkg="bar"'
B.
ansible-playbook playbook.yml -e 'host: "foo", pkg: "bar"'
C.
ansible-playbook playbook.yml -e 'host="foo"' -e 'pkg="bar"'
D.
ansible-playbook playbook.yml --extra-vars "host=foo", "pkg=bar"
Suggested answer: A

Explanation:

Variables are passed with a single command-line parameter, '-e' or '--extra-vars'. They are sent as a single string to the playbook and are space delimited. Because of the space delimiter, variable values must be enclosed in quotes. Additionally, proper JSON or YAML can be passed, such as: -e '{"key": "name", "array": ["value1", "value2"]}'.

Reference: http://docs.ansible.com/ansible/playbooks_variables.html#passing-variables-on-the-commandline

Your company releases new features with high frequency while demanding high application availability. As part of the application's A/B testing, logs from each updated Amazon EC2 instance of the application need to be analyzed in near real time, to ensure that the application is working flawlessly after each deployment. If the logs show any anomalous behavior, then the application version of the instance is changed to a more stable one. Which of the following methods should you use for shipping and analyzing the logs in a highly available manner?

A.
Ship the logs to Amazon S3 for durability and use Amazon EMR to analyze the logs in a batch manner each hour.
B.
Ship the logs to Amazon CloudWatch Logs and use Amazon EMR to analyze the logs in a batch manner each hour.
C.
Ship the logs to an Amazon Kinesis stream and have the consumers analyze the logs in a live manner.
D.
Ship the logs to a large Amazon EC2 instance and analyze the logs in a live manner.
E.
Store the logs locally on each instance and then have an Amazon Kinesis stream pull the logs for live analysis.
Suggested answer: C
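The consumer side of answer C can be sketched with the batch shape a Lambda-based Kinesis consumer receives (records carry base64-encoded data). The `"ERROR"` marker is an illustrative placeholder for whatever signal the A/B analysis actually looks for:

```python
import base64

def anomalous_lines(kinesis_batch, marker="ERROR"):
    """Scan a batch of Kinesis records (base64-encoded log lines) and
    return the lines containing the anomaly marker, so the caller can
    decide whether to roll the instance back to a stable version."""
    hits = []
    for record in kinesis_batch["Records"]:
        line = base64.b64decode(record["kinesis"]["data"]).decode("utf-8")
        if marker in line:
            hits.append(line)
    return hits
```

Kinesis keeps the pipeline highly available and near real time, unlike the hourly batch options (A, B) or the single-EC2-instance bottleneck (D).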

You have been given a business requirement to retain log files for your application for 10 years. You need to regularly retrieve the most recent logs for troubleshooting. Your logging system must be cost-effective, given the large volume of logs.

What technique should you use to meet these requirements?

A.
Store your logs in Amazon CloudWatch Logs.
B.
Store your logs in Amazon Glacier.
C.
Store your logs in Amazon S3, and use lifecycle policies to archive to Amazon Glacier.
D.
Store your logs in HDFS on an Amazon EMR cluster.
E.
Store your logs on Amazon EBS, and use Amazon EBS snapshots to archive them.
Suggested answer: C
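The lifecycle policy from answer C might look like the configuration below, shaped for the S3 `put_bucket_lifecycle_configuration` API. The `logs/` prefix and the 30-day transition window are illustrative assumptions; only the 10-year retention comes from the requirement:

```python
def log_lifecycle_config(prefix="logs/", glacier_after_days=30,
                         expire_after_days=3650):
    """Lifecycle configuration that keeps recent logs in S3 for quick
    retrieval, transitions them to Glacier for cheap long-term storage,
    and expires them after roughly 10 years."""
    return {
        "Rules": [
            {
                "ID": "archive-then-expire-logs",
                "Filter": {"Prefix": prefix},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": glacier_after_days, "StorageClass": "GLACIER"}
                ],
                "Expiration": {"Days": expire_after_days},
            }
        ]
    }
```

This keeps the most recent logs immediately retrievable from S3 while the bulk of the 10-year archive sits in Glacier at a fraction of the cost.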

A Security team requires all Amazon EBS volumes that are attached to an Amazon EC2 instance to have AWS Key Management Service (AWS KMS) encryption enabled. If encryption is not enabled, the company's policy requires the EBS volume to be detached and deleted. A DevOps Engineer must automate the detection and deletion of unencrypted EBS volumes. Which method should the Engineer use to accomplish this with the LEAST operational effort?

A.
Create an Amazon CloudWatch Events rule that invokes an AWS Lambda function when an EBS volume is created. The Lambda function checks the EBS volume for encryption. If encryption is not enabled and the volume is attached to an instance, the function deletes the volume.
B.
Create an AWS Lambda function to describe all EBS volumes in the region and identify volumes that are attached to an EC2 instance without encryption enabled. The function then deletes all noncompliant volumes. The AWS Lambda function is invoked every 5 minutes by an Amazon CloudWatch Events scheduled rule.
C.
Create a rule in AWS Config to check for unencrypted and attached EBS volumes. Subscribe an AWS Lambda function to the Amazon SNS topic that AWS Config sends change notifications to. The Lambda function checks the change notification and deletes any EBS volumes that are non-compliant.
D.
Launch an EC2 instance with an IAM role that has permissions to describe and delete volumes. Run a script on the EC2 instance every 5 minutes to describe all EBS volumes in all regions and identify volumes that are attached without encryption enabled. The script then deletes those volumes.
Suggested answer: C
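The compliance check inside the Lambda function from answer C reduces to a small filter over volume descriptions. The sketch below uses the `Encrypted` and `Attachments` fields from the EC2 DescribeVolumes response shape and leaves the actual delete call as a comment:

```python
def noncompliant_volume_ids(describe_volumes_response):
    """Return the IDs of EBS volumes that are attached to an instance
    but not encrypted; per the policy, these must be detached and
    deleted (the detach/delete API calls are omitted in this sketch)."""
    bad = []
    for vol in describe_volumes_response["Volumes"]:
        if not vol.get("Encrypted") and vol.get("Attachments"):
            bad.append(vol["VolumeId"])
    return bad
```

With AWS Config driving the evaluation, this logic only runs on change notifications instead of polling every 5 minutes, which is what makes answer C the least operational effort.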