Amazon DOP-C01 Practice Test - Questions Answers, Page 38

A DevOps Engineer is using AWS CodeDeploy across a fleet of Amazon EC2 instances in an EC2 Auto Scaling group. The associated CodeDeploy deployment group, which is integrated with EC2 Auto Scaling, is configured to perform in-place deployments with CodeDeployDefault.OneAtATime. During an ongoing new deployment, the Engineer discovers that, although the overall deployment finished successfully, two out of five instances have the previous application revision deployed. The other three instances have the newest application revision. What is likely causing this issue?

A. The two affected instances failed to fetch the new deployment.
B. A failed AfterInstall lifecycle event hook caused the CodeDeploy agent to roll back to the previous version on the affected instances.
C. The CodeDeploy agent was not installed on the two affected instances.
D. EC2 Auto Scaling launched two new instances while the new deployment had not yet finished, causing the previous version to be deployed on the affected instances.
Suggested answer: D
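
For illustration, one way to bring the two outdated instances back in line is to run a follow-up deployment that targets only instances still running an older revision. The boto3 sketch below is a minimal, hypothetical example; the application, deployment group, and S3 revision names are placeholders.

    import boto3

    codedeploy = boto3.client("codedeploy", region_name="us-east-1")

    # Redeploy the latest revision, but only to instances that are still running
    # an older revision (for example, instances EC2 Auto Scaling launched while
    # the original deployment was in progress).
    response = codedeploy.create_deployment(
        applicationName="my-web-app",                    # placeholder
        deploymentGroupName="my-deployment-group",       # placeholder
        deploymentConfigName="CodeDeployDefault.OneAtATime",
        revision={
            "revisionType": "S3",
            "s3Location": {
                "bucket": "my-artifacts-bucket",         # placeholder
                "key": "my-web-app/app-v2.zip",          # placeholder
                "bundleType": "zip",
            },
        },
        updateOutdatedInstancesOnly=True,
        description="Catch up instances launched during the previous deployment",
    )
    print(response["deploymentId"])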

A DevOps team needs to query information in application logs that are generated by an application running on multiple Amazon EC2 instances deployed with AWS Elastic Beanstalk. Instance log streaming to Amazon CloudWatch Logs was enabled on Elastic Beanstalk. Which approach would be the MOST cost-efficient?

A. Use a CloudWatch Logs subscription to trigger an AWS Lambda function to send the log data to an Amazon Kinesis Data Firehose stream that has an Amazon S3 bucket destination. Use Amazon Athena to query the log data from the bucket.
B. Use a CloudWatch Logs subscription to trigger an AWS Lambda function to send the log data to an Amazon Kinesis Data Firehose stream that has an Amazon S3 bucket destination. Use a new Amazon Redshift cluster and Amazon Redshift Spectrum to query the log data from the bucket.
C. Use a CloudWatch Logs subscription to send the log data to an Amazon Kinesis Data Firehose stream that has an Amazon S3 bucket destination. Use Amazon Athena to query the log data from the bucket.
D. Use a CloudWatch Logs subscription to send the log data to an Amazon Kinesis Data Firehose stream that has an Amazon S3 bucket destination. Use a new Amazon Redshift cluster and Amazon Redshift Spectrum to query the log data from the bucket.
Suggested answer: C
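
As a sketch of the suggested approach (option C), the subscription filter can stream the Elastic Beanstalk log group straight to a Kinesis Data Firehose delivery stream that writes to S3, with no Lambda hop. All names and ARNs below are placeholders.

    import boto3

    logs = boto3.client("logs", region_name="us-east-1")

    # Forward every log event from the application log group to Firehose,
    # which buffers and delivers it to the S3 bucket queried by Athena.
    logs.put_subscription_filter(
        logGroupName="/aws/elasticbeanstalk/my-env/var/log/web.stdout.log",  # placeholder
        filterName="to-firehose",
        filterPattern="",  # empty pattern matches all events
        destinationArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/app-logs",
        roleArn="arn:aws:iam::123456789012:role/CWLtoFirehoseRole",
    )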

A company updated the AWS CloudFormation template for a critical business application. The stack update process failed due to an error in the updated template, and AWS CloudFormation automatically began the stack rollback process. Later, a DevOps engineer discovered that the application was still unavailable and that the stack was in the UPDATE_ROLLBACK_FAILED state. Which combination of actions should the DevOps engineer perform so that the stack rollback can complete successfully? (Choose two.)

A. Attach the AWSCloudFormationFullAccess IAM policy to the AWS CloudFormation role.
B. Automatically recover the stack resources using AWS CloudFormation drift detection.
C. Issue a ContinueUpdateRollback command from the AWS CloudFormation console or the AWS CLI.
D. Manually adjust the resources to match the expectations of the stack.
E. Update the existing AWS CloudFormation stack using the original template.
Suggested answer: C, D

Explanation:

Reference: https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-update-rollback-failed/
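
As a sketch, once the resources that blocked the rollback have been fixed manually (action D), the rollback can be resumed with ContinueUpdateRollback (action C). The stack name and logical ID below are placeholders.

    import boto3

    cfn = boto3.client("cloudformation", region_name="us-east-1")

    # Resume the rollback; ResourcesToSkip is optional and should only list
    # resources that cannot be returned to their original state.
    cfn.continue_update_rollback(
        StackName="critical-business-app",             # placeholder
        ResourcesToSkip=["ProblemResourceLogicalId"],  # placeholder
    )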

Which statement is true about configuring proxy support for the Amazon Inspector agent on a Windows-based system?

A. The Amazon Inspector agent supports proxy usage on Windows-based systems through the use of the WinHTTP proxy.
B. The Amazon Inspector agent supports proxy usage on Linux-based systems but not on Windows.
C. Amazon Inspector proxy support on Windows-based systems is achieved by installing a proxy-enabled version of the agent, which comes with preconfigured files that you need to edit to match your environment.
D. The Amazon Inspector agent supports proxy usage on Windows-based systems through the awsagent.env configuration file.
Suggested answer: A

Explanation:

Proxy support for AWS agents is achieved through the use of the WinHTTP proxy.

Reference:

https://docs.aws.amazon.com/inspector/latest/userguide/inspector_agents-on-win.html#inspectoragent-proxy

A company has built a web service that runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The company has deployed the application in us-east-1. Amazon Route 53 provides external DNS that routes traffic from example.com to the application, with appropriate health checks configured.

The company has deployed a second environment for the application in eu-west-1. The company wants traffic to be routed to whichever environment results in the best response time for each user. If there is an outage in one Region, traffic should be directed to the other environment.

Which configuration will achieve these requirements?

A. A subdomain us.example.com with weighted routing: the US ALB with weight 2 and the EU ALB with weight 1. Another subdomain eu.example.com with weighted routing: the EU ALB with weight 2 and the US ALB with weight 1. Geolocation routing records for example.com: North America aliased to us.example.com and Europe aliased to eu.example.com.
B. A subdomain us.example.com with latency-based routing: the US ALB as the first target and the EU ALB as the second target. Another subdomain eu.example.com with latency-based routing: the EU ALB as the first target and the US ALB as the second target. Failover routing records for example.com aliased to us.example.com as the first target and eu.example.com as the second target.
C. A subdomain us.example.com with failover routing: the US ALB as primary and the EU ALB as secondary. Another subdomain eu.example.com with failover routing: the EU ALB as primary and the US ALB as secondary. Latency-based routing records for example.com that are aliased to us.example.com and eu.example.com.
D. A subdomain us.example.com with multivalue answer routing: the US ALB first and the EU ALB second. Another subdomain eu.example.com with multivalue answer routing: the EU ALB first and the US ALB second. Failover routing records for example.com that are aliased to us.example.com and eu.example.com.
Suggested answer: C
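
As a sketch of the apex records in option C, each latency-based record for example.com is an alias to one of the regional subdomains. Hosted zone IDs and names are placeholders; the eu-west-1 record mirrors the one shown.

    import boto3

    route53 = boto3.client("route53")

    # Latency-based alias record for the US side of the pair.
    route53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",  # hosted zone for example.com (placeholder)
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "example.com",
                    "Type": "A",
                    "SetIdentifier": "us-east-1",
                    "Region": "us-east-1",
                    "AliasTarget": {
                        "HostedZoneId": "Z1EXAMPLE",   # zone containing us.example.com (placeholder)
                        "DNSName": "us.example.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            }]
        },
    )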

You have an asynchronous processing application that uses an Auto Scaling group and an SQS queue. The Auto Scaling group scales according to the depth of the job queue. The completion velocity of the jobs has gone down and the Auto Scaling group has reached its maximum size, but the inbound job velocity has not increased. What is a possible issue?

A. Some of the new jobs coming in are malformed and unprocessable.
B. The routing tables changed and none of the workers can process events anymore.
C. Someone changed the IAM Role Policy on the instances in the worker group and broke permissions to access the queue.
D. The scaling metric is not functioning correctly.
Suggested answer: A

Explanation:

The IAM role must be fine: if it were broken, no jobs would be processed, because the workers would never be able to retrieve any queue messages. The same reasoning applies to the routing table change. The scaling metric is also working, since the instance count increased when the queue depth increased (more messages entering than exiting). Thus, the only reasonable explanation is that some of the recent messages are malformed and unprocessable.
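
One practical way to keep malformed messages from cycling through the workers indefinitely is a dead-letter queue. The boto3 sketch below is illustrative only; queue names, ARNs, and the receive count are placeholders.

    import json
    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")

    # Messages that fail processing five times are moved to the dead-letter
    # queue, where malformed jobs can be inspected instead of being retried forever.
    sqs.set_queue_attributes(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/jobs",  # placeholder
        Attributes={
            "RedrivePolicy": json.dumps({
                "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:jobs-dlq",  # placeholder
                "maxReceiveCount": "5",
            })
        },
    )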

Your serverless architecture, which uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB, experienced a large increase in traffic to a sustained 400 requests per second, and failure rates increased dramatically. During normal operation, your requests last 500 milliseconds on average. Your DynamoDB table did not exceed 50% of provisioned throughput, and the table's primary keys are designed correctly. What is the most likely issue?

A. Your API Gateway deployment is throttling your requests.
B. Your AWS API Gateway Deployment is bottlenecking on request (de)serialization.
C. You did not request a limit increase on concurrent Lambda function executions.
D. You used Consistent Read requests on DynamoDB and are experiencing semaphore lock.
Suggested answer: C

Explanation:

Amazon API Gateway by default throttles at 500 requests per second steady-state and 1,000 requests per second at spike. Lambda, by default, throttles at 100 concurrent requests for safety. At 500 milliseconds (half a second) per request, 100 concurrent executions can support only 200 requests per second, which is less than the 400 requests per second the system now requires. Make a limit increase request via the AWS Support Console. AWS Lambda default limit: concurrent requests safety throttle per account = 100.

Reference: http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_lambda
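
The arithmetic behind the explanation follows Little's law: required concurrency equals request rate multiplied by average request duration. A trivial sketch:

    # Required Lambda concurrency = request rate (req/s) x average duration (s).
    request_rate = 400          # sustained requests per second
    avg_duration_s = 0.5        # 500 ms per request
    default_concurrency = 100   # default per-account safety throttle cited above

    required_concurrency = request_rate * avg_duration_s        # 200 concurrent executions needed
    max_supported_rate = default_concurrency / avg_duration_s   # only 200 req/s supported by default

    print(required_concurrency, max_supported_rate)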

A DevOps Engineer is working with an application deployed to 12 Amazon EC2 instances across 3 Availability Zones. New instances can be started from an AMI. On a typical day, each EC2 instance has 30% utilization during business hours and 10% utilization after business hours. CPU utilization spikes immediately in the first few minutes of business hours; other increases in CPU utilization are gradual. The Engineer has been asked to reduce costs while retaining the same or higher reliability.

Which solution meets these requirements?

A. Create two Amazon CloudWatch Events rules with schedules before and after business hours begin and end. Create two AWS Lambda functions, one invoked by each rule. The first function should stop nine instances after business hours end; the second function should restart the nine instances before the business day begins.
B. Create an Amazon EC2 Auto Scaling group using the AMI, with a scaling action based on the Auto Scaling group’s CPU Utilization average with a target of 75%. Create a scheduled action for the group to adjust the minimum number of instances to three after business hours end and reset to six before business hours begin.
C. Create two Amazon CloudWatch Events rules with schedules before and after business hours begin and end. Create an AWS CloudFormation stack, which creates an EC2 Auto Scaling group, with a parameter for the number of instances. Invoke the stack from each rule, passing a parameter value of three in the morning, and six in the evening.
D. Create an EC2 Auto Scaling group using the AMI, with a scaling action based on the Auto Scaling group’s CPU Utilization average with a target of 75%. Create a scheduled action to terminate nine instances each evening after the close of business.
Suggested answer: B
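
A sketch of option B with boto3: a target tracking policy keeps average CPU near 75%, and two scheduled actions adjust the group's minimum size around business hours. Group names and cron expressions are placeholders.

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Target tracking policy: scale to keep average CPU utilization at 75%.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",  # placeholder
        PolicyName="cpu-75-target",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
            "TargetValue": 75.0,
        },
    )

    # Raise the minimum to six before business hours begin (times are UTC placeholders).
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="web-asg",
        ScheduledActionName="before-business-hours",
        Recurrence="30 12 * * MON-FRI",
        MinSize=6,
    )

    # Drop the minimum to three after business hours end.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="web-asg",
        ScheduledActionName="after-business-hours",
        Recurrence="0 23 * * MON-FRI",
        MinSize=3,
    )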

After conducting a disaster recovery exercise, an Enterprise Architect discovers that a large team of Database and Storage Administrators needs more than seven hours of manual effort to make a flagship application's database functional in a different AWS Region. The Architect also discovers that the recovered database is often missing as much as two hours of data transactions. Which solution provides improved RTO and RPO in a cross-region failover scenario?

A. Deploy an Amazon RDS Multi-AZ instance backed by a multi-region Amazon EFS. Configure the RDS option group to enable multi-region availability for native automation of cross-region recovery and continuous data replication. Create an Amazon SNS topic subscribed to RDS-impacted events to send emails to the Database Administration team when significant query latency is detected in a single Availability Zone.
B. Use Amazon SNS topics to receive published messages from Amazon RDS availability and backup events. Use AWS Lambda for three separate functions with calls to Amazon RDS to snapshot a database instance, create a cross-region snapshot copy, and restore an instance from a snapshot. Use a scheduled Amazon CloudWatch Events rule at a frequency matching the RPO to trigger the Lambda function to snapshot a database instance. Trigger the Lambda function to create a cross-region snapshot copy when the SNS topic for backup events receives a new message. Configure the Lambda function that restores an instance from a snapshot to be triggered by new messages published to the availability SNS topic.
C. Create a scheduled Amazon CloudWatch Events rule to make a call to Amazon RDS to create a snapshot from a database instance and specify a frequency to match the RPO. Create an AWS Step Functions task to call Amazon RDS to perform a cross-region snapshot copy into the failover region, and configure the state machine to execute the task when the RDS snapshot create state is complete. Create an SNS topic subscribed to RDS availability events, and push these messages to an Amazon SQS queue located in the failover region. Configure an Auto Scaling group of worker nodes to poll the queue for new messages and make a call to Amazon RDS to restore a database from a snapshot after a checksum on the cross-region copied snapshot returns valid.
D. Use Amazon RDS scheduled instance lifecycle events to create a snapshot and specify a frequency to match the RPO. Use Amazon RDS scheduled instance lifecycle event configuration to perform a cross-region snapshot copy into the failover region upon SnapshotCreateComplete events. Configure Amazon CloudWatch to alert when the CloudWatch RDS namespace CPUUtilization metric for the database instance falls to 0% and make a call to Amazon RDS to restore the database snapshot in the failover region.
Suggested answer: B
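
As an illustration of one of the three Lambda functions in option B, the cross-region snapshot copy can be a few lines of boto3. The snapshot identifiers, Regions, and the assumption that the source snapshot ARN arrives in the triggering event payload are all hypothetical.

    import boto3

    def handler(event, context):
        # Client in the failover (destination) Region.
        rds = boto3.client("rds", region_name="eu-west-1")

        # Assumed: the triggering message carries the source snapshot ARN.
        source_snapshot_arn = event["source_snapshot_arn"]

        rds.copy_db_snapshot(
            SourceDBSnapshotIdentifier=source_snapshot_arn,
            TargetDBSnapshotIdentifier="flagship-db-dr-copy",  # placeholder
            SourceRegion="us-east-1",  # boto3 uses this to build the cross-region pre-signed URL
            CopyTags=True,
        )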

A company is using AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline to deploy applications automatically to an Amazon EC2 instance. A DevOps Engineer needs to perform a security assessment scan of the operating system on every application deployment to the environment. How should this be automated?

A. Use Amazon CloudWatch Events to monitor for Auto Scaling event notifications of new instances and configure CloudWatch Events to trigger an Amazon Inspector scan.
B. Use Amazon CloudWatch Events to monitor for AWS CodeDeploy notifications of a successful code deployment and configure CloudWatch Events to trigger an Amazon Inspector scan.
C. Use Amazon CloudWatch Events to monitor for CodePipeline notifications of a successful code deployment and configure CloudWatch Events to trigger an AWS X-Ray scan.
D. Use Amazon Inspector as a CodePipeline task after the successful use of CodeDeploy to deploy the code to the systems.
Suggested answer: B
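
A sketch of option B with boto3: a CloudWatch Events rule matches successful CodeDeploy deployments and invokes a Lambda function that starts an Amazon Inspector assessment run. The ARNs are placeholders, and the event pattern should be verified against the CodeDeploy notifications documented for your Region.

    import json
    import boto3

    events = boto3.client("events", region_name="us-east-1")

    # Rule that fires when a CodeDeploy deployment reaches the SUCCESS state.
    events.put_rule(
        Name="scan-after-deployment",
        EventPattern=json.dumps({
            "source": ["aws.codedeploy"],
            "detail-type": ["CodeDeploy Deployment State-change Notification"],
            "detail": {"state": ["SUCCESS"]},
        }),
    )
    events.put_targets(
        Rule="scan-after-deployment",
        Targets=[{
            "Id": "start-inspector-scan",
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:start-inspector-scan",  # placeholder
        }],
    )

    # Inside the Lambda function: start the assessment run (template ARN is a placeholder).
    def handler(event, context):
        inspector = boto3.client("inspector")
        inspector.start_assessment_run(
            assessmentTemplateArn="arn:aws:inspector:us-east-1:123456789012:target/0-AbCdEfGh/template/0-IjKlMnOp",
            assessmentRunName="post-deployment-scan",
        )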