DOP-C01: AWS DevOps Engineer Professional

Vendor: Amazon

Exam Questions: 557

Exam Number: DOP-C01

Exam Name: AWS DevOps Engineer Professional

Length of test: 180 mins

Exam Format: Multiple-choice and multiple-response questions

Languages Offered: English, Japanese, Korean, and Simplified Chinese

Number of questions in the actual exam: 75 questions

Passing Score: 750 points (approximately 56 out of 75 questions)

This certification is designed for individuals who have two or more years of experience provisioning, operating, and managing distributed application systems on AWS. It validates your technical expertise in automating security controls, governance processes, and compliance validation, as well as defining and deploying monitoring, metrics, and logging systems on AWS.

Related questions

A company uses AWS Elastic Beanstalk, Amazon S3, and Amazon DynamoDB to develop and host a web application. The web application has increased dramatically in popularity, resulting in unpredictable spikes in traffic. A DevOps Engineer has noted that 90% of the requests are duplicate read requests to the DynamoDB table and to the images stored in an S3 bucket. How can the Engineer improve the performance of the website?


After a daily scrum with your development teams, you've agreed that using Blue/Green style deployments would benefit the team. Which technique should you use to deliver this new requirement?


A company uses a series of individual Amazon CloudFormation templates to deploy its multi-Region applications. These templates must be deployed in a specific order. The company is making more changes to the templates than previously expected and wants to deploy new templates more efficiently. Additionally, the data engineering team must be notified of all changes to the templates. What should the company do to accomplish these goals?


A company hosts parts of a Python-based application using AWS Elastic Beanstalk. The Elastic Beanstalk CLI is being used to create and update the environments. The Operations team detected an increase in requests in one of the Elastic Beanstalk environments that caused downtime overnight. The team noted that the metric used for the AWS Auto Scaling trigger is NetworkOut. Based on load testing metrics, the team determined that the application should scale on CPU utilization to improve the resilience of the environments. The team wants to implement this across all environments automatically. Following AWS recommendations, how should this automation be implemented?

A. Using ebextensions, place a command within the container_commands key to perform an API call to modify the scaling metric to CPUUtilization for the Auto Scaling configuration. Use leader_only to execute this command in only the first instance launched within the environment.

B. Using ebextensions, create a custom resource that modifies the AWSEBAutoScalingScaleUpPolicy and AWSEBAutoScalingScaleDownPolicy resources to use CPUUtilization as a metric to scale for the Auto Scaling group.

C. Using ebextensions, configure the option setting MeasureName to CPUUtilization within the aws:autoscaling:trigger namespace.

D. Using ebextensions, place a script within the files key and place it in /opt/elasticbeanstalk/hooks/appdeploy/pre to perform an API call to modify the scaling metric to CPUUtilization for the Auto Scaling configuration. Use leader_only to place this script in only the first instance launched within the environment.
Suggested answer: C
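
The option setting named in the suggested answer can also be applied through the Elastic Beanstalk API. The sketch below is a minimal illustration using boto3; the environment name, Region, and thresholds are placeholders, and in the exam scenario the same four option settings would live under option_settings in an .ebextensions configuration file so they are applied to every environment automatically.

```python
# Minimal sketch: switch the Elastic Beanstalk scaling trigger from the
# default NetworkOut metric to CPUUtilization. Environment name, Region,
# and thresholds are placeholders.
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

eb.update_environment(
    EnvironmentName="my-python-env",  # hypothetical environment name
    OptionSettings=[
        {"Namespace": "aws:autoscaling:trigger",
         "OptionName": "MeasureName", "Value": "CPUUtilization"},
        {"Namespace": "aws:autoscaling:trigger",
         "OptionName": "Unit", "Value": "Percent"},
        {"Namespace": "aws:autoscaling:trigger",
         "OptionName": "UpperThreshold", "Value": "80"},
        {"Namespace": "aws:autoscaling:trigger",
         "OptionName": "LowerThreshold", "Value": "30"},
    ],
)
```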

An application running on a set of Amazon EC2 instances in an Auto Scaling group requires a configuration file to operate. The instances are created and maintained with AWS CloudFormation. A DevOps engineer wants the instances to have the latest configuration file when launched, and wants changes to the configuration file to be reflected on all the instances with a minimal delay when the CloudFormation template is updated. Company policy requires that application configuration files be maintained along with AWS infrastructure configuration files in source control. Which solution will accomplish this?

A. In the CloudFormation template, add an AWS Config rule. Place the configuration file content in the rule’s InputParameters property, and set the Scope property to the EC2 Auto Scaling group. Add an AWS Systems Manager Resource Data Sync resource to the template to poll for updates to the configuration.

B. In the CloudFormation template, add an EC2 launch template resource. Place the configuration file content in the launch template. Configure the cfn-init script to run when the instance is launched, and configure the cfn-hup script to poll for updates to the configuration.

C. In the CloudFormation template, add an EC2 launch template resource. Place the configuration file content in the launch template. Add an AWS Systems Manager Resource Data Sync resource to the template to poll for updates to the configuration.

D. In the CloudFormation template, add CloudFormation init metadata. Place the configuration file content in the metadata. Configure the cfn-init script to run when the instance is launched, and configure the cfn-hup script to poll for updates to the configuration.
Suggested answer: B
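
As a rough illustration of the mechanism behind the suggested answer, the fragment below (embedded as a Python string and only syntax-checked with boto3) shows a launch template resource carrying AWS::CloudFormation::Init metadata: cfn-init renders the configuration file from that metadata at launch, and cfn-hup polls the stack so running instances pick up changes after a template update. The stack, resource, config file path, and AMI are placeholders, not the company's actual template.

```python
# Illustrative sketch only. cfn-init writes the config file from template
# metadata at boot; cfn-hup (given its own cfn-hup.conf and a hook file,
# omitted here for brevity) polls for metadata changes after stack updates.
import boto3

TEMPLATE_FRAGMENT = """
Resources:
  AppLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Metadata:
      AWS::CloudFormation::Init:
        config:
          files:
            /etc/myapp/app.conf:          # hypothetical config file path
              content: |
                log_level = info
              mode: "000644"
    Properties:
      LaunchTemplateData:
        ImageId: ami-0123456789abcdef0    # placeholder AMI
        InstanceType: t3.micro
        UserData:
          Fn::Base64: !Sub |
            #!/bin/bash
            # Render the config file from the metadata above at launch
            /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource AppLaunchTemplate --region ${AWS::Region}
            # Keep polling for metadata changes after stack updates
            /opt/aws/bin/cfn-hup
"""

# boto3 is used here only to syntax-check the fragment, not to deploy it.
boto3.client("cloudformation").validate_template(TemplateBody=TEMPLATE_FRAGMENT)
```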

A company has built a web service that runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The company has deployed the application in us-east-1. Amazon Route 53 provides an external DNS that routes traffic from example.com to the application, created with appropriate health checks.

The company has deployed a second environment for the application in eu-west-1. The company wants traffic to be routed to whichever environment results in the best response time for each user. If there is an outage in one Region, traffic should be directed to the other environment.

Which configuration will achieve these requirements?

A. A subdomain us.example.com with weighted routing: the US ALB with weight 2 and the EU ALB with weight 1. Another subdomain eu.example.com with weighted routing: the EU ALB with weight 2 and the US ALB with weight 1. Geolocation routing records for example.com: North America aliased to us.example.com and Europe aliased to eu.example.com.

B. A subdomain us.example.com with latency-based routing: the US ALB as the first target and the EU ALB as the second target. Another subdomain eu.example.com with latency-based routing: the EU ALB as the first target and the US ALB as the second target. Failover routing records for example.com aliased to us.example.com as the first target and eu.example.com as the second target.

C. A subdomain us.example.com with failover routing: the US ALB as primary and the EU ALB as secondary. Another subdomain eu.example.com with failover routing: the EU ALB as primary and the US ALB as secondary. Latency-based routing records for example.com that are aliased to us.example.com and eu.example.com.

D. A subdomain us.example.com with multivalue answer routing: the US ALB first and the EU ALB second. Another subdomain eu.example.com with multivalue answer routing: the EU ALB first and the US ALB second. Failover routing records for example.com that are aliased to us.example.com and eu.example.com.
Suggested answer: C
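
A minimal boto3 sketch of the record layout in the suggested answer is shown below. The hosted zone ID, ALB DNS names, and the ALBs' canonical hosted zone IDs are placeholders; the question states that appropriate health checks already exist, and EvaluateTargetHealth is relied on here for failover.

```python
# Sketch of option C: failover pairs on the Regional subdomains plus
# latency-based alias records on the apex. All IDs and DNS names are
# placeholders.
import boto3

r53 = boto3.client("route53")
ZONE_ID = "Z0000000000EXAMPLE"  # hypothetical public hosted zone for example.com

US_ALB = {"DNSName": "us-alb-123.us-east-1.elb.amazonaws.com",
          "HostedZoneId": "ZUSALBEXAMPLE",   # the US ALB's canonical zone ID
          "EvaluateTargetHealth": True}
EU_ALB = {"DNSName": "eu-alb-456.eu-west-1.elb.amazonaws.com",
          "HostedZoneId": "ZEUALBEXAMPLE",   # the EU ALB's canonical zone ID
          "EvaluateTargetHealth": True}

def record(name, set_id, alias, **extra):
    """Build one UPSERT change for an alias record."""
    rrset = {"Name": name, "Type": "A", "SetIdentifier": set_id,
             "AliasTarget": alias}
    rrset.update(extra)
    return {"Action": "UPSERT", "ResourceRecordSet": rrset}

changes = [
    # us.example.com: US ALB primary, EU ALB secondary
    record("us.example.com", "us-primary", US_ALB, Failover="PRIMARY"),
    record("us.example.com", "us-secondary", EU_ALB, Failover="SECONDARY"),
    # eu.example.com: EU ALB primary, US ALB secondary
    record("eu.example.com", "eu-primary", EU_ALB, Failover="PRIMARY"),
    record("eu.example.com", "eu-secondary", US_ALB, Failover="SECONDARY"),
    # example.com: latency-based aliases to the two subdomains
    record("example.com", "us-latency",
           {"DNSName": "us.example.com", "HostedZoneId": ZONE_ID,
            "EvaluateTargetHealth": True}, Region="us-east-1"),
    record("example.com", "eu-latency",
           {"DNSName": "eu.example.com", "HostedZoneId": ZONE_ID,
            "EvaluateTargetHealth": True}, Region="eu-west-1"),
]

r53.change_resource_record_sets(HostedZoneId=ZONE_ID,
                                ChangeBatch={"Changes": changes})
```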

An application has microservices spread across different AWS accounts and is integrated with an on-premises legacy system for some of its functionality. Because of the segmented architecture and missing logs, every time the application experiences issues, it is taking too long to gather the logs to identify the issues. A DevOps Engineer must fix the log aggregation process and provide a way to centrally analyze the logs. Which is the MOST efficient and cost-effective solution?

A. Collect system logs and application logs by using the Amazon CloudWatch Logs agent. Use the Amazon S3 API to export on-premises logs, and store the logs in an S3 bucket in a central account. Build an Amazon EMR cluster to reduce the logs and derive the root cause.

B. Collect system logs and application logs by using the Amazon CloudWatch Logs agent. Use the Amazon S3 API to import on-premises logs. Store all logs in S3 buckets in individual accounts. Use Amazon Macie to write a query to search for the required specific event-related data point.

C. Collect system logs and application logs using the Amazon CloudWatch Logs agent. Install the CloudWatch Logs agent on the on-premises servers. Transfer all logs from AWS to the on-premises data center. Use an Amazon Elasticsearch Logstash Kibana stack to analyze logs on premises.

D. Collect system logs and application logs by using the Amazon CloudWatch Logs agent. Install a CloudWatch Logs agent for on-premises resources. Store all logs in an S3 bucket in a central account. Set up an Amazon S3 trigger and an AWS Lambda function to analyze incoming logs and automatically identify anomalies. Use Amazon Athena to run ad hoc queries on the logs in the central account.
Suggested answer: D
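
The query half of the suggested answer can be illustrated with a short boto3 call; the bucket, database, table, and SQL below are made-up placeholders and assume an Athena (or Glue) table has already been defined over the centralized log bucket.

```python
# Sketch: run an ad hoc Athena query against logs centralized in one S3 bucket
# in the central account. Bucket, database, table, and query are placeholders.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="""
        SELECT log_time, source_account, message
        FROM application_logs            -- hypothetical table over the bucket
        WHERE message LIKE '%ERROR%'
        ORDER BY log_time DESC
        LIMIT 100
    """,
    QueryExecutionContext={"Database": "central_logs"},  # hypothetical database
    ResultConfiguration={
        # Athena writes query results back to S3; a prefix in the central
        # logging bucket is used here as a placeholder.
        "OutputLocation": "s3://central-logging-bucket/athena-results/"
    },
)
print("Started query:", response["QueryExecutionId"])
```

The S3-triggered Lambda function in the same option would receive the standard S3 event notification for each newly delivered log object and could flag anomalies before these ad hoc queries are ever needed.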

An Application team is refactoring one of its internal tools to run in AWS instead of on-premises hardware. All of the code is currently written in Python and is standalone. There is also no external state store or relational database to be queried.

Which deployment pipeline incurs the LEAST amount of changes between development and production?


A company is implementing an Amazon ECS cluster to run its workload. The company architecture will run multiple ECS services on the cluster, with an Application Load Balancer on the front end using multiple target groups to route traffic. The Application Development team has been struggling to collect logs that must be sent to an Amazon S3 bucket for near-real-time analysis. What must the DevOps Engineer configure in the deployment to meet these requirements?

(Choose three.)

A. Install the Amazon CloudWatch Logs logging agent on the ECS instances. Change the logging driver in the ECS task definition to 'awslogs'.

B. Download the Amazon CloudWatch Logs container instance from AWS and configure it as a task. Update the application service definitions to include the logging task.

C. Use Amazon CloudWatch Events to schedule an AWS Lambda function that will run every 60 seconds running the create-export-task CloudWatch Logs command, then point the output to the logging S3 bucket.

D. Enable access logging on the Application Load Balancer, then point it directly to the S3 logging bucket.

E. Enable access logging on the target groups that are used by the ECS services, then point it directly to the S3 logging bucket.

F. Create an Amazon Kinesis Data Firehose with a destination of the S3 logging bucket, then create an Amazon CloudWatch Logs subscription filter for Kinesis.
Suggested answer: A, D, F
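
The three selected answers can be wired together roughly as follows; all ARNs, names, and the bucket are placeholders, and the Firehose delivery stream and the IAM role that lets CloudWatch Logs write to it are assumed to already exist with the S3 bucket as the destination.

```python
# Sketch of the three selected answers: 'awslogs' driver in the task
# definition (A), ALB access logging to S3 (D), and a CloudWatch Logs
# subscription filter streaming to Kinesis Data Firehose (F).
import boto3

LOG_BUCKET = "example-ecs-logs-bucket"  # placeholder central logging bucket

# A: 'awslogs' log driver fragment for the ECS task definition
container_definition_fragment = {
    "name": "web",
    "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": "/ecs/web-service",
            "awslogs-region": "us-east-1",
            "awslogs-stream-prefix": "web",
        },
    },
}

# D: enable ALB access logging straight to the S3 bucket
elbv2 = boto3.client("elbv2")
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=("arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                     "loadbalancer/app/example-alb/0123456789abcdef"),
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": LOG_BUCKET},
    ],
)

# F: stream the container log group to a Firehose delivery stream whose
# destination is the same S3 bucket
logs = boto3.client("logs")
logs.put_subscription_filter(
    logGroupName="/ecs/web-service",
    filterName="to-firehose",
    filterPattern="",  # empty pattern forwards every log event
    destinationArn=("arn:aws:firehose:us-east-1:111122223333:"
                    "deliverystream/ecs-logs-to-s3"),
    roleArn="arn:aws:iam::111122223333:role/cwlogs-to-firehose",  # assumed role
)
```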

A retail company wants to use AWS Elastic Beanstalk to host its online sales website running on Java. Since this will be the production website, the CTO has the following requirements for the deployment strategy:

- Zero downtime. While the deployment is ongoing, the current Amazon EC2 instances in service should remain in service. No deployment or any other action should be performed on the EC2 instances because they serve production traffic.
- A new fleet of instances should be provisioned for deploying the new application version. Once the new application version is deployed successfully on the new fleet of instances, the new instances should be placed in service and the old ones should be removed.
- The rollback should be as easy as possible. If the new fleet of instances fails to deploy the new application version, it should be terminated and the current instances should continue serving traffic as normal.
- The resources within the environment (EC2 Auto Scaling group, Elastic Load Balancing, Elastic Beanstalk DNS CNAME) should remain the same, and no DNS change should be made.

Which deployment strategy will meet the requirements?
