Amazon DOP-C01 Practice Test - Questions & Answers

A company is implementing an Amazon ECS cluster to run its workload. The company architecture will run multiple ECS services on the cluster, with an Application Load Balancer on the front end using multiple target groups to route traffic. The Application Development team has been struggling to collect the application logs, which must be sent to an Amazon S3 bucket for near-real-time analysis. What must the DevOps Engineer configure in the deployment to meet these requirements?

(Choose three.)

A.
Install the Amazon CloudWatch Logs logging agent on the ECS instances. Change the logging driver in the ECS task definition to 'awslogs'.
B.
Download the Amazon CloudWatch Logs container instance from AWS and configure it as a task. Update the application service definitions to include the logging task.
C.
Use Amazon CloudWatch Events to schedule an AWS Lambda function that runs every 60 seconds, invoking the CloudWatch Logs create-export-task command, then point the output to the logging S3 bucket.
D.
Enable access logging on the Application Load Balancer, then point it directly to the S3 logging bucket.
E.
Enable access logging on the target groups that are used by the ECS services, then point it directly to the S3 logging bucket.
F.
Create an Amazon Kinesis Data Firehose with a destination of the S3 logging bucket, then create an Amazon CloudWatch Logs subscription filter for Kinesis.
Suggested answer: A, D, F
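
For reference, a minimal boto3 sketch of how options D and F might be wired together. Every name and ARN below is a placeholder, and option A's 'awslogs' driver is assumed to already be writing container logs to the log group:

```python
import time
import boto3

logs = boto3.client("logs")
firehose = boto3.client("firehose")
elbv2 = boto3.client("elbv2")

# All names and ARNs below are placeholders.
LOG_GROUP = "/ecs/my-app"  # log group the 'awslogs' driver writes to (option A)
BUCKET = "my-logging-bucket"

# Option D: turn on ALB access logging straight to the S3 bucket.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/app/my-alb/0123456789abcdef"
    ),
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": BUCKET},
    ],
)

# Option F: a Firehose delivery stream that lands records in the same bucket.
firehose.create_delivery_stream(
    DeliveryStreamName="ecs-logs-to-s3",
    S3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
        "BucketARN": f"arn:aws:s3:::{BUCKET}",
    },
)

# The stream must be ACTIVE before a subscription filter can target it.
while firehose.describe_delivery_stream(DeliveryStreamName="ecs-logs-to-s3")[
    "DeliveryStreamDescription"
]["DeliveryStreamStatus"] != "ACTIVE":
    time.sleep(10)

# Stream every log event from the container log group into Firehose.
logs.put_subscription_filter(
    logGroupName=LOG_GROUP,
    filterName="to-firehose",
    filterPattern="",  # empty pattern forwards all events
    destinationArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/ecs-logs-to-s3",
    roleArn="arn:aws:iam::123456789012:role/cwl-to-firehose-role",
)
```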

To override an allow in an IAM policy, you set the Effect element to ______.

A.
Block
B.
Stop
C.
Deny
D.
Allow
Suggested answer: C

Explanation:

By default, access to resources is denied. To allow access to a resource, you must set the Effect element to Allow. To override an allow (for example, to override an allow that is otherwise in force), you set the Effect element to Deny.

Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/AccessPolicyLanguage_ElementDescriptions.html
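
As a concrete illustration, a minimal identity-based policy built in Python: the broad Allow grants all S3 actions, but the explicit Deny on the (hypothetical) confidential bucket always overrides the Allow:

```python
import json

# The first statement allows all S3 actions, but the explicit Deny on the
# hypothetical confidential bucket always overrides the Allow.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
        {
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::confidential-bucket",
                "arn:aws:s3:::confidential-bucket/*",
            ],
        },
    ],
}
print(json.dumps(policy, indent=2))
```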

A DevOps engineer is troubleshooting deployments to a new application that runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones. Instances sometimes come online before they are ready, which is leading to increased error rates among users. The current health check configuration gives instances a 60-second grace period and considers instances healthy after two 200 response codes from /index.php, a page that may respond intermittently during the deployment process. The development team wants instances to come online as soon as possible. Which strategy would address this issue?

A.
Increase the instance grace period from 60 seconds to 180 seconds, and the consecutive health check requirement from 2 to 3.
B.
Increase the instance grace period from 60 seconds to 120 seconds, and change the response code requirement from 200 to 204.
C.
Modify the deployment script to create a /health-check.php file when the deployment begins, then modify the health check path to point to that file.
D.
Modify the deployment script to create a /health-check.php file when all tasks are complete, then modify the health check path to point to that file.
Suggested answer: A
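
A minimal boto3 sketch of option A; the Auto Scaling group name and target group ARN below are hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")
elbv2 = boto3.client("elbv2")

# Give new instances more time before health checks count against them.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-app-asg",  # hypothetical group name
    HealthCheckGracePeriod=180,         # raised from 60 seconds
)

# Require three consecutive 200s from /index.php instead of two.
elbv2.modify_target_group(
    TargetGroupArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "targetgroup/my-app/0123456789abcdef"
    ),
    HealthCheckPath="/index.php",
    HealthyThresholdCount=3,
    Matcher={"HttpCode": "200"},
)
```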

Company policies require that information about IP traffic going between instances in the production Amazon VPC is captured. The capturing mechanism must always be enabled and the Security team must be notified when any changes in configuration occur. What should be done to ensure that these requirements are met?

A.
Using the UserData section of an AWS CloudFormation template, install tcpdump on every provisioned Amazon EC2 instance. The output of the tool is sent to Amazon EFS for aggregation and querying. In addition, schedule an Amazon CloudWatch Events rule that calls an AWS Lambda function to check whether tcpdump is up and running, and sends an email to the security organization when there is an exception.
B.
Create a flow log for the production VPC and assign an Amazon S3 bucket as a destination for delivery. Using Amazon S3 Event Notification, set up an AWS Lambda function that is triggered when a new log file gets delivered. This Lambda function updates an entry in Amazon DynamoDB, which is periodically checked by a scheduled Amazon CloudWatch Events rule to notify security when logs have not arrived.
C.
Create a flow log for the production VPC. Create a new rule using AWS Config that is triggered by configuration changes of resources of type ‘EC2:VPC’. As part of configuring the rule, create an AWS Lambda function that looks up flow logs for a given VPC. If the VPC flow logs are not configured, return a ‘NON_COMPLIANT’ status and notify the security organization.
D.
Configure a new trail using the AWS CloudTrail service. Using the UserData section of an AWS CloudFormation template, install tcpdump on every provisioned Amazon EC2 instance. Connect Amazon Athena to the CloudTrail logs and write an AWS Lambda function that monitors for a flow log disable event. Once the CloudTrail entry has been spotted, alert the security organization.
Suggested answer: C
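
A minimal sketch of the evaluation logic such a Config rule Lambda might use; the notification step (for example, publishing to an SNS topic the Security team subscribes to) is omitted:

```python
import json
import boto3

ec2 = boto3.client("ec2")
config = boto3.client("config")

def handler(event, context):
    """Custom AWS Config rule: flag VPCs that have no flow log configured."""
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]
    vpc_id = item["resourceId"]

    # Look up flow logs attached to this VPC.
    flow_logs = ec2.describe_flow_logs(
        Filters=[{"Name": "resource-id", "Values": [vpc_id]}]
    )["FlowLogs"]

    compliance = "COMPLIANT" if flow_logs else "NON_COMPLIANT"

    # Report the result back to AWS Config.
    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": "AWS::EC2::VPC",
            "ComplianceResourceId": vpc_id,
            "ComplianceType": compliance,
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```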

When a user is detaching an EBS volume from a running instance and attaching it to a new instance, which of the following should be done to avoid file system damage?

A.
Unmount the volume first
B.
Stop all the I/O of the volume before processing
C.
Take a snapshot of the volume before detaching
D.
Force Detach the volume to ensure that all the data stays intact
Suggested answer: A

Explanation:

When a user wants to detach an EBS volume, the user can either terminate the instance or explicitly detach the volume. It is recommended practice to unmount the volume first to avoid any file system damage.
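
A sketch of the safe detach sequence, assuming it runs with root privileges on a Linux instance; the device name and volume ID are placeholders:

```python
import subprocess
import boto3

ec2 = boto3.client("ec2")

DEVICE = "/dev/xvdf"                 # hypothetical device name
VOLUME_ID = "vol-0123456789abcdef0"  # hypothetical volume ID

# Quiesce the file system first: unmount at the OS level.
subprocess.run(["umount", DEVICE], check=True)

# Only then detach; no force flag is needed, and the data stays consistent.
ec2.detach_volume(VolumeId=VOLUME_ID)
ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME_ID])

# The volume can now be attached to the new instance.
```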

To run an application, a DevOps Engineer launches Amazon EC2 instances with public IP addresses in a public subnet. A user data script obtains the application artifacts and installs them on the instances upon launch. A change to the security classification of the application now requires the instances to run with no access to the Internet. While the instances launch successfully and show as healthy, the application does not seem to be installed.

Which of the following should successfully install the application while complying with the new rule?

A.
Launch the instances in a public subnet with Elastic IP addresses attached. Once the application is installed and running, run a script to disassociate the Elastic IP addresses afterwards.
B.
Set up a NAT gateway. Deploy the EC2 instances to a private subnet. Update the private subnet's route table to use the NAT gateway as the default route.
C.
Publish the application artifacts to an Amazon S3 bucket and create a VPC endpoint for S3. Assign an IAM instance profile to the EC2 instances so they can read the application artifacts from the S3 bucket.
D.
Create a security group for the application instances and whitelist only outbound traffic to the artifact repository. Remove the security group rule once the install is complete.
Suggested answer: C
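
A minimal boto3 sketch of the endpoint piece of option C (IDs are placeholders); the instances would also need an instance profile whose role allows s3:GetObject on the artifact bucket:

```python
import boto3

ec2 = boto3.client("ec2")

# A gateway endpoint lets instances in private subnets reach S3 without
# any route to the internet. IDs below are placeholders.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```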

Which of these is not an intrinsic function in AWS CloudFormation?

A.
Fn::Split
B.
Fn::FindInMap
C.
Fn::Select
D.
Fn::GetAZs
Suggested answer: A

Explanation:

This is the complete list of intrinsic functions: Fn::Base64, Fn::And, Fn::Equals, Fn::If, Fn::Not, Fn::Or, Fn::FindInMap, Fn::GetAtt, Fn::GetAZs, Fn::Join, Fn::Select

Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference.html
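
For illustration, a small template that exercises several functions from that list (Fn::FindInMap, Fn::GetAZs, Fn::Select, and Ref), checked with a boto3 validate_template call; the AMI ID is a placeholder:

```python
import boto3

# Placeholder AMI ID; validate_template only checks template syntax.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Mappings:
  RegionAmi:
    us-east-1:
      Ami: ami-0123456789abcdef0
Resources:
  Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !FindInMap [RegionAmi, !Ref 'AWS::Region', Ami]
      AvailabilityZone: !Select [0, !GetAZs '']
"""

boto3.client("cloudformation").validate_template(TemplateBody=TEMPLATE)
```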

What are the default memory limit policies for a Docker container?

A.
Limited memory, limited kernel memory
B.
Unlimited memory, limited kernel memory
C.
Limited memory, unlimited kernel memory
D.
Unlimited memory, unlimited kernel memory
Suggested answer: D

Explanation:

Kernel memory limits are expressed in terms of the overall memory allocated to a container. Consider the following scenarios:

Unlimited memory, unlimited kernel memory: This is the default behavior.

Unlimited memory, limited kernel memory: This is appropriate when the amount of memory needed by all cgroups is greater than the amount of memory that actually exists on the host machine. You can configure the kernel memory to never go over what is available on the host machine, and containers which need more memory need to wait for it.

Limited memory, unlimited kernel memory: The overall memory is limited, but the kernel memory is not.

Limited memory, limited kernel memory: Limiting both user and kernel memory can be useful for debugging memory-related problems. If a container is using an unexpected amount of either type of memory, it will run out of memory without affecting other containers or the host machine. Within this setting, if the kernel memory limit is lower than the user memory limit, running out of kernel memory will cause the container to experience an OOM error. If the kernel memory limit is higher than the user memory limit, the kernel limit will not cause the container to experience an OOM error.

Reference:

https://docs.docker.com/engine/admin/resource_constraints/#--kernel-memory-details
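
A sketch using the Docker SDK for Python (the docker package) that sets both limits explicitly, in contrast to the unlimited defaults described above. Note that kernel memory limits are deprecated on recent Docker engines using cgroup v2, so treat this as illustrative:

```python
import docker

client = docker.from_env()

# Docker's default is scenario D: no limit on either memory type.
# Setting both explicitly for comparison:
container = client.containers.run(
    "nginx:alpine",
    detach=True,
    mem_limit="512m",     # user memory cap
    kernel_memory="64m",  # kernel memory cap (deprecated on cgroup v2 hosts)
)
print(container.id)
```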

A DevOps engineer is designing a multi-Region disaster recovery strategy for an application requiring an RPO of 1 hour and RTO of 4 hours. The application is deployed with an AWS CloudFormation template that creates an Application Load Balancer, Amazon EC2 instances in an Auto Scaling group, and an Amazon RDS Multi-AZ DB instance with 20 GB of allocated storage. The AMI of the application instance does not contain data and has been copied to the destination Region.

Which combination of actions will satisfy the recovery objectives at the LOWEST cost? (Choose two.)

A.
Launch an RDS DB instance in the failover Region and use AWS DMS to configure ongoing replication from the source database.
B.
Schedule an AWS Lambda function to take a snapshot of the database every hour and copy the snapshot to the failover Region.
C.
Upon failover, update the CloudFormation stack in the failover Region to update the Auto Scaling group from one running instance to the desired number of instances. When the stack update is complete, change the DNS records to point to the failover Region’s Elastic Load Balancer.
D.
Upon failover, launch the CloudFormation template in the failover Region with the snapshot ID as an input parameter. When the stack creation is complete, change the DNS records to point to the failover Region’s Elastic Load Balancer.
E.
Utilizing the built-in RDS automated backups, set up an event with Amazon CloudWatch Events that triggers an AWS Lambda function to copy the snapshot to the failover Region.
Suggested answer: D, E
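
The core of option E is a Lambda function that copies the most recent automated snapshot across Regions. A minimal sketch of that copy call, issued from the failover Region with illustrative identifiers:

```python
import boto3

# Issued from the failover (destination) Region; identifiers are illustrative.
rds = boto3.client("rds", region_name="us-west-2")

rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:snapshot:rds:mydb-2023-06-01-00-05"
    ),
    TargetDBSnapshotIdentifier="mydb-dr-copy",
    SourceRegion="us-east-1",  # boto3 presigns the cross-Region copy request
)
```

In practice the function would first call describe_db_snapshots to find the newest automated snapshot, and would be invoked by the scheduled CloudWatch Events rule.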

An application has microservices spread across different AWS accounts and is integrated with an on-premises legacy system for some of its functionality. Because of the segmented architecture and missing logs, every time the application experiences issues it takes too long to gather the logs needed to identify the issues. A DevOps Engineer must fix the log aggregation process and provide a way to centrally analyze the logs. Which is the MOST efficient and cost-effective solution?

A.
Collect system logs and application logs by using the Amazon CloudWatch Logs agent. Use the Amazon S3 API to export on-premises logs, and store the logs in an S3 bucket in a central account. Build an Amazon EMR cluster to reduce the logs and derive the root cause.
B.
Collect system logs and application logs by using the Amazon CloudWatch Logs agent. Use the Amazon S3 API to import on-premises logs. Store all logs in S3 buckets in individual accounts. Use Amazon Macie to write a query to search for the required specific event-related data point.
C.
Collect system logs and application logs using the Amazon CloudWatch Logs agent. Install the CloudWatch Logs agent on the on-premises servers. Transfer all logs from AWS to the on-premises data center. Use an Amazon Elasticsearch Logstash Kibana stack to analyze logs on premises.
D.
Collect system logs and application logs by using the Amazon CloudWatch Logs agent. Install a CloudWatch Logs agent for on-premises resources. Store all logs in an S3 bucket in a central account. Set up an Amazon S3 trigger and an AWS Lambda function to analyze incoming logs and automatically identify anomalies. Use Amazon Athena to run ad hoc queries on the logs in the central account.
Suggested answer: D
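
As a sketch of the Athena piece of the suggested answer, assuming a hypothetical central_logs database and app_access table defined over the central S3 bucket:

```python
import boto3

athena = boto3.client("athena")

# Assumes a table named app_access in a central_logs database, both
# hypothetical, defined over the centralized log bucket.
response = athena.start_query_execution(
    QueryString="""
        SELECT status, count(*) AS hits
        FROM app_access
        WHERE year = '2023' AND month = '06'
        GROUP BY status
        ORDER BY hits DESC
    """,
    QueryExecutionContext={"Database": "central_logs"},
    ResultConfiguration={"OutputLocation": "s3://central-athena-results/"},
)
print(response["QueryExecutionId"])
```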