Amazon DOP-C01 Practice Test - Questions Answers, Page 23
A development team manages website deployments using AWS CodeDeploy blue/green deployments. The application is running on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. When deploying a new revision, the team notices the deployment eventually fails, but it takes a long time to fail. After further inspection, the team discovers the AllowTraffic lifecycle event ran for an hour and eventually failed without providing any other information. The team wants to ensure failure notices are delivered more quickly while maintaining application availability even upon failure. Which combination of actions should be taken to meet these requirements? (Choose two.)

A. Change the deployment configuration to CodeDeployDefault.AllAtOnce to speed up the deployment process by deploying to all of the instances at the same time.
B. Create a CodeDeploy trigger for the deployment failure event and make the deployment fail as soon as a single health check failure is detected.
C. Reduce the HealthCheckIntervalSeconds and UnhealthyThresholdCount values within the target group health checks to decrease the amount of time it takes for the application to be considered unhealthy.
D. Use the appspec.yml file to run a script on the AllowTraffic hook to perform lighter health checks on the application instead of making CodeDeploy wait for the target group health checks to pass.
E. Use the appspec.yml file to run a script on the BeforeAllowTraffic hook to perform health checks on the application and fail the deployment if the health checks performed by the script are not successful.
Suggested answer: C, E

Explanation:

Option C shortens the target group health checks, so an unhealthy replacement environment is detected in minutes rather than an hour. Option E fails the deployment at the BeforeAllowTraffic hook, before any traffic has shifted, so availability is preserved when the new revision is bad. AllAtOnce (A) changes deployment speed, not how quickly a failure is detected.
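A BeforeAllowTraffic hook typically points at a script bundled with the revision. The following is a minimal sketch of such a health-check script in Python; the endpoint URL and port are assumptions for illustration, and a real script would probe whatever the application actually exposes:

```python
import sys
import urllib.request

# Assumption: the new revision serves a health endpoint locally on port 8080.
HEALTH_URL = "http://localhost:8080/health"

def check_health(url, attempts=3):
    """Return True if the endpoint answers 200 within a few quick attempts."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # connection refused / timed out; try again
    return False

def main():
    # A nonzero exit code fails the BeforeAllowTraffic lifecycle event,
    # so CodeDeploy reports the failure before traffic is shifted.
    sys.exit(0 if check_health(HEALTH_URL) else 1)
```

Because the script exits within seconds rather than waiting on target group health checks, the failure notice arrives quickly and the original environment keeps serving traffic.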

A government agency is storing highly confidential files in an encrypted Amazon S3 bucket. The agency has configured federated access and has allowed only a particular on-premises Active Directory user group to access this bucket. The agency wants to maintain audit records and automatically detect and revert any accidental changes administrators make to the IAM policies used for providing this restricted federated access. Which of the following options provide the FASTEST way to meet these requirements?

A. Configure an Amazon CloudWatch Events Event Bus on an AWS CloudTrail API for triggering the AWS Lambda function that detects and reverts the change.
B. Configure an AWS Config rule to detect the configuration change and execute an AWS Lambda function to revert the change.
C. Schedule an AWS Lambda function that will scan the IAM policy attached to the federated access role for detecting and reverting any changes.
D. Restrict administrators in the on-premises Active Directory from changing the IAM policies.
Suggested answer: B
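The revert Lambda behind such a Config rule boils down to comparing the current policy document against a known-good baseline and pushing the baseline back when they differ. A hedged sketch of that comparison logic follows; the bucket name is hypothetical, and the actual put-back (e.g. boto3's `iam.put_role_policy`) is only noted in a comment:

```python
import json

# Assumption: the approved baseline policy document is packaged with the
# function (or fetched from a trusted store); the bucket name is hypothetical.
BASELINE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::confidential-bucket/*",
    }],
}

def policy_drifted(current_policy, baseline=BASELINE_POLICY):
    """Structural comparison of policy documents, insensitive to key order."""
    canonical = lambda doc: json.dumps(doc, sort_keys=True)
    return canonical(current_policy) != canonical(baseline)

def evaluate(current_policy):
    """Evaluation step of the rule's Lambda. In the full function, a
    NON_COMPLIANT result would also revert the drift, e.g. with
    iam.put_role_policy(...) using the baseline document."""
    return "NON_COMPLIANT" if policy_drifted(current_policy) else "COMPLIANT"
```

AWS Config also records each configuration change it evaluates, which is what satisfies the agency's audit-record requirement alongside the automatic revert.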

A highly regulated company has a policy that DevOps Engineers should not log in to their Amazon EC2 instances except in emergencies. If a DevOps Engineer does log in, the Security team must be notified within 15 minutes of the occurrence.

Which solution will meet these requirements?

A. Install the Amazon Inspector agent on each EC2 instance. Subscribe to Amazon CloudWatch Events notifications. Trigger an AWS Lambda function to check if a message is about user logins. If it is, send a notification to the Security team using Amazon SNS.
B. Install the Amazon CloudWatch agent on each EC2 instance. Configure the agent to push all logs to Amazon CloudWatch Logs and set up a CloudWatch metric filter that searches for user logins. If a login is found, send a notification to the Security team using Amazon SNS.
C. Set up AWS CloudTrail with Amazon CloudWatch Logs. Subscribe CloudWatch Logs to Amazon Kinesis. Attach AWS Lambda to Kinesis to parse and determine if a log contains a user login. If it does, send a notification to the Security team using Amazon SNS.
D. Set up a script on each Amazon EC2 instance to push all logs to Amazon S3. Set up an S3 event to trigger an AWS Lambda function, which triggers an Amazon Athena query to run. The Athena query checks for logins and sends the output to the Security team using Amazon SNS.
Suggested answer: B
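The metric filter in option B is essentially a pattern match over the pushed log lines, such as filtering the SSH daemon's log for successful logins. A small Python sketch of the matching logic is below; the log format assumes default OpenSSH messages, and the real filter pattern would be configured on the CloudWatch Logs log group rather than written in code:

```python
import re

# Default OpenSSH success messages look like:
#   "Accepted publickey for ec2-user from 203.0.113.10 port 52413 ssh2"
LOGIN_PATTERN = re.compile(
    r"Accepted (publickey|password|keyboard-interactive) for (\S+) from (\S+)"
)

def detect_login(log_line):
    """Return (user, source_ip) if the line records an SSH login, else None."""
    m = LOGIN_PATTERN.search(log_line)
    return (m.group(2), m.group(3)) if m else None
```

In the real setup, an equivalent filter pattern produces a metric, a CloudWatch alarm fires on a nonzero count, and the alarm's SNS topic notifies the Security team, comfortably within the 15-minute window.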

What is the purpose of a Docker swarm worker node?

A. scheduling services
B. serving swarm mode HTTP API endpoints
C. executing containers
D. maintaining cluster state
Suggested answer: C

Explanation:

Manager nodes handle cluster management tasks: maintaining cluster state, scheduling services, and serving swarm mode HTTP API endpoints. Worker nodes are also instances of Docker Engine, but their sole purpose is to execute containers. Worker nodes don't participate in the Raft distributed state, make scheduling decisions, or serve the swarm mode HTTP API.

Reference: https://docs.docker.com/engine/swarm/how-swarm-mode-works/nodes/#worker-nodes

What is AWS CloudTrail Processing Library?

A. A static library with CloudTrail log files in a movable format machine code that is directly executable
B. An object library with CloudTrail log files in a movable format machine code that is usually not directly executable
C. A Java library that makes it easy to build an application that reads and processes CloudTrail log files
D. A PHP library that renders various generic containers needed for CloudTrail log files
Suggested answer: C

Explanation:

AWS CloudTrail Processing Library is a Java library that makes it easy to build an application that reads and processes CloudTrail log files. You can download CloudTrail Processing Library from GitHub.

Reference: http://aws.amazon.com/cloudtrail/faqs/
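The library itself is Java, but the file format it abstracts is plain gzipped JSON with a top-level `Records` array. A rough Python equivalent of the read-and-process loop is sketched below; the fields shown are standard CloudTrail record keys, and the gzip handling assumes files as delivered to the trail's S3 bucket:

```python
import gzip
import json

def iter_cloudtrail_events(raw_bytes):
    """Yield individual event records from one CloudTrail log file."""
    if raw_bytes[:2] == b"\x1f\x8b":  # delivered files are gzip-compressed
        raw_bytes = gzip.decompress(raw_bytes)
    for record in json.loads(raw_bytes)["Records"]:
        yield record

def summarize(record):
    """Pull the fields most processors care about."""
    return {
        "eventName": record.get("eventName"),
        "eventSource": record.get("eventSource"),
        "user": record.get("userIdentity", {}).get("arn"),
    }
```

The Java library adds the production concerns this sketch omits: polling the SQS queue for delivery notifications, fetching from S3, retries, and threading.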

An online retail company based in the United States plans to expand its operations to Europe and Asia in the next six months. Its product currently runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. All data is stored in an Amazon Aurora database instance. When the product is deployed in multiple regions, the company wants a single product catalog across all regions, but for compliance purposes, its customer information and purchases must be kept in each region. How should the company meet these requirements with the LEAST amount of application changes?

A. Use Amazon Redshift for the product catalog and Amazon DynamoDB tables for the customer information and purchases.
B. Use Amazon DynamoDB global tables for the product catalog and regional tables for the customer information and purchases.
C. Use Aurora with read replicas for the product catalog and additional local Aurora instances in each region for the customer information and purchases.
D. Use Aurora for the product catalog and Amazon DynamoDB global tables for the customer information and purchases.
Suggested answer: C

You need to run a very large batch data processing job one time per day. The source data exists entirely in S3, and the output of the processing job should also be written to S3 when finished. If you need to version control this processing job and all setup and teardown logic for the system, what approach should you use?

A. Model an AWS EMR job in AWS Elastic Beanstalk.
B. Model an AWS EMR job in AWS CloudFormation.
C. Model an AWS EMR job in AWS OpsWorks.
D. Model an AWS EMR job in AWS CLI Composer.
Suggested answer: B

Explanation:

To declaratively model the build and teardown of a cluster, use AWS CloudFormation. OpsWorks and Elastic Beanstalk cannot directly model EMR clusters, the AWS CLI is not declarative, and "AWS CLI Composer" does not exist.

Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-emrcluster.html
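In CloudFormation, the cluster is an `AWS::EMR::Cluster` resource, and because the template is plain text it goes in version control alongside the rest of the setup and teardown logic. A minimal illustrative skeleton is shown below, built as a Python dict purely to keep the examples in one language; the property values are placeholders, not a tested configuration:

```python
import json

# Skeleton of a CloudFormation template declaring an EMR cluster.
# Names, release label, and instance types below are illustrative placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "BatchCluster": {
            "Type": "AWS::EMR::Cluster",
            "Properties": {
                "Name": "daily-batch",
                "ReleaseLabel": "emr-6.15.0",
                "JobFlowRole": "EMR_EC2_DefaultRole",
                "ServiceRole": "EMR_DefaultRole",
                "Instances": {
                    "MasterInstanceGroup": {"InstanceCount": 1, "InstanceType": "m5.xlarge"},
                    "CoreInstanceGroup": {"InstanceCount": 2, "InstanceType": "m5.xlarge"},
                },
            },
        }
    },
}
```

Creating the stack builds the cluster and deleting the stack tears it down, which covers the once-per-day lifecycle without any imperative scripting.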

You are responsible for a large-scale video transcoding system that operates with an Auto Scaling group of video transcoding workers. The Auto Scaling group is configured with a minimum of 750 Amazon EC2 instances and a maximum of 1000 Amazon EC2 instances. You are using Amazon SQS to pass a message containing the URI for a video stored in Amazon S3 to the transcoding workers. An Amazon CloudWatch alarm has notified you that the queue depth is becoming very large. How can you resolve the alarm without the risk of increasing the time to transcode videos? (Choose two.)

A. Create a second queue in Amazon SQS.
B. Adjust the Amazon CloudWatch alarms for a higher queue depth.
C. Create a new Auto Scaling group with a launch configuration that has a larger Amazon EC2 instance type.
D. Add an additional Availability Zone to the Auto Scaling group configuration.
E. Change the Amazon CloudWatch alarm so that it monitors the CPU utilization of the Amazon EC2 instances rather than the Amazon SQS queue depth.
F. Adjust the Auto Scaling group configuration to increase the maximum number of Amazon EC2 instances.
Suggested answer: C, F
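Why option F matters is easiest to see as backlog-per-instance arithmetic: the number of workers needed to clear the visible queue depth within an acceptable time. A sketch with illustrative numbers follows; the per-instance throughput figure is an assumption, not a measured value:

```python
def desired_instances(queue_depth, msgs_per_instance_per_min, target_minutes, minimum, maximum):
    """Instances needed to drain the backlog within target_minutes, clamped to ASG bounds."""
    capacity_per_instance = msgs_per_instance_per_min * target_minutes
    needed = -(-queue_depth // capacity_per_instance)  # ceiling division
    return max(minimum, min(maximum, needed))
```

With the group capped at 1,000 instances, any backlog that needs more than the cap stays clamped, which is why raising the maximum (option F) resolves the alarm; larger instance types (option C) raise the per-instance throughput instead. Either way, transcode time does not increase.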

A company runs an application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones in us-east-1. The application stores data in an Amazon RDS MySQL Multi-AZ DB instance.

A DevOps engineer wants to modify the current solution and create a hot standby of the environment in another region to minimize downtime if a problem occurs in us-east-1. Which combination of steps should the DevOps engineer take to meet these requirements? (Choose three.)

A. Add a health check to the Amazon Route 53 alias record to evaluate the health of the primary region. Use AWS Lambda, configured with an Amazon CloudWatch Events trigger, to promote the Amazon RDS read replica in the disaster recovery region.
B. Create a new Application Load Balancer and Amazon EC2 Auto Scaling group in the disaster recovery region.
C. Extend the current Amazon EC2 Auto Scaling group to the subnets in the disaster recovery region.
D. Enable multi-region failover for the RDS configuration for the database instance.
E. Deploy a read replica of the RDS instance in the disaster recovery region.
F. Create an AWS Lambda function to evaluate the health of the primary region. If it fails, modify the Amazon Route 53 record to point at the disaster recovery region and promote the RDS read replica.
Suggested answer: A, B, E

A developer has written an application that writes data to Amazon DynamoDB. The DynamoDB table has been configured to use conditional writes. During peak usage times, writes are failing due to a ConditionalCheckFailedException error.

How can the developer increase the application’s reliability when multiple clients are attempting to write to the same record?

A. Write the data to an Amazon SNS topic.
B. Increase the amount of write capacity for the table to anticipate short-term spikes or bursts in write operations.
C. Implement a caching solution, such as DynamoDB Accelerator or Amazon ElastiCache.
D. Implement error retries and exponential backoff with jitter.
Suggested answer: D

Explanation:

Reference: https://aws.amazon.com/premiumsupport/knowledge-center/dynamodb-table-throttled/
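The core of option D can be sketched as a "full jitter" retry wrapper: each delay before retry n is drawn uniformly from [0, min(cap, base * 2^n)], so competing clients spread out rather than retrying in lockstep. The wrapper below is generic; catching the specific botocore `ConditionalCheckFailedException` (and re-reading the item before retrying the conditional write) is left to the caller:

```python
import random
import time

def retry_with_jitter(operation, max_attempts=5, base=0.05, cap=2.0, retryable=(Exception,)):
    """Call operation(), retrying on retryable errors with full-jitter backoff."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the last error
            # Full jitter: sleep a random amount up to the capped exponential.
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
```

The jitter is what helps when multiple clients contend for the same record: plain exponential backoff keeps the clients synchronized, so their retries keep colliding.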

Total 557 questions