
Amazon DOP-C01 Practice Test - Questions Answers, Page 23


Question 221



A development team manages website deployments using AWS CodeDeploy blue/green deployments. The application is running on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. When deploying a new revision, the team notices the deployment eventually fails, but it takes a long time to fail. After further inspection, the team discovers the AllowTraffic lifecycle event ran for an hour and eventually failed without providing any other information. The team wants to ensure failure notices are delivered more quickly while maintaining application availability even upon failure. Which combination of actions should be taken to meet these requirements? (Choose two.)

A. Change the deployment configuration to CodeDeployDefault.AllAtOnce to speed up the deployment process by deploying to all of the instances at the same time.
B. Create a CodeDeploy trigger for the deployment failure event and make the deployment fail as soon as a single health check failure is detected.
C. Reduce the HealthCheckIntervalSeconds and UnhealthyThresholdCount values within the target group health checks to decrease the amount of time it takes for the application to be considered unhealthy.
D. Use the appspec.yml file to run a script on the AllowTraffic hook to perform lighter health checks on the application instead of making CodeDeploy wait for the target group health checks to pass.
E. Use the appspec.yml file to run a script on the BeforeAllowTraffic hook to perform health checks on the application and fail the deployment if the health checks performed by the script are not successful.
Suggested answer: A, C
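Option E describes running your own validation script in the BeforeAllowTraffic hook so a broken revision fails fast instead of waiting on target group health checks. A minimal sketch of such a script follows; the `/health` endpoint, port, and retry counts are assumptions for illustration, not details from the question.

```python
import time
import urllib.error
import urllib.request

HEALTH_URL = "http://localhost:8080/health"  # hypothetical endpoint (an assumption)

def wait_until_healthy(fetch, attempts=5, delay=2.0):
    """Poll fetch() until it reports healthy, for a bounded number of attempts.

    Bounding the attempts is the point: the hook fails within seconds
    instead of letting the lifecycle event hang for an hour.
    """
    for attempt in range(attempts):
        if fetch():
            return True
        if attempt < attempts - 1:
            time.sleep(delay)
    return False

def http_health_check():
    """True if the local endpoint answers HTTP 200 within a short timeout."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=3) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# In the actual hook script, raise SystemExit(1) when the check fails:
# CodeDeploy treats a non-zero exit status as a failed lifecycle event.
```

Because the hook runs on the replacement instances before traffic is shifted, the original environment keeps serving requests when the script fails, which preserves availability.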

Question 222


A government agency is storing highly confidential files in an encrypted Amazon S3 bucket. The agency has configured federated access and has allowed only a particular on-premises Active Directory user group to access this bucket. The agency wants to maintain audit records and automatically detect and revert any accidental changes administrators make to the IAM policies used for providing this restricted federated access. Which of the following options provide the FASTEST way to meet these requirements?

A. Configure an Amazon CloudWatch Events Event Bus on an AWS CloudTrail API for triggering the AWS Lambda function that detects and reverts the change.
B. Configure an AWS Config rule to detect the configuration change and execute an AWS Lambda function to revert the change.
C. Schedule an AWS Lambda function that will scan the IAM policy attached to the federated access role for detecting and reverting any changes.
D. Restrict administrators in the on-premises Active Directory from changing the IAM policies.
Suggested answer: B
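The remediation half of option B is a Lambda function that compares the current policy to an approved baseline and restores it on drift. The sketch below keeps only that decision logic so it is self-contained; the baseline document, bucket ARN, and event shape are illustrative assumptions, and a real function would additionally call IAM (for example via boto3's `put_role_policy`) to apply the revert.

```python
import json

# Hypothetical approved baseline policy for the federated-access role
# (illustrative only; not from the original question).
BASELINE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::confidential-bucket/*",
    }],
}

def policy_drifted(current, baseline=BASELINE_POLICY):
    """Compare policy documents structurally, ignoring dict key order."""
    return json.dumps(current, sort_keys=True) != json.dumps(baseline, sort_keys=True)

def handler(event, context=None):
    """Skeleton of the remediation Lambda an AWS Config rule would invoke.

    A real handler would read the noncompliant resource from the Config
    event and write BASELINE_POLICY back to the role; here we return the
    decision so the logic stays runnable without AWS credentials.
    """
    current = event.get("current_policy", {})
    if policy_drifted(current):
        return {"action": "revert", "policy": BASELINE_POLICY}
    return {"action": "none"}
```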

Question 223


A highly regulated company has a policy that DevOps Engineers should not log in to their Amazon EC2 instances except in emergencies. If a DevOps Engineer does log in, the Security team must be notified within 15 minutes of the occurrence.

Which solution will meet these requirements?

A. Install the Amazon Inspector agent on each EC2 instance. Subscribe to Amazon CloudWatch Events notifications. Trigger an AWS Lambda function to check if a message is about user logins. If it is, send a notification to the Security team using Amazon SNS.
B. Install the Amazon CloudWatch agent on each EC2 instance. Configure the agent to push all logs to Amazon CloudWatch Logs and set up a CloudWatch metric filter that searches for user logins. If a login is found, send a notification to the Security team using Amazon SNS.
C. Set up AWS CloudTrail with Amazon CloudWatch Logs. Subscribe CloudWatch Logs to Amazon Kinesis. Attach AWS Lambda to Kinesis to parse and determine if a log contains a user login. If it does, send a notification to the Security team using Amazon SNS.
D. Set up a script on each Amazon EC2 instance to push all logs to Amazon S3. Set up an S3 event to trigger an AWS Lambda function, which triggers an Amazon Athena query to run. The Athena query checks for logins and sends the output to the Security team using Amazon SNS.
Suggested answer: B
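Option B works because a CloudWatch Logs metric filter tests each shipped log event against a pattern, and an alarm on the resulting metric triggers the SNS notification. The sketch below reproduces that matching locally for typical sshd lines; the term "Accepted" and the sample lines are assumptions about the log format, not details from the question.

```python
# A CloudWatch Logs metric filter matches a pattern against each log event.
# "Accepted" is what sshd logs on a successful login (an assumed format).
LOGIN_PATTERN = "Accepted"

def count_logins(log_lines, pattern=LOGIN_PATTERN):
    """Count events the metric filter would match (one login per line)."""
    return sum(1 for line in log_lines if pattern in line)

sample = [
    "Jan 10 12:00:01 ip-10-0-0-5 sshd[1001]: Accepted publickey for ec2-user from 10.0.0.9",
    "Jan 10 12:00:05 ip-10-0-0-5 sshd[1002]: Failed password for invalid user admin",
    "Jan 10 12:03:11 ip-10-0-0-5 sshd[1003]: Accepted publickey for devops from 10.0.0.7",
]
# Two matched events would breach a >= 1 alarm threshold on the metric,
# and the alarm's SNS action notifies the Security team well inside 15 minutes.
```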

Question 224


What is the purpose of a Docker swarm worker node?

A. scheduling services
B. serving swarm mode HTTP API endpoints
C. executing containers
D. maintaining cluster state
Suggested answer: C

Explanation:

Manager nodes handle cluster management tasks: maintaining cluster state, scheduling services, and serving swarm mode HTTP API endpoints. Worker nodes are also instances of Docker Engine whose sole purpose is to execute containers. Worker nodes don't participate in the Raft distributed state, make scheduling decisions, or serve the swarm mode HTTP API.

Reference: https://docs.docker.com/engine/swarm/how-swarm-mode-works/nodes/#worker-nodes


Question 225


What is AWS CloudTrail Processing Library?

A. A static library with CloudTrail log files in a movable format machine code that is directly executable
B. An object library with CloudTrail log files in a movable format machine code that is usually not directly executable
C. A Java library that makes it easy to build an application that reads and processes CloudTrail log files
D. A PHP library that renders various generic containers needed for CloudTrail log files
Suggested answer: C

Explanation:

AWS CloudTrail Processing Library is a Java library that makes it easy to build an application that reads and processes CloudTrail log files. You can download CloudTrail Processing Library from GitHub.

Reference: http://aws.amazon.com/cloudtrail/faqs/


Question 226


An online retail company based in the United States plans to expand its operations to Europe and Asia in the next six months. Its product currently runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. All data is stored in an Amazon Aurora database instance. When the product is deployed in multiple regions, the company wants a single product catalog across all regions, but for compliance purposes, its customer information and purchases must be kept in each region. How should the company meet these requirements with the LEAST amount of application changes?

A. Use Amazon Redshift for the product catalog and Amazon DynamoDB tables for the customer information and purchases.
B. Use Amazon DynamoDB global tables for the product catalog and regional tables for the customer information and purchases.
C. Use Aurora with read replicas for the product catalog and additional local Aurora instances in each region for the customer information and purchases.
D. Use Aurora for the product catalog and Amazon DynamoDB global tables for the customer information and purchases.
Suggested answer: C

Question 227


You need to run a very large batch data processing job one time per day. The source data exists entirely in S3, and the output of the processing job should also be written to S3 when finished. If you need to version control this processing job and all setup and teardown logic for the system, what approach should you use?

A. Model an AWS EMR job in AWS Elastic Beanstalk.
B. Model an AWS EMR job in AWS CloudFormation.
C. Model an AWS EMR job in AWS OpsWorks.
D. Model an AWS EMR job in AWS CLI Composer.
Suggested answer: B

Explanation:

To declaratively model build and destroy of a cluster, you need to use AWS CloudFormation. OpsWorks and Elastic Beanstalk cannot directly model EMR Clusters. The CLI is not declarative, and CLI Composer does not exist.

Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-emrcluster.html
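To make the declarative-modeling point concrete, here is a sketch of what a version-controlled CloudFormation template with an `AWS::EMR::Cluster` resource might contain, built as a Python dict so it can be emitted as JSON. The cluster name, release label, roles, instance sizes, and log bucket are illustrative assumptions.

```python
import json

# A sketch of a CloudFormation template containing an AWS::EMR::Cluster,
# kept in version control alongside the job's setup/teardown logic.
# Names, roles, and instance sizes below are illustrative assumptions.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DailyBatchCluster": {
            "Type": "AWS::EMR::Cluster",
            "Properties": {
                "Name": "daily-batch",
                "ReleaseLabel": "emr-6.10.0",
                "JobFlowRole": "EMR_EC2_DefaultRole",
                "ServiceRole": "EMR_DefaultRole",
                "Instances": {
                    "MasterInstanceGroup": {"InstanceCount": 1, "InstanceType": "m5.xlarge"},
                    "CoreInstanceGroup": {"InstanceCount": 4, "InstanceType": "m5.xlarge"},
                },
                "LogUri": "s3://example-bucket/emr-logs/",
            },
        }
    },
}

print(json.dumps(template, indent=2))
```

Creating the stack spins the cluster up and deleting the stack tears it down, so the entire daily job lifecycle lives in one reviewed, versioned artifact.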


Question 228


You are responsible for a large-scale video transcoding system that operates with an Auto Scaling group of video transcoding workers. The Auto Scaling group is configured with a minimum of 750 Amazon EC2 instances and a maximum of 1000 Amazon EC2 instances. You are using Amazon SQS to pass a message containing the URI for a video stored in Amazon S3 to the transcoding workers. An Amazon CloudWatch alarm has notified you that the queue depth is becoming very large. How can you resolve the alarm without the risk of increasing the time to transcode videos? (Choose two.)

A. Create a second queue in Amazon SQS.
B. Adjust the Amazon CloudWatch alarms for a higher queue depth.
C. Create a new Auto Scaling group with a launch configuration that has a larger Amazon EC2 instance type.
D. Add an additional Availability Zone to the Auto Scaling group configuration.
E. Change the Amazon CloudWatch alarm so that it monitors the CPU utilization of the Amazon EC2 instances rather than the Amazon SQS queue depth.
F. Adjust the Auto Scaling group configuration to increase the maximum number of Amazon EC2 instances.
Suggested answer: C, F
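Option F resolves the alarm because scale-out is clamped by the group maximum: once the backlog calls for more instances than the cap allows, the queue depth can only grow. A small backlog-per-instance calculation makes this visible; the per-worker throughput and queue depths below are illustrative assumptions, not numbers from the question.

```python
import math

def desired_instances(queue_depth, msgs_per_instance, max_size):
    """Instances needed to hold the per-instance backlog at target,
    clamped to the Auto Scaling group maximum."""
    needed = math.ceil(queue_depth / msgs_per_instance)
    return min(needed, max_size)

# Assume each worker comfortably drains 100 messages between scaling
# evaluations. With the current maximum of 1000, the group is capped
# below what the backlog calls for; raising the maximum lets scaling
# catch up without changing the transcode time per video.
capped = desired_instances(queue_depth=150_000, msgs_per_instance=100, max_size=1000)
raised = desired_instances(queue_depth=150_000, msgs_per_instance=100, max_size=2000)
print(capped, raised)  # the cap binds at 1000; the raised limit allows 1500
```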

Question 229


A company runs an application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones in us-east-1. The application stores data in an Amazon RDS MySQL Multi-AZ DB instance.

A DevOps engineer wants to modify the current solution and create a hot standby of the environment in another region to minimize downtime if a problem occurs in us-east-1. Which combination of steps should the DevOps engineer take to meet these requirements? (Choose three.)

A. Add a health check to the Amazon Route 53 alias record to evaluate the health of the primary region. Use AWS Lambda, configured with an Amazon CloudWatch Events trigger, to promote the Amazon RDS read replica in the disaster recovery region.
B. Create a new Application Load Balancer and Amazon EC2 Auto Scaling group in the disaster recovery region.
C. Extend the current Amazon EC2 Auto Scaling group to the subnets in the disaster recovery region.
D. Enable multi-region failover for the RDS configuration for the database instance.
E. Deploy a read replica of the RDS instance in the disaster recovery region.
F. Create an AWS Lambda function to evaluate the health of the primary region. If it fails, modify the Amazon Route 53 record to point at the disaster recovery region and promote the RDS read replica.
Suggested answer: A, B, E

Question 230


A developer has written an application that writes data to Amazon DynamoDB. The DynamoDB table has been configured to use conditional writes. During peak usage times, writes are failing due to a ConditionalCheckFailedException error.

How can the developer increase the application’s reliability when multiple clients are attempting to write to the same record?

A. Write the data to an Amazon SNS topic.
B. Increase the amount of write capacity for the table to anticipate short-term spikes or bursts in write operations.
C. Implement a caching solution, such as DynamoDB Accelerator or Amazon ElastiCache.
D. Implement error retries and exponential backoff with jitter.
Suggested answer: D

Explanation:

Reference: https://aws.amazon.com/premiumsupport/knowledge-center/dynamodb-table-throttled/
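A ConditionalCheckFailedException signals contention between writers rather than throttling, so the application has to retry on its own. A minimal sketch of the retry-with-full-jitter pattern from option D follows; the parameter defaults are illustrative assumptions, and the operation, retry predicate, and sleep function are injected so the logic stays generic.

```python
import random
import time

def retry_with_backoff(op, max_attempts=5, base=0.1, cap=5.0,
                       is_retryable=lambda exc: True, sleep=time.sleep):
    """Retry op() with full-jitter exponential backoff.

    Each failed attempt waits a random amount between 0 and
    min(cap, base * 2**attempt), so competing writers spread out
    instead of colliding again in lockstep. The final attempt
    re-raises the exception unchanged.
    """
    for attempt in range(max_attempts):
        try:
            return op()
        except Exception as exc:
            if attempt == max_attempts - 1 or not is_retryable(exc):
                raise
            delay = random.uniform(0, min(cap, base * (2 ** attempt)))
            sleep(delay)
```

In the conditional-write case, `op` would re-read the item and reissue the conditional `PutItem`/`UpdateItem` against the fresh value, and `is_retryable` would match only the conditional-check error.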
