Amazon DOP-C01 Practice Test - Questions Answers, Page 16

A DevOps Engineer must ensure all IAM entity configurations across multiple AWS accounts in AWS Organizations are compliant with corporate IAM policies. Which combination of steps will accomplish this? (Choose two.)

A. Enable AWS Trusted Advisor in Organizations for all accounts to report on noncompliant IAM entities.
B. Configure an AWS Config aggregator in the Organizations master account for all accounts.
C. Deploy AWS Config rules to the master account in Organizations that match corporate IAM policies.
D. Apply an SCP in Organizations to ensure compliance of IAM entities.
E. Deploy AWS Config rules to all accounts in Organizations that match the corporate IAM policies.
Suggested answer: D, E

Explanation:

Reference: https://aws.amazon.com/blogs/mt/manage-custom-aws-config-rules-with-remediations-using-conformance-packs/ https://aws.amazon.com/blogs/security/announcing-aws-organizations-centrally-manage-multiple-aws-accounts/
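For illustration, deploying a Config rule to all accounts (answer E) can be done in one call from the Organizations management account using organization Config rules. A minimal boto3 sketch, assuming AWS Config is already recording in the member accounts; the rule name and the choice of managed rule are illustrative, not from the question:

```python
import boto3

# Run from the Organizations management (master) account.
config = boto3.client("config")

# Deploys one managed rule to every account in the organization. The rule
# name and RuleIdentifier are illustrative; any managed rule matching the
# corporate IAM policy works.
config.put_organization_config_rule(
    OrganizationConfigRuleName="corporate-iam-user-policy-check",
    OrganizationManagedRuleMetadata={
        "Description": "Flags IAM users with directly attached policies",
        "RuleIdentifier": "IAM_USER_NO_POLICIES_CHECK",
    },
)
```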

A DevOps Engineer is working on a project that is hosted on Amazon Linux and has failed a security review. The DevOps Manager has been asked to review the company buildspec.yaml file for an AWS CodeBuild project and provide recommendations. The buildspec.yaml file is configured as follows:

[The buildspec.yaml contents were shown in an image that is not reproduced here; per the options below, it includes AWS credentials and a database password as plaintext environment variables and runs scp and ssh commands directly against an instance.]

What changes should be recommended to comply with AWS security best practices? (Choose three.)

A. Add a post-build command to remove the temporary files from the container before termination to ensure they cannot be seen by other CodeBuild users.
B. Update the CodeBuild project role with the necessary permissions and then remove the AWS credentials from the environment variables.
C. Store the DB_PASSWORD as a SecureString value in AWS Systems Manager Parameter Store and then remove the DB_PASSWORD from the environment variables.
D. Move the environment variables to the 'db-deploy-bucket' Amazon S3 bucket, add a prebuild stage to download, then export the variables.
E. Use AWS Systems Manager Run Command instead of scp and ssh commands directly to the instance.
F. Scramble the environment variables using XOR followed by Base64, add a section to install, and then run XOR and Base64 in the build phase.
Suggested answer: B, C, E
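
Option C moves the secret out of the buildspec entirely. A minimal boto3 sketch of fetching it at build time (the parameter name is hypothetical); note that CodeBuild can also inject SecureString parameters natively through the buildspec's env/parameter-store mapping:

```python
import boto3

# Fetch the database password at build time instead of hard-coding it in
# buildspec.yaml. The parameter name is hypothetical.
ssm = boto3.client("ssm")
db_password = ssm.get_parameter(
    Name="/myapp/prod/DB_PASSWORD",
    WithDecryption=True,  # required to decrypt SecureString values
)["Parameter"]["Value"]
```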

A user is defining a policy for an IAM user. Which of the following is a valid version for the policy?

A. "Version":"2014-01-01"
B. "Version":"2011-10-17"
C. "Version":"2013-10-17"
D. "Version":"2012-10-17"
Suggested answer: D

Explanation:

When defining an IAM Policy, the version element specifies the policy language version. Only the following values are allowed:

2012-10-17. This is the current version of the policy language and should be used for all new policies.
2008-10-17. This was an earlier version of the policy language. It may appear on existing policies; do not use it for new policies or for existing policies that are being updated.
If a Version element is not included, the value defaults to 2008-10-17.

Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/AccessPolicyLanguage_ElementDescriptions.html
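
A minimal sketch of a policy document using the current version, created with boto3; the policy name and statement are illustrative:

```python
import json

import boto3

policy_document = {
    "Version": "2012-10-17",  # the current (and only recommended) version
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::example-bucket",
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="ExampleListBucketPolicy",
    PolicyDocument=json.dumps(policy_document),
)
```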

A company is using AWS CodeDeploy to automate software deployment. The deployment must meet these requirements:

A number of instances must be available to serve traffic during the deployment. Traffic must be balanced across those instances, and the instances must automatically heal in the event of failure.
A new fleet of instances must be launched automatically for deploying a new revision, with no manual provisioning.
Traffic must be rerouted to the new environment, shifting to half of the new instances at a time. The deployment should succeed if traffic is rerouted to at least half of the instances; otherwise, it should fail.
Before traffic is routed to the new fleet of instances, the temporary files generated during the deployment process must be deleted.

At the end of a successful deployment, the original instances in the deployment group must be deleted immediately to reduce costs.

How can a DevOps Engineer meet these requirements?

A. Use an Application Load Balancer and an in-place deployment. Associate the Auto Scaling group with the deployment group. Use the Automatically copy Auto Scaling group option, and use CodeDeployDefault.OneAtATime as the deployment configuration. Instruct AWS CodeDeploy to terminate the original instances in the deployment group, and use the AllowTraffic hook within appspec.yml to delete the temporary files.
B. Use an Application Load Balancer and a blue/green deployment. Associate the Auto Scaling group and the Application Load Balancer target group with the deployment group. Use the Automatically copy Auto Scaling group option, create a custom deployment configuration with minimum healthy hosts defined as 50%, and assign the configuration to the deployment group. Instruct AWS CodeDeploy to terminate the original instances in the deployment group, and use the BeforeBlockTraffic hook within appspec.yml to delete the temporary files.
C. Use an Application Load Balancer and a blue/green deployment. Associate the Auto Scaling group and the Application Load Balancer target group with the deployment group. Use the Automatically copy Auto Scaling group option, and use CodeDeployDefault.HalfAtATime as the deployment configuration. Instruct AWS CodeDeploy to terminate the original instances in the deployment group, and use the BeforeAllowTraffic hook within appspec.yml to delete the temporary files.
D. Use an Application Load Balancer and an in-place deployment. Associate the Auto Scaling group and Application Load Balancer target group with the deployment group. Use the Automatically copy Auto Scaling group option, and use CodeDeployDefault.AllAtOnce as the deployment configuration. Instruct AWS CodeDeploy to terminate the original instances in the deployment group, and use the BlockTraffic hook within appspec.yml to delete the temporary files.
Suggested answer: C

Explanation:

Reference:

https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_BlueGreenDeploymentConfiguration.html
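
The 50% threshold in the correct answer maps to the built-in CodeDeployDefault.HalfAtATime configuration; the same behavior can also be defined as a custom deployment configuration, as option B describes. A boto3 sketch, with an illustrative configuration name:

```python
import boto3

codedeploy = boto3.client("codedeploy")

# Equivalent to CodeDeployDefault.HalfAtATime: the deployment succeeds only
# if at least 50% of the fleet remains healthy while traffic is rerouted.
codedeploy.create_deployment_config(
    deploymentConfigName="Custom.HalfHealthyFleet",
    minimumHealthyHosts={
        "type": "FLEET_PERCENT",
        "value": 50,
    },
)
```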

Your social media marketing application has a component written in Ruby running on AWS Elastic Beanstalk. This application component posts messages to social media sites in support of various marketing campaigns. Your management now requires you to record replies to these social media messages to analyze the effectiveness of the marketing campaign in comparison to past and future efforts. You have already developed a new application component to interface with the social media site APIs in order to read the replies.

Which process should you use to record the social media replies in a durable data store that can be accessed at any time for analysis of historical data?

A. Deploy the new application component in an Auto Scaling group of Amazon Elastic Compute Cloud (EC2) instances, read the data from the social media sites, store it with Amazon Elastic Block Store, and use AWS Data Pipeline to publish it to Amazon Kinesis for analytics.
B. Deploy the new application component as an Elastic Beanstalk application, read the data from the social media sites, store it in Amazon DynamoDB, and use Apache Hive with Amazon Elastic MapReduce for analytics.
C. Deploy the new application component in an Auto Scaling group of Amazon EC2 instances, read the data from the social media sites, store it in Amazon Glacier, and use AWS Data Pipeline to publish it to Amazon Redshift for analytics.
D. Deploy the new application component as an Amazon Elastic Beanstalk application, read the data from the social media site, store it with Amazon Elastic Block Store, and use Amazon Kinesis to stream the data to Amazon CloudWatch for analytics.
Suggested answer: B
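
The core of answer B is landing each reply in DynamoDB, a durable store that can be queried at any time, directly or through Hive on EMR. A boto3 sketch with hypothetical table and attribute names:

```python
import boto3

# Table name, key schema, and attributes are hypothetical.
table = boto3.resource("dynamodb").Table("SocialMediaReplies")

table.put_item(
    Item={
        "campaign_id": "spring-launch",    # partition key
        "reply_id": "twitter-1234567890",  # sort key
        "posted_at": "2015-06-01T12:34:56Z",
        "text": "Love the new product!",
    }
)
```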

A company has an application that is using a MySQL-compatible Amazon Aurora Multi-AZ DB cluster as the database. A cross-Region read replica has been created for disaster recovery purposes. A DevOps engineer wants to automate the promotion of the replica so it becomes the primary database instance in the event of a failure.

Which solution will accomplish this?

A. Configure a latency-based Amazon Route 53 CNAME with health checks so it points to both the primary and replica endpoints. Subscribe an Amazon SNS topic to Amazon RDS failure notifications from AWS CloudTrail and use that topic to trigger an AWS Lambda function that will promote the replica instance as the master.
B. Create an Aurora custom endpoint to point to the primary database instance. Configure the application to use this endpoint. Configure AWS CloudTrail to run an AWS Lambda function to promote the replica instance and modify the custom endpoint to point to the newly promoted instance.
C. Create an AWS Lambda function to modify the application's AWS CloudFormation template to promote the replica, apply the template to update the stack, and point the application to the newly promoted instance. Create an Amazon CloudWatch alarm to trigger this Lambda function after the failure event occurs.
D. Store the Aurora endpoint in AWS Systems Manager Parameter Store. Create an Amazon EventBridge (Amazon CloudWatch Events) event that detects the database failure and runs an AWS Lambda function to promote the replica instance and update the endpoint URL stored in AWS Systems Manager Parameter Store. Code the application to reload the endpoint from Parameter Store if a database connection fails.
Suggested answer: B
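
Whichever trigger is chosen, the Lambda function at the heart of the automation promotes the cross-Region replica cluster. A minimal sketch, assuming hypothetical identifiers and including the Parameter Store handoff described in option D:

```python
import boto3

REPLICA_REGION = "us-west-2"            # hypothetical DR Region
REPLICA_CLUSTER = "app-aurora-replica"  # hypothetical cluster identifier


def handler(event, context):
    rds = boto3.client("rds", region_name=REPLICA_REGION)

    # Detach the cross-Region replica and promote it to a standalone primary.
    rds.promote_read_replica_db_cluster(DBClusterIdentifier=REPLICA_CLUSTER)

    # Publish the new writer endpoint so the application can reload it on
    # connection failure (option D's Parameter Store handoff).
    endpoint = rds.describe_db_clusters(
        DBClusterIdentifier=REPLICA_CLUSTER
    )["DBClusters"][0]["Endpoint"]

    boto3.client("ssm", region_name=REPLICA_REGION).put_parameter(
        Name="/app/db/endpoint", Value=endpoint, Type="String", Overwrite=True
    )
```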

If Ansible encounters a resource that does not meet the requirements specified in the play, it makes the necessary changes to the resource; however, if the resource is already in the desired state, Ansible does nothing. This is an example of which methodology?

A. Idempotency
B. Immutability
C. Convergence
D. Infrastructure as Code
Suggested answer: A

Explanation:

Idempotency states that changes are only made if a resource does not meet the requirement specifications. If a change is made, it is made in place and will not break existing resources.

Reference: http://docs.ansible.com/ansible/glossary.html
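
The same pattern is easy to see outside Ansible. A small Python sketch of an idempotent operation: it inspects the current state first and changes nothing when the state already matches the desired one.

```python
import os


def ensure_directory(path: str, mode: int = 0o755) -> bool:
    """Idempotently ensure a directory exists with the given permissions.

    Returns True if a change was made (Ansible's "changed"), or False if the
    resource was already in the desired state (Ansible's "ok").
    """
    if os.path.isdir(path) and (os.stat(path).st_mode & 0o777) == mode:
        return False  # desired state already met -> do nothing
    os.makedirs(path, exist_ok=True)
    os.chmod(path, mode)
    return True
```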

A global company with distributed Development teams built a web application using a microservices architecture running on Amazon ECS. Each application service is independent and runs as a service in the ECS cluster. The container build files and source code reside in a private GitHub source code repository. Separate ECS clusters exist for development, testing, and production environments. Developers are required to push features to branches in the GitHub repository and then merge the changes into an environment-specific branch (development, test, or production). This merge needs to trigger an automated pipeline to run a build and a deployment to the appropriate ECS cluster.

What should the DevOps Engineer recommend as an automated solution to these requirements?

A. Create an AWS CloudFormation stack for the ECS cluster and AWS CodePipeline services. Store the container build files in an Amazon S3 bucket. Use a post-commit hook to trigger a CloudFormation stack update that deploys the ECS cluster. Add a task in the ECS cluster to build and push images to Amazon ECR, based on the container build files in S3.
B. Create a separate pipeline in AWS CodePipeline for each environment. Trigger each pipeline based on commits to the corresponding environment branch in GitHub. Add a build stage to launch AWS CodeBuild to create the container image from the build file and push it to Amazon ECR. Then add another stage to update the Amazon ECS task and service definitions in the appropriate cluster for that environment.
C. Create a pipeline in AWS CodePipeline. Configure it to be triggered by commits to the master branch in GitHub. Add a stage to use the Git commit message to determine which environment the commit should be applied to, then call the createimage Amazon ECR command to build the image, passing it the container build file. Then add a stage to update the ECS task and service definitions in the appropriate cluster for that environment.
D. Create a new repository in AWS CodeCommit. Configure a scheduled project in AWS CodeBuild to synchronize the GitHub repository to the new CodeCommit repository. Create a separate pipeline for each environment triggered by changes to the CodeCommit repository. Add a stage using AWS Lambda to build the container image and push to Amazon ECR. Then add another stage to update the ECS task and service definitions in the appropriate cluster for that environment.
Suggested answer: B
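
The final stage in answer B boils down to two ECS calls: register a task definition revision that references the image CodeBuild just pushed, then point the environment's service at it. A boto3 sketch with hypothetical names:

```python
import boto3

ecs = boto3.client("ecs")

# Register a new task definition revision pointing at the freshly pushed
# image (family, container name, and image URI are hypothetical).
td = ecs.register_task_definition(
    family="web-service",
    containerDefinitions=[
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",
            "memory": 512,
            "essential": True,
        }
    ],
)

# Point the environment's service at the new revision; ECS performs a
# rolling replacement of the running tasks.
ecs.update_service(
    cluster="development",
    service="web-service",
    taskDefinition=td["taskDefinition"]["taskDefinitionArn"],
)
```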

An ecommerce company is receiving reports that its order history page is experiencing delays in reflecting the processing status of orders. The order processing system consists of an AWS Lambda function using reserved concurrency. The Lambda function processes order messages from an Amazon SQS queue and inserts processed orders into an Amazon DynamoDB table. The DynamoDB table has Auto Scaling enabled for read and write capacity. Which actions will diagnose and resolve the delay? (Choose two.)

A. Check the ApproximateAgeOfOldestMessage metric for the SQS queue and increase the Lambda function concurrency limit.
B. Check the ApproximateAgeOfOldestMessage metric for the SQS queue and configure a redrive policy on the SQS queue.
C. Check the NumberOfMessagesSent metric for the SQS queue and increase the SQS queue visibility timeout.
D. Check the ThrottledWriteRequests metric for the DynamoDB table and increase the maximum write capacity units for the table's Auto Scaling policy.
E. Check the Throttles metric for the Lambda function and increase the Lambda function timeout.
Suggested answer: C, E
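
The options differ in which metric to read and which knob to turn. As a neutral illustration of two of the operations they mention, a boto3 sketch that reads a queue metric from CloudWatch and adjusts the function's reserved concurrency (queue and function names are hypothetical):

```python
from datetime import datetime, timedelta

import boto3

# Diagnose: read the queue's CloudWatch metric for the last hour.
cw = boto3.client("cloudwatch")
stats = cw.get_metric_statistics(
    Namespace="AWS/SQS",
    MetricName="ApproximateAgeOfOldestMessage",
    Dimensions=[{"Name": "QueueName", "Value": "orders"}],  # hypothetical
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Maximum"],
)

# Remediate: raise the function's reserved concurrency so more messages
# can be processed in parallel.
boto3.client("lambda").put_function_concurrency(
    FunctionName="process-orders",  # hypothetical
    ReservedConcurrentExecutions=100,
)
```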

A web application with multiple services runs on Amazon EC2 instances behind an Application Load Balancer. The application stores data in an Amazon RDS Multi-AZ DB instance. The instance health check used by the load balancer returns PASS if at least one service is running on the instance.

The company uses AWS CodePipeline with AWS CodeBuild and AWS CodeDeploy steps to deploy code to test and production environments. Recently, a new version was unable to connect to the database server in the test environment. One process was running, so the health checks reported healthy and the application was promoted to production, causing a production outage. The company wants to ensure that test builds are fully functional before a promotion to production.

Which changes should a DevOps Engineer make to the test and deployment process? (Choose two.)

A. Add an automated functional test to the pipeline that ensures solid test cases are performed.
B. Add a manual approval action to the CodeDeploy deployment pipeline that requires a Testing Engineer to validate the testing environment.
C. Refactor the health check endpoint the Elastic Load Balancer is checking to better validate actual application functionality.
D. Refactor the health check endpoint the Elastic Load Balancer is checking to return a text-based status result and configure the load balancer to check for a valid response.
E. Add a dependency checking step to the existing testing framework to ensure compatibility.
Suggested answer: B, C
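
Answer C's refactored health check should exercise the dependencies each service needs, not merely confirm that a process is running. A sketch, assuming Flask and a MySQL driver; hostname, user, and database names are placeholders:

```python
import os

import pymysql  # assumes a MySQL client library is installed
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/healthz")
def healthz():
    # Deep health check: fail if the database is unreachable, so the ALB
    # (and the pipeline's test stage) sees real application health.
    try:
        conn = pymysql.connect(
            host="db.internal",                  # hypothetical hostname
            user="app",
            password=os.environ["DB_PASSWORD"],  # injected, not hard-coded
            database="orders",
            connect_timeout=2,
        )
        conn.ping()
        conn.close()
    except Exception:
        return jsonify(status="unhealthy"), 503  # ALB marks target unhealthy
    return jsonify(status="ok"), 200
```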