Amazon DOP-C01 Practice Test - Questions Answers, Page 52


A company wants to implement a CI/CD pipeline for an application that is deployed on AWS. The company also has a source-code analysis tool hosted on premises that checks for security flaws. The tool has not yet been migrated to AWS and can be accessed only from the on-premises server. The company wants to run checks against the source code as part of the pipeline before the code is compiled. The checks take anywhere from minutes to an hour to complete. How can a DevOps Engineer meet these requirements?

A.
Use AWS CodePipeline to create a pipeline. Add an action to the pipeline to invoke an AWS Lambda function after the source stage. Have the Lambda function invoke the source-code analysis tool on premises against the source input from CodePipeline. The function then waits for the execution to complete and places the output in a specified Amazon S3 location.
B.
Use AWS CodePipeline to create a pipeline, then create a custom action type. Create a job worker for the on-premises server that polls CodePipeline for job requests, initiates the tests, and returns the results. Configure the pipeline to invoke the custom action after the source stage.
C.
Use AWS CodePipeline to create a pipeline. Add a step after the source stage to make an HTTPS request to the on-premises-hosted web service that invokes a test with the source-code analysis tool. When the analysis is complete, the web service sends the results back by putting the results in an Amazon S3 output location provided by CodePipeline.
D.
Use AWS CodePipeline to create a pipeline. Create a shell script that copies the input source code to a location on premises. Invoke the source-code analysis tool and return the results to CodePipeline. Invoke the shell script by adding a custom script action after the source stage.
Suggested answer: B
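Answer B works because a custom-action job worker makes only outbound HTTPS calls to CodePipeline, so the on-premises tool never needs to be reachable from AWS, and a long-running check simply delays the job result. The sketch below shows the shape of that plumbing; boto3's "codepipeline" client is assumed for a real worker, but the client is passed in here so the structure can be shown (and tested) without AWS access. All names are illustrative.

```python
# The custom action type the pipeline's test stage would reference.
# Key names mirror CodePipeline's CreateCustomActionType request shape.
custom_action = {
    "category": "Test",
    "provider": "OnPremSourceAnalysis",   # illustrative provider name
    "version": "1",
    "inputArtifactDetails": {"minimumCount": 1, "maximumCount": 1},
    "outputArtifactDetails": {"minimumCount": 0, "maximumCount": 1},
}

def run_worker_once(client):
    """Poll CodePipeline for one job, run the analysis, report the result.

    `client` is expected to behave like boto3's "codepipeline" client.
    """
    jobs = client.poll_for_jobs(
        actionTypeId={
            "category": custom_action["category"],
            "owner": "Custom",
            "provider": custom_action["provider"],
            "version": custom_action["version"],
        },
        maxBatchSize=1,
    ).get("jobs", [])
    for job in jobs:
        # Claim the job so no other worker picks it up.
        client.acknowledge_job(jobId=job["id"], nonce=job["nonce"])
        # ... download the input artifact and run the on-premises tool here ...
        client.put_job_success_result(jobId=job["id"])
```

Because the worker polls, the hour-long check is just a slow loop iteration; CodePipeline waits for the job result rather than timing out a synchronous call (the usual problem with the Lambda approach in option A, which is capped by Lambda's execution limit).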

A company is using AWS to deploy an application. The development team must automate the deployments. The team has created an AWS CodePipeline pipeline to deploy the application to Amazon EC2 instances using AWS CodeDeploy after it has been built using AWS CodeBuild.

The team wants to add automated testing to the pipeline to confirm that the application is healthy before deploying the code to the EC2 instances. The team also requires a manual approval action before the application is deployed, even if the tests are successful. The testing and approval must be accomplished at the lowest costs, using the simplest management solution. Which solution will meet these requirements?

A.
Create a manual approval action after the build action of the pipeline. Use Amazon SNS to inform the team of the stage being triggered. Next, add a test action using CodeBuild to perform the required tests. At the end of the pipeline, add a deploy action to deploy the application to the next stage.
B.
Create a test action after the CodeBuild build of the pipeline. Configure the action to use CodeBuild to perform the required tests. If these tests are successful, mark the action as successful. Add a manual approval action that uses Amazon SNS to notify the team, and add a deploy action to deploy the application to the next stage.
C.
Create a new pipeline that uses a source action that gets the code from the same repository as the first pipeline. Add a deploy action to deploy the code to a test environment. Use a test action using AWS Lambda to test the deployment. Add a manual approval action by using Amazon SNS to notify the team, and add a deploy action to deploy the application to the next stage.
D.
Create a test action after the build action. Use a Jenkins server on Amazon EC2 to perform the required tests and mark the action as successful if the tests pass. Create a manual approval action that uses Amazon SQS to notify the team and add a deploy action to deploy the application to the next stage.
Suggested answer: B
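The ordering in answer B matters: tests run before the approval gate, so approvers only see builds that already passed, and reusing CodeBuild for the tests avoids standing infrastructure (unlike the Jenkins server in option D). A minimal sketch of that stage sequence, mirroring CodePipeline's declarative stage/action model with illustrative names:

```python
# Stage order for answer B: Source -> Build -> Test -> Approve -> Deploy.
# Providers are the real CodePipeline action providers; structure is simplified.
pipeline_stages = [
    {"name": "Source",  "actions": [{"category": "Source",   "provider": "CodeCommit"}]},
    {"name": "Build",   "actions": [{"category": "Build",    "provider": "CodeBuild"}]},
    # Reusing CodeBuild for tests: pay per build minute, nothing to manage.
    {"name": "Test",    "actions": [{"category": "Test",     "provider": "CodeBuild"}]},
    # A manual approval action notifies via an SNS topic and blocks the
    # pipeline until someone approves or rejects.
    {"name": "Approve", "actions": [{"category": "Approval", "provider": "Manual"}]},
    {"name": "Deploy",  "actions": [{"category": "Deploy",   "provider": "CodeDeploy"}]},
]

order = [stage["name"] for stage in pipeline_stages]
```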

An n-tier application requires a table in an Amazon RDS MySQL DB instance to be dropped and repopulated at each deployment. This process can take several minutes and the web tier cannot come online until the process is complete. Currently, the web tier is configured in an Amazon EC2 Auto Scaling group, with instances being terminated and replaced at each deployment. The MySQL table is populated by running a SQL query through an AWS CodeBuild job. What should be done to ensure that the web tier does not come online before the database is completely configured?

A.
Use Amazon Aurora as a drop-in replacement for RDS MySQL. Use snapshots to populate the table with the correct data.
B.
Modify the launch configuration of the Auto Scaling group to pause user data execution for 600 seconds, allowing the table to be populated.
C.
Use AWS Step Functions to monitor and maintain the state of data population. Mark the database in service before continuing with the deployment.
D.
Use an EC2 Auto Scaling lifecycle hook to pause the configuration of the web tier until the table is populated.
Suggested answer: D
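A lifecycle hook (answer D) holds new web-tier instances in the Pending:Wait state until something explicitly releases them, which is more reliable than the fixed 600-second sleep in option B since the SQL job "can take several minutes" with no upper bound given. The sketch below shows the hook parameters and the release call; names and the timeout are illustrative, and in practice the calls go through boto3's "autoscaling" client (e.g. from the CodeBuild job once the table is populated).

```python
# Parameters for PutLifecycleHook: pause each launching instance until released.
hook_params = {
    "LifecycleHookName": "wait-for-db-populate",    # illustrative
    "AutoScalingGroupName": "web-tier-asg",         # illustrative
    "LifecycleTransition": "autoscaling:EC2_INSTANCE_LAUNCHING",
    "HeartbeatTimeout": 3600,     # allow up to an hour for the SQL job
    "DefaultResult": "ABANDON",   # if never released, terminate instead of serving
}

def release_instance(asg_client, instance_id):
    """Call after the table is populated so the instance finishes launching."""
    asg_client.complete_lifecycle_action(
        LifecycleHookName=hook_params["LifecycleHookName"],
        AutoScalingGroupName=hook_params["AutoScalingGroupName"],
        InstanceId=instance_id,
        LifecycleActionResult="CONTINUE",
    )
```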

You are using AWS Elastic Beanstalk to deploy your application and must make data stored on an Amazon Elastic Block Store (EBS) volume snapshot available to the Amazon Elastic Compute Cloud (EC2) instances. How can you modify your Elastic Beanstalk environment so that the data is added to the Amazon EC2 instances every time you deploy your application?

A.
Add commands to a configuration file in the .ebextensions folder of your deployable archive that mount an additional Amazon EBS volume on launch. Also add a "BlockDeviceMappings" option, and specify the snapshot to use for the block device in the Auto Scaling launch configuration.
B.
Add commands to a configuration file in the .ebextensions folder of your deployable archive that uses the create-volume Amazon EC2 API or CLI to create a new ephemeral volume based on the specified snapshot and then mounts the volume on launch.
C.
Add commands to the Amazon EC2 user data that will be executed by eb-init, which uses the create-volume Amazon EC2 API or CLI to create a new Amazon EBS volume based on the specified snapshot, and then mounts the volume on launch.
D.
Add commands to the Chef recipe associated with your environment, use the create-volume Amazon EC2 API or CLI to create a new Amazon EBS volume based on the specified snapshot, and then mount the volume on launch.
Suggested answer: A
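Answer A combines two .ebextensions features: a `BlockDeviceMappings` option in the Auto Scaling launch configuration namespace to attach a volume created from the snapshot, and `commands` to mount it on launch. A hedged sketch of such a configuration file follows; the device name, snapshot ID, and mount point are placeholders, and the mapping value format (`device=snapshot`) may need size and volume-type fields appended for a given instance type.

```yaml
# .ebextensions/ebs-snapshot.config -- illustrative sketch of answer A
option_settings:
  aws:autoscaling:launchconfiguration:
    # device=snapshot -- each new instance gets a volume from this snapshot
    BlockDeviceMappings: /dev/sdj=snap-0123456789abcdef0

commands:
  01_mkdir:
    command: mkdir -p /data
    test: "[ ! -d /data ]"
  02_mount:
    command: mount /dev/sdj /data
    test: "! mountpoint -q /data"
```

Because the mapping lives in the launch configuration, the data is attached on every deployment and scale-out event, which is what the question asks for.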


A DevOps Engineer is deploying a new web application. The company chooses AWS Elastic Beanstalk for deploying and managing the web application, and Amazon RDS MySQL to handle persistent data. The company requires that new deployments have minimal impact if they fail. The application resources must be at full capacity during deployment, and rolling back a deployment must also be possible. Which deployment sequence will meet these requirements?

A.
Deploy the application using Elastic Beanstalk and connect to an external RDS MySQL instance using Elastic Beanstalk environment properties. Use Elastic Beanstalk features for a blue/green deployment to deploy the new release to a separate environment, and then swap the CNAME in the two environments to redirect traffic to the new version.
B.
Deploy the application using Elastic Beanstalk, and include RDS MySQL as part of the environment. Use default Elastic Beanstalk behavior to deploy changes to the application, and let rolling updates deploy changes to the application.
C.
Deploy the application using Elastic Beanstalk, and include RDS MySQL as part of the environment. Use Elastic Beanstalk immutable updates for application deployments.
D.
Deploy the application using Elastic Beanstalk, and connect to an external RDS MySQL instance using Elastic Beanstalk environment properties. Use Elastic Beanstalk immutable updates for application deployments.
Suggested answer: D
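The external-RDS half of answer D means the database survives any replacement of the web tier: the endpoint is supplied through Elastic Beanstalk environment properties, which the platform surfaces to the application as OS environment variables. A minimal sketch, assuming illustrative property names (`DB_HOST`, `DB_NAME`) that are not defined by Beanstalk itself:

```python
import os

def database_url():
    """Build the MySQL connection URL from Elastic Beanstalk env properties."""
    host = os.environ.get("DB_HOST", "localhost")  # set as an EB environment property
    name = os.environ.get("DB_NAME", "appdb")
    return f"mysql://{host}:3306/{name}"
```

With the database decoupled this way, immutable updates can freely create and destroy instances without any risk to persistent data, which is why options B and C (RDS inside the environment, where terminating the environment deletes the database) do not meet the "minimal impact if deployments fail" requirement.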

A retail company wants to use AWS Elastic Beanstalk to host its online sales website running on Java. Since this will be the production website, the CTO has the following requirements for the deployment strategy:

Zero downtime. While the deployment is ongoing, the current Amazon EC2 instances in service should remain in service. No deployment or any other action should be performed on the EC2 instances because they serve production traffic. A new fleet of instances should be provisioned for deploying the new application version.

Once the new application version is deployed successfully in the new fleet of instances, the new instances should be placed in service and the old ones should be removed. The rollback should be as easy as possible. If the new fleet of instances fails to deploy the new application version, it should be terminated and the current instances should continue serving traffic as normal. The resources within the environment (EC2 Auto Scaling group, Elastic Load Balancing, Elastic Beanstalk DNS CNAME) should remain the same, and no DNS change should be made. Which deployment strategy will meet the requirements?

A.
Use rolling deployments with a fixed amount of one instance at a time and set the healthy threshold to OK.
B.
Use rolling deployments with additional batch with a fixed amount of one instance at a time and set the healthy threshold to OK.
C.
Launch a new environment and deploy the new application version there, then perform a CNAME swap between environments.
D.
Use immutable environment updates to meet all the necessary requirements.
Suggested answer: D
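Immutable updates (answer D) provision a fresh fleet in a temporary Auto Scaling group inside the same environment, so the ELB, the permanent Auto Scaling group, and the Elastic Beanstalk CNAME never change; if the new fleet fails health checks it is simply terminated. The switch is a single option setting, sketched here as a hedged .ebextensions-style fragment (the same value can be set in the console or with the CLI):

```yaml
# Illustrative Elastic Beanstalk configuration enabling immutable deployments.
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Immutable
```

This is also why option C fails the requirements: a CNAME swap between two environments is exactly the DNS change the CTO ruled out.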

You are creating an application which stores extremely sensitive financial information. All information in the system must be encrypted at rest and in transit. Which of these is a violation of this policy?

A.
ELB SSL termination.
B.
ELB using Proxy Protocol v1.
C.
CloudFront Viewer Protocol Policy set to HTTPS redirection.
D.
Telling S3 to use AES256 on the server-side.
Suggested answer: A

Explanation:

Terminating SSL at the load balancer ends the encrypted leg of the connection, so traffic continues to the back-end instances over plain HTTP, removing the "S" for "Secure" in HTTPS. This violates the "encryption in transit" requirement in the scenario.

Reference: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-listener-config.htm
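The difference between a terminating listener and an end-to-end encrypted listener comes down to the back-end protocol. The illustrative listener shapes below follow the classic ELB listener structure; values are examples only:

```python
# SSL termination (the policy violation in answer A):
# the ELB decrypts, then forwards plaintext HTTP to the instances.
terminating_listener = {
    "Protocol": "HTTPS", "LoadBalancerPort": 443,
    "InstanceProtocol": "HTTP", "InstancePort": 80,   # plaintext hop behind the ELB
}

# End-to-end encryption: the ELB re-encrypts to the instances,
# so traffic is protected on every hop.
end_to_end_listener = {
    "Protocol": "HTTPS", "LoadBalancerPort": 443,
    "InstanceProtocol": "HTTPS", "InstancePort": 443,  # encrypted to the back end
}
```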

A DevOps engineer is scheduling legacy AWS KMS keys for deletion and has created a remediation AWS Lambda function that will re-enable a key if necessary. The engineer wants to automate this process with available AWS CloudTrail data so, if a key scheduled for deletion is in use, it will be re-enabled.

Which solution enables this automation?

A.
Create an Amazon CloudWatch Logs metric filter and alarm for KMS events with an error message. Set the remediation Lambda function as the target of the alarm.
B.
Create an Amazon CloudWatch Logs metric filter and alarm for KMS events with an error message. Create an Amazon SNS topic as the target of the alarm. Subscribe the remediation Lambda function to the SNS topic.
C.
Create an Amazon CloudWatch Events rule pattern looking for KMS service events with an error message. Create an Amazon SNS topic as the target of the rule. Subscribe the remediation Lambda function to the SNS topic.
D.
Use Amazon CloudTrail to alert for KMS service events with an error message. Set the remediation Lambda function as the target of the rule.
Suggested answer: A
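The metric-filter half of this solution assumes CloudTrail is already delivering events to a CloudWatch Logs log group. The sketch below shows illustrative `PutMetricFilter` parameters: a JSON filter pattern that counts KMS API calls returning an error, producing a metric an alarm can watch. The log group name, filter name, and pattern are assumptions, not fixed AWS values:

```python
# Illustrative PutMetricFilter parameters for counting failed KMS API calls
# in a CloudTrail log group (names and pattern are placeholders).
metric_filter = {
    "logGroupName": "CloudTrail/logs",
    "filterName": "kms-errors",
    # CloudTrail records are JSON, so a JSON filter pattern applies:
    # match KMS events that carry any errorCode.
    "filterPattern": '{ ($.eventSource = "kms.amazonaws.com") && ($.errorCode = "*") }',
    "metricTransformations": [{
        "metricName": "KMSErrorCount",
        "metricNamespace": "Custom/KMS",
        "metricValue": "1",   # each matching event counts as one
    }],
}
```

An alarm on `KMSErrorCount` then drives the remediation Lambda function that re-enables the key.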

What is the main difference between calling the commands ‘ansible’ and ‘ansible-playbook’ on the command line?

A.
‘ansible’ is for setting configuration and environment variables which ‘ansible-playbook’ will use when running plays.
B.
‘ansible-playbook’ is for running entire Playbooks while ‘ansible’ is for calling ad-hoc commands.
C.
‘ansible-playbook’ runs the playbooks by using the ‘ansible’ command to run the individual plays
D.
‘ansible’ is for running individual plays and ‘ansible-playbook’ is for running the entire playbook.
Suggested answer: B

Explanation:

The ‘ansible’ command is for running Ansible ad-hoc commands remotely via SSH. ‘ansible-playbook’ is for running Ansible Playbook projects.

Reference: http://docs.ansible.com/ansible/intro_adhoc.html
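The distinction is easiest to see side by side. These illustrative invocations assume an inventory that defines a "web" host group and a playbook named site.yml; they are not runnable without an Ansible installation and inventory:

```shell
# Ad-hoc: run a single module once across matching hosts.
ansible web -m ping
ansible web -m shell -a "uptime"

# Playbook: execute the whole set of plays defined in a YAML file.
ansible-playbook site.yml
```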

You need to implement A/B deployments for several multi-tier web applications. Each of them has its individual infrastructure: Amazon Elastic Compute Cloud (EC2) front-end servers, Amazon ElastiCache clusters, Amazon Simple Queue Service (SQS) queues, and Amazon Relational Database Service (RDS) instances. Which combination of services would give you the ability to control traffic between different deployed versions of your application?

A.
Create one AWS Elastic Beanstalk application and all AWS resources (using configuration files inside the application source bundle) for each web application. New versions would be deployed by creating new Elastic Beanstalk environments and using the Swap URLs feature.
B.
Using AWS CloudFormation templates, create one Elastic Beanstalk application and all AWS resources (in the same template) for each web application. New versions would be deployed using AWS CloudFormation templates to create new Elastic Beanstalk environments, and traffic would be balanced between them using weighted Round Robin (WRR) records in Amazon Route 53.
C.
Using AWS CloudFormation templates, create one Elastic Beanstalk application and all AWS resources (in the same template) for each web application. New versions would be deployed updating a parameter on the CloudFormation template and passing it to the cfn-hup helper daemon, and traffic would be balanced between them using Weighted Round Robin (WRR) records in Amazon Route 53.
D.
Create one Elastic Beanstalk application and all AWS resources (using configuration files inside the application source bundle) for each web application. New versions would be deployed updating the Elastic Beanstalk application version for the current Elastic Beanstalk environment.
Suggested answer: B
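What makes answer B an A/B deployment rather than an all-or-nothing swap is the Route 53 weighted round-robin piece: two records share one name, a `SetIdentifier` distinguishes them, and traffic splits in proportion to `Weight`. The sketch below shows an illustrative `ChangeResourceRecordSets` change batch sending 90% of traffic to the old environment and 10% to the new one; domain names and environment URLs are placeholders:

```python
# Illustrative Route 53 change batch for weighted A/B traffic control.
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "SetIdentifier": "blue",    # current version
                "Weight": 90,
                "TTL": 60,
                "ResourceRecords": [{"Value": "blue-env.elasticbeanstalk.com"}],
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "SetIdentifier": "green",   # new version under test
                "Weight": 10,
                "TTL": 60,
                "ResourceRecords": [{"Value": "green-env.elasticbeanstalk.com"}],
            },
        },
    ]
}

weights = {c["ResourceRecordSet"]["SetIdentifier"]: c["ResourceRecordSet"]["Weight"]
           for c in change_batch["Changes"]}
```

Shifting the split is just another `UPSERT` with new weights, which is the fine-grained traffic control the Swap URLs feature in option A cannot provide.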