
Amazon DOP-C01 Practice Test - Questions Answers, Page 17

A company wants to use Amazon ECS to provide a Docker container runtime environment. For compliance reasons, all Amazon EBS volumes used in the ECS cluster must be encrypted. Rolling updates will be made to the cluster instances and the company wants the instances drained of all tasks before being terminated.

How can these requirements be met? (Choose two.)

A.
Modify the default ECS AMI user data to create a script that executes docker rm -f {id} for all running container instances. Copy the script to the /etc/init.d/rc.d directory and execute chkconfig, enabling the script to run during operating system shutdown.
B.
Use AWS CodePipeline to build a pipeline that discovers the latest Amazon-provided ECS AMI, then copies the image to an encrypted AMI, outputting the encrypted AMI ID. Use the encrypted AMI ID when deploying the cluster.
C.
Copy the default AWS CloudFormation template that ECS uses to deploy cluster instances. Modify the template resource EBS configuration setting to set 'Encrypted: True' and include the AWS KMS alias 'aws/ebs' to encrypt the AMI.
D.
Create an Auto Scaling lifecycle hook backed by an AWS Lambda function that uses the AWS SDK to mark a terminating instance as DRAINING. Prevent the lifecycle hook from completing until the number of running tasks on the instance is zero.
E.
Create an IAM role that allows the action ECS::EncryptedImage. Configure the AWS CLI and a profile to use this role. Start the cluster using the AWS CLI, providing the --use-encrypted-image and --kms-key arguments to the create-cluster ECS command.
Suggested answer: C, D
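A minimal sketch of the option D mechanism, assuming a Lambda function triggered by the Auto Scaling lifecycle event. The cluster name is an assumption, and the boto3 calls are only reached when the handler runs inside AWS; the decision helper is pure logic.

```python
def hook_decision(running_task_count):
    """Pure decision logic: keep the hook open until the instance is empty."""
    return "CONTINUE" if running_task_count == 0 else "WAIT"

def handler(event, context):
    import boto3  # imported lazily so the module loads without the SDK installed
    ecs = boto3.client("ecs")
    autoscaling = boto3.client("autoscaling")

    cluster = "my-cluster"  # assumption: the ECS cluster name
    instance_id = event["detail"]["EC2InstanceId"]

    # Find the container instance ARN for the terminating EC2 instance.
    arns = ecs.list_container_instances(cluster=cluster)["containerInstanceArns"]
    described = ecs.describe_container_instances(
        cluster=cluster, containerInstances=arns)["containerInstances"]
    target = next(ci for ci in described if ci["ec2InstanceId"] == instance_id)

    # Mark it DRAINING so ECS reschedules its tasks onto other instances.
    ecs.update_container_instances_state(
        cluster=cluster,
        containerInstances=[target["containerInstanceArn"]],
        status="DRAINING")

    if hook_decision(target["runningTasksCount"]) == "CONTINUE":
        autoscaling.complete_lifecycle_action(
            LifecycleHookName=event["detail"]["LifecycleHookName"],
            AutoScalingGroupName=event["detail"]["AutoScalingGroupName"],
            LifecycleActionResult="CONTINUE",
            InstanceId=instance_id)
    # Otherwise leave the hook open and let a retry re-invoke the function
    # until the running-task count reaches zero.
```

In practice the Lambda function is re-invoked (for example via an SNS retry or a heartbeat) until draining finishes, at which point the lifecycle action completes and termination proceeds.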

A company is using AWS CodePipeline to deploy an application. A recent policy change requires that a member of the company's security team sign off on any application changes before they are deployed into production. The approval should be recorded and retained. Which combination of actions will meet these new requirements? (Choose two.)

A.
Configure CodePipeline with Amazon CloudWatch Logs to retain data.
B.
Configure CodePipeline to deliver action logs to Amazon S3.
C.
Create an AWS CloudTrail trail to deliver logs to Amazon S3.
D.
Create a custom CodePipeline action to invoke an AWS Lambda function for approval. Create a policy that gives the security team access to manage custom CodePipeline actions.
E.
Create a manual approval CodePipeline action before the deployment step. Create a policy that grants the security team access to approve manual approval stages.
Suggested answer: C, E
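An illustrative sketch of answer E: a manual approval action declared before the deploy stage, and an IAM policy granting the security team the approval permission. The pipeline, stage, and action names are assumptions.

```python
# A manual approval action as it would appear in a CodePipeline stage
# declaration. Category "Approval" / Provider "Manual" is the built-in
# manual approval action type.
approval_action = {
    "Name": "SecuritySignOff",          # assumed action name
    "ActionTypeId": {
        "Category": "Approval",
        "Owner": "AWS",
        "Provider": "Manual",
        "Version": "1",
    },
    "RunOrder": 1,
}

# IAM policy for the security team: the codepipeline:PutApprovalResult
# action is what records an approval or rejection. Pipeline/stage names
# in the resource ARN are assumptions.
security_team_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "codepipeline:PutApprovalResult",
        "Resource": "arn:aws:codepipeline:*:*:my-pipeline/Deploy/SecuritySignOff",
    }],
}
```

Each approval result is an API call, so a CloudTrail trail (answer C) captures who approved what and when, satisfying the record-and-retain requirement.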

You have an ELB setup in AWS with EC2 instances running behind it. You have been requested to monitor the incoming connections to the ELB. Which of the below options can suffice this requirement?

A.
Use AWS CloudTrail with your load balancer
B.
Enable access logs on the load balancer
C.
Use a CloudWatch Logs Agent
D.
Create a custom metric CloudWatch filter on your load balancer
Suggested answer: B

Explanation:

Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log entry contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot issues. Option A is invalid because CloudTrail records API calls made to the ELB service, not the incoming connections to the load balancer itself. Options C and D are invalid because the load balancer already provides a built-in access logging feature.
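To make the field list concrete, here is a small parser for one Classic Load Balancer access log entry. The sample line is fabricated for illustration, but follows the documented space-separated layout with quoted request and user-agent fields.

```python
import shlex

def parse_elb_log_line(line):
    """Split one Classic ELB access log entry into named fields."""
    f = shlex.split(line)  # shlex honors the quoted "request" and "user_agent"
    return {
        "timestamp": f[0],
        "elb": f[1],
        "client": f[2],                 # client IP:port
        "backend": f[3],                # backend IP:port
        "request_processing_time": float(f[4]),
        "backend_processing_time": float(f[5]),
        "response_processing_time": float(f[6]),
        "elb_status_code": int(f[7]),
        "backend_status_code": int(f[8]),
        "received_bytes": int(f[9]),
        "sent_bytes": int(f[10]),
        "request": f[11],               # e.g. "GET http://host:80/ HTTP/1.1"
    }

# Fabricated sample entry in the Classic ELB layout:
sample = ('2024-01-01T12:00:00.123456Z my-loadbalancer 203.0.113.10:54321 '
          '10.0.0.5:80 0.000045 0.001120 0.000038 200 200 0 512 '
          '"GET http://example.com:80/ HTTP/1.1" "curl/8.0" - -')
entry = parse_elb_log_line(sample)
```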

You need to create a simple, holistic check for your system's general availability and uptime. Your system presents itself as an HTTP-speaking API. What is the simplest tool on AWS to achieve this?

A.
Route53 Health Checks
B.
CloudWatch Health Checks
C.
AWS ELB Health Checks
D.
EC2 Health Checks
Suggested answer: A

Explanation:

With a single API call, you can create a Route 53 health check that runs in perpetuity, pinging your service via HTTP every 10 or 30 seconds. Amazon Route 53 must be able to establish a TCP connection with the endpoint within four seconds. In addition, the endpoint must respond with an HTTP status code of 200 or greater and less than 400 within two seconds after connecting.

Reference: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-determining-health-ofendpoints.html
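A minimal HealthCheckConfig sketch matching the explanation: an HTTP check every 30 seconds. With boto3 this dictionary would be passed as the HealthCheckConfig argument to route53.create_health_check; the endpoint and path are assumptions.

```python
# Sketch of a Route 53 health check configuration. The domain name and
# resource path are illustrative assumptions.
health_check_config = {
    "Type": "HTTP",
    "FullyQualifiedDomainName": "api.example.com",  # assumed endpoint
    "Port": 80,
    "ResourcePath": "/health",                      # assumed path
    "RequestInterval": 30,   # Route 53 supports 10 or 30 seconds
    "FailureThreshold": 3,   # consecutive failures before unhealthy
}
```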

A company is using several AWS CloudFormation templates for deploying infrastructure as code. In most of the deployments, the company uses Amazon EC2 Auto Scaling groups. A DevOps Engineer needs to update the AMIs for the Auto Scaling group in the template if newer AMIs are available. How can these requirements be met?

A.
Manage the AMI mappings in the CloudFormation template. Use Amazon CloudWatch Events for detecting new AMIs and updating the mapping in the template. Reference the map in the launch configuration resource block.
B.
Use conditions in the AWS CloudFormation template to check if new AMIs are available and return the AMI ID. Reference the returned AMI ID in the launch configuration resource block.
C.
Use an AWS Lambda-backed custom resource in the template to fetch the AMI IDs. Reference the returned AMI ID in the launch configuration resource block.
D.
Launch an Amazon EC2 m4.small instance and run a script on it to check for new AMIs. If new AMIs are available, the script should update the launch configuration resource block with the new AMI ID.
Suggested answer: C

Explanation:

Reference:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/walkthrough-customresources-lambda-lookupamiids.html
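A sketch of the Lambda-backed custom resource pattern from the referenced walkthrough, expressed here as a template fragment in dictionary form. The resource and function names are assumptions.

```python
# CloudFormation template fragment (as a dict): a Custom::AMIInfo resource
# backed by a Lambda function that looks up the latest AMI ID, referenced
# from the launch configuration via Fn::GetAtt. Names are illustrative.
template_fragment = {
    "AMIInfo": {
        "Type": "Custom::AMIInfo",
        "Properties": {
            # ServiceToken points CloudFormation at the lookup function.
            "ServiceToken": {"Fn::GetAtt": ["AMILookupFunction", "Arn"]},
            "Region": {"Ref": "AWS::Region"},
        },
    },
    "LaunchConfig": {
        "Type": "AWS::AutoScaling::LaunchConfiguration",
        "Properties": {
            # The custom resource returns the AMI ID as an attribute,
            # so every stack update can pick up the newest image.
            "ImageId": {"Fn::GetAtt": ["AMIInfo", "Id"]},
            "InstanceType": "t3.micro",  # assumed instance type
        },
    },
}
```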

Which EBS volume type is best for high performance NoSQL cluster deployments?

A.
io1
B.
gp1
C.
standard
D.
gp2
Suggested answer: A

Explanation:

io1 volumes, or Provisioned IOPS (PIOPS) SSDs, are best for critical business applications that require sustained IOPS performance, or more than 10,000 IOPS or 160 MiB/s of throughput per volume, such as large database workloads like MongoDB.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
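One practical io1 sizing detail is that provisioned IOPS are capped relative to volume size by a maximum IOPS-to-GiB ratio (50:1 at the time of writing; check the current EBS limits before relying on this). A small helper illustrates the check:

```python
def max_io1_iops(size_gib, ratio=50):
    """Highest IOPS you can provision for an io1 volume of this size,
    under the assumed 50:1 IOPS-to-GiB ratio."""
    return size_gib * ratio

def valid_io1_request(size_gib, iops, ratio=50):
    """True if the requested IOPS fits within the ratio cap."""
    return iops <= max_io1_iops(size_gib, ratio)
```

For example, a 100 GiB io1 volume could be provisioned with at most 5,000 IOPS under that ratio.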

A company is using AWS Organizations and wants to implement a governance strategy with the following requirements:

AWS resource access is restricted to the same two Regions for all accounts.

AWS services are limited to a specific group of authorized services for all accounts.

Authentication is provided by Active Directory.

Access permissions are organized by job function and are identical in each account.

Which solution will meet these requirements?

A.
Establish an organizational unit (OU) with group policies in the master account to restrict Regions and authorized services. Use AWS CloudFormation StackSets to provision roles with permissions for each job function, including an IAM trust policy for IAM identity provider authentication in each account.
B.
Establish a permission boundary in the master account to restrict Regions and authorized services. Use AWS CloudFormation StackSet to provision roles with permissions for each job function, including an IAM trust policy for IAM identity provider authentication in each account.
C.
Establish a service control in the master account to restrict Regions and authorized services. Use AWS Resource Access Manager to share master account roles with permissions for each job function, including AWS SSO for authentication in each account.
D.
Establish a service control in the master account to restrict Regions and authorized services. Use CloudFormation StackSet to provision roles with permissions for each job function, including an IAM trust policy for IAM identity provider authentication in each account.
Suggested answer: D
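An illustrative service control policy (SCP) in the spirit of option D: deny actions outside two allowed Regions and outside an authorized service list. The Region and service choices are assumptions, and global services typically need Region carve-outs that are omitted here for brevity.

```python
# Example SCP attached at the organization root or an OU. SCPs are
# deny-by-exception guardrails, so both statements use Deny.
region_and_service_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideAllowedRegions",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                # aws:RequestedRegion limits where API calls may land.
                "StringNotEquals": {
                    "aws:RequestedRegion": ["us-east-1", "eu-west-1"]  # assumed Regions
                }
            },
        },
        {
            "Sid": "DenyUnauthorizedServices",
            "Effect": "Deny",
            # NotAction denies everything except the authorized services.
            "NotAction": ["ec2:*", "s3:*", "rds:*", "cloudformation:*"],  # assumed list
            "Resource": "*",
        },
    ],
}
```

CloudFormation StackSets then deploy identical job-function roles into every account, with each role's trust policy pointing at the SAML identity provider federated with Active Directory.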

When running a playbook on a remote target host you receive a Python error similar to "[Errno 13] Permission denied: `/home/nick/.ansible/tmp'". What would be the most likely cause of this problem?

A.
The user's home or `.ansible' directory on the Ansible system is not writable by the user running the play.
B.
The specified user does not exist on the remote system.
C.
The user running `ansible-playbook' must run it from their own home directory.
D.
The user's home or `.ansible' directory on the Ansible remote host is not writable by the user running the play.
Suggested answer: D

Explanation:

Each task that Ansible runs calls a module. When Ansible uses modules, it copies the module to the remote target system. In the error above, it attempted to copy the module to the remote user's home directory and found that either the home directory or the `.ansible' directory was not writable, so it could not continue.

Reference: http://docs.ansible.com/ansible/modules_intro.html
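If fixing the remote home directory's permissions is not an option, Ansible's remote temporary directory can be relocated. A minimal ansible.cfg sketch, assuming a world-writable path (the exact path shown is an example):

```ini
# ansible.cfg sketch: point the remote temp directory at a location the
# connecting user can write to. Path is illustrative.
[defaults]
remote_tmp = /tmp/.ansible-${USER}/tmp
```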

Which of the following tools does not directly support AWS OpsWorks for monitoring your stacks?

A.
AWS Config
B.
Amazon CloudWatch Metrics
C.
AWS CloudTrail
D.
Amazon CloudWatch Logs
Suggested answer: A

Explanation:

You can monitor your stacks in the following ways: AWS OpsWorks uses Amazon CloudWatch to provide thirteen custom metrics with detailed monitoring for each instance in the stack; AWS OpsWorks integrates with AWS CloudTrail to log every AWS OpsWorks API call and store the data in an Amazon S3 bucket; You can use Amazon CloudWatch Logs to monitor your stack's system, application, and custom logs.

Reference: http://docs.aws.amazon.com/opsworks/latest/userguide/monitoring.html

A developer tested an application locally and then deployed it to AWS Lambda. While testing the application remotely, the Lambda function fails with an access denied message. How can this issue be addressed?

A.
Update the Lambda function’s execution role to include the missing permissions.
B.
Update the Lambda function’s resource policy to include the missing permissions.
C.
Include an IAM policy document at the root of the deployment package and redeploy the Lambda function.
D.
Redeploy the Lambda function using an account with access to the AdministratorAccess policy.
Suggested answer: A

Explanation:

Reference: https://aws.amazon.com/premiumsupport/knowledge-center/access-denied-lambda-s3-bucket/
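A sketch of answer A: the execution role pairs a trust policy that lets Lambda assume the role with a permissions policy granting whatever the function needs (here, S3 read access, matching the linked article; the bucket name is an assumption).

```python
# Trust policy: allows the Lambda service to assume the execution role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: the "missing permission" from the question, here
# assumed to be S3 object reads. Bucket name is illustrative.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::my-app-bucket/*",  # assumed bucket
    }],
}
```

The execution role governs what the function itself may call; the resource policy (option B) governs who may invoke the function, which is why it does not fix an access-denied error raised by the function's own AWS calls.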
