ExamGecko

Amazon DOP-C01 Practice Test - Questions Answers, Page 17


Question 161


A company wants to use Amazon ECS to provide a Docker container runtime environment. For compliance reasons, all Amazon EBS volumes used in the ECS cluster must be encrypted. Rolling updates will be made to the cluster instances and the company wants the instances drained of all tasks before being terminated.

How can these requirements be met? (Choose two.)

A. Modify the default ECS AMI user data to create a script that executes `docker rm -f {id}` for all running container instances. Copy the script to the /etc/init.d/rc.d directory and execute chkconfig, enabling the script to run during operating system shutdown.
B. Use AWS CodePipeline to build a pipeline that discovers the latest Amazon-provided ECS AMI, then copies the image to an encrypted AMI, outputting the encrypted AMI ID. Use the encrypted AMI ID when deploying the cluster.
C. Copy the default AWS CloudFormation template that ECS uses to deploy cluster instances. Modify the template resource EBS configuration to set `Encrypted: true` and include the AWS KMS alias `aws/ebs` to encrypt the EBS volumes.
D. Create an Auto Scaling lifecycle hook backed by an AWS Lambda function that uses the AWS SDK to mark a terminating instance as DRAINING. Prevent the lifecycle hook from completing until the number of running tasks on the instance is zero.
E. Create an IAM role that allows the action ECS::EncryptedImage. Configure the AWS CLI and a profile to use this role. Start the cluster using the AWS CLI, providing the --use-encrypted-image and --kms-key arguments to the create-cluster ECS command.
Suggested answer: C, D
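The draining half of the answer (option D) can be sketched as a Lambda function behind the lifecycle hook. This is a minimal, hypothetical sketch: the cluster name and the event fields shown are assumptions about how the hook is wired up, not part of the question.

```python
# Hypothetical sketch of the lifecycle-hook Lambda from option D.
# The cluster name and event field names below are assumptions.

def drain_complete(container_instance):
    """True once the ECS container instance has no running tasks left."""
    return container_instance.get("runningTasksCount", 0) == 0

def handler(event, context):
    import boto3  # imported lazily so the module loads without the SDK
    ecs = boto3.client("ecs")
    autoscaling = boto3.client("autoscaling")

    cluster = "my-cluster"                           # assumption
    arn = event["detail"]["ContainerInstanceArn"]    # assumption

    # Mark the instance DRAINING so the ECS scheduler stops placing new
    # tasks on it and migrates running service tasks elsewhere.
    ecs.update_container_instances_state(
        cluster=cluster, containerInstances=[arn], status="DRAINING")

    desc = ecs.describe_container_instances(
        cluster=cluster, containerInstances=[arn])
    if drain_complete(desc["containerInstances"][0]):
        # Only now let Auto Scaling proceed with termination.
        autoscaling.complete_lifecycle_action(
            LifecycleHookName=event["detail"]["LifecycleHookName"],
            AutoScalingGroupName=event["detail"]["AutoScalingGroupName"],
            LifecycleActionToken=event["detail"]["LifecycleActionToken"],
            LifecycleActionResult="CONTINUE",
            InstanceId=event["detail"]["EC2InstanceId"])
```

In practice the hook's heartbeat would be extended and the check re-invoked until `drain_complete` returns True, rather than checking once.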

Question 162


A company is using AWS CodePipeline to deploy an application. A recent policy change requires that a member of the company's security team sign off on any application changes before they are deployed into production. The approval should be recorded and retained. Which combination of actions will meet these new requirements? (Choose two.)

A. Configure CodePipeline with Amazon CloudWatch Logs to retain data.
B. Configure CodePipeline to deliver action logs to Amazon S3.
C. Create an AWS CloudTrail trail to deliver logs to Amazon S3.
D. Create a custom CodePipeline action to invoke an AWS Lambda function for approval. Create a policy that gives the security team access to manage custom CodePipeline actions.
E. Create a manual approval CodePipeline action before the deployment step. Create a policy that grants the security team access to approve manual approval stages.
Suggested answer: C, E
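The manual-approval side (option E) is recorded through CodePipeline's PutApprovalResult API. A hedged sketch of how a security-team tool might respond to a pending approval; the pipeline, stage, and action names are placeholders, and the token is the one CodePipeline issues when the approval is requested:

```python
# Hypothetical sketch of recording a manual approval (option E).
# Pipeline/stage/action names and the token are assumptions.

def approval_result(summary, approved):
    """Build the result payload for PutApprovalResult."""
    return {"summary": summary, "status": "Approved" if approved else "Rejected"}

def approve(pipeline, stage, action, token, summary, approved=True):
    import boto3  # lazy import; caller needs codepipeline:PutApprovalResult
    codepipeline = boto3.client("codepipeline")
    codepipeline.put_approval_result(
        pipelineName=pipeline,
        stageName=stage,
        actionName=action,
        result=approval_result(summary, approved),
        token=token,  # issued by CodePipeline for this approval request
    )
```

CloudTrail (option C) then retains who called PutApprovalResult and when, which satisfies the record-keeping requirement.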

Question 163


You have an ELB set up in AWS with EC2 instances running behind it. You have been asked to monitor the incoming connections to the ELB. Which of the following options satisfies this requirement?

A. Use AWS CloudTrail with your load balancer
B. Enable access logs on the load balancer
C. Use a CloudWatch Logs agent
D. Create a custom metric CloudWatch filter on your load balancer
Suggested answer: B

Explanation:

Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log entry contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot issues. Option A is invalid because CloudTrail records API calls made to AWS services, not incoming connections. Options C and D are invalid because ELB already provides this logging feature.
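To illustrate what those log entries look like, here is a small sketch that pulls the client IP and status code out of a Classic Load Balancer access log line. The sample line is illustrative, not real traffic:

```python
# Sketch: extracting fields from a Classic Load Balancer access log entry.
# Field order: time, elb, client:port, backend:port, three latency fields,
# elb_status_code, backend_status_code, bytes, request, user agent, TLS info.

def parse_entry(line):
    fields = line.split()
    return {
        "time": fields[0],
        "client_ip": fields[2].split(":")[0],  # client is logged as ip:port
        "elb_status": int(fields[7]),          # ELB status code field
    }

sample = ('2024-09-16T12:00:00.123456Z my-elb 203.0.113.10:54321 '
          '10.0.0.5:80 0.000037 0.001 0.000039 200 200 0 57 '
          '"GET http://example.com:80/ HTTP/1.1" "curl/8.0" - -')

entry = parse_entry(sample)
```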


Question 164


You need to create a simple, holistic check for your system's general availability and uptime. Your system presents itself as an HTTP-speaking API. What is the simplest tool on AWS to achieve this?

A. Route53 Health Checks
B. CloudWatch Health Checks
C. AWS ELB Health Checks
D. EC2 Health Checks
Suggested answer: A

Explanation:

You can create a health check that runs in perpetuity using Route 53, in one API call, which will check your service over HTTP every 10 or 30 seconds. Amazon Route 53 must be able to establish a TCP connection with the endpoint within four seconds. In addition, the endpoint must respond with an HTTP status code of 200 or greater and less than 400 within two seconds after connecting.
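The pass/fail criteria described above can be condensed into a small sketch:

```python
# Sketch of the Route 53 HTTP health-check criteria described above:
# TCP connect within 4 s, then a 2xx/3xx status within 2 s of connecting.

def endpoint_healthy(connect_seconds, response_seconds, status_code):
    return (connect_seconds <= 4.0
            and response_seconds <= 2.0
            and 200 <= status_code < 400)
```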

Reference: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-determining-health-ofendpoints.html


Question 165


A company is using several AWS CloudFormation templates for deploying infrastructure as code. In most of the deployments, the company uses Amazon EC2 Auto Scaling groups. A DevOps Engineer needs to update the AMIs for the Auto Scaling group in the template if newer AMIs are available. How can these requirements be met?

A. Manage the AMI mappings in the CloudFormation template. Use Amazon CloudWatch Events for detecting new AMIs and updating the mapping in the template. Reference the map in the launch configuration resource block.
B. Use conditions in the AWS CloudFormation template to check if new AMIs are available and return the AMI ID. Reference the returned AMI ID in the launch configuration resource block.
C. Use an AWS Lambda-backed custom resource in the template to fetch the AMI IDs. Reference the returned AMI ID in the launch configuration resource block.
D. Launch an Amazon EC2 m4.small instance and run a script on it to check for new AMIs. If new AMIs are available, the script should update the launch configuration resource block with the new AMI ID.
Suggested answer: C

Explanation:

Reference:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/walkthrough-customresources-lambda-lookupamiids.html
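A hypothetical core of the Lambda-backed custom resource: look up the newest matching AMI. The owner and name filter values are assumptions, and the real walkthrough linked above additionally signals the result back to CloudFormation with the cfn-response helper:

```python
# Hypothetical AMI lookup for a Lambda-backed custom resource.
# Filter name/owner values are assumptions; the linked walkthrough also
# sends the result back to CloudFormation, which is omitted here.

def latest_ami(images):
    """Pick the most recently created image from a DescribeImages result."""
    return max(images, key=lambda img: img["CreationDate"])["ImageId"]

def find_ecs_ami(region="us-east-1"):
    import boto3  # lazy import so the pure helper above works without the SDK
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_images(
        Owners=["amazon"],
        Filters=[{"Name": "name", "Values": ["amzn2-ami-ecs-hvm-*"]}],
    )
    return latest_ami(resp["Images"])
```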


Question 166


Which EBS volume type is best for high performance NoSQL cluster deployments?

A. io1
B. gp1
C. standard
D. gp2
Suggested answer: A

Explanation:

io1 volumes, or Provisioned IOPS (PIOPS) SSDs, are best for critical business applications that require sustained IOPS performance, or more than 10,000 IOPS or 160 MiB/s of throughput per volume, such as large database workloads like MongoDB.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
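One practical constraint when sizing io1 volumes is the documented limit of 50 provisioned IOPS per GiB of volume size. A small sketch of that sanity check:

```python
# Sketch: sanity-checking a Provisioned IOPS request. io1 allows up to
# 50 provisioned IOPS per GiB of volume size (a documented service limit).

def valid_io1_request(size_gib, iops):
    return iops <= size_gib * 50

# e.g. a 200 GiB volume can be provisioned with up to 10,000 IOPS
```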


Question 167


A company is using AWS Organizations and wants to implement a governance strategy with the following requirements:

AWS resource access is restricted to the same two Regions for all accounts.

AWS services are limited to a specific group of authorized services for all accounts.

Authentication is provided by Active Directory.

Access permissions are organized by job function and are identical in each account.

Which solution will meet these requirements?

A. Establish an organizational unit (OU) with group policies in the master account to restrict Regions and authorized services. Use AWS CloudFormation StackSets to provision roles with permissions for each job function, including an IAM trust policy for IAM identity provider authentication in each account.
B. Establish a permissions boundary in the master account to restrict Regions and authorized services. Use AWS CloudFormation StackSets to provision roles with permissions for each job function, including an IAM trust policy for IAM identity provider authentication in each account.
C. Establish a service control policy in the master account to restrict Regions and authorized services. Use AWS Resource Access Manager to share master account roles with permissions for each job function, including AWS SSO for authentication in each account.
D. Establish a service control policy in the master account to restrict Regions and authorized services. Use AWS CloudFormation StackSets to provision roles with permissions for each job function, including an IAM trust policy for IAM identity provider authentication in each account.
Suggested answer: A
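The "restrict Regions and authorized services" requirement is the kind of thing a service control policy attached in AWS Organizations expresses. A hedged sketch, assuming the two approved Regions are us-east-1 and us-west-2 and the authorized services are EC2, S3, and CloudWatch (all placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideApprovedRegions",
      "Effect": "Deny",
      "NotAction": ["iam:*", "organizations:*", "sts:*"],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "us-west-2"]}
      }
    },
    {
      "Sid": "AllowOnlyAuthorizedServices",
      "Effect": "Allow",
      "Action": ["ec2:*", "s3:*", "cloudwatch:*"],
      "Resource": "*"
    }
  ]
}
```

Global services such as IAM are excluded from the Region deny because their API calls are not made against one of the approved Regions.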

Question 168


When running a playbook on a remote target host you receive a Python error similar to "[Errno 13] Permission denied: `/home/nick/.ansible/tmp'". What would be the most likely cause of this problem?

A. The user's home or `.ansible' directory on the Ansible system is not writeable by the user running the play.
B. The specified user does not exist on the remote system.
C. The user running `ansible-playbook' must run it from their own home directory.
D. The user's home or `.ansible' directory on the Ansible remote host is not writeable by the user running the play.
Suggested answer: D

Explanation:

Each task that Ansible runs calls a module. When Ansible uses modules, it copies the module to the remote target system. In the error above it attempted to copy it to the remote user's home directory and found that either the home directory or the `.ansible' directory were not writeable and thus could not continue.

Reference: http://docs.ansible.com/ansible/modules_intro.html
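Besides fixing the directory permissions on the remote host, a common workaround is to point Ansible's remote working directory somewhere writeable. A hypothetical ansible.cfg fragment (the path is an assumption; `remote_tmp` is the relevant setting):

```ini
# Hypothetical workaround: move Ansible's remote temp directory out of
# the unwriteable home directory. The path here is an example.
[defaults]
remote_tmp = /tmp/.ansible-${USER}/tmp
```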


Question 169


Which of the following tools does not directly support AWS OpsWorks for monitoring your stacks?

A. AWS Config
B. Amazon CloudWatch Metrics
C. AWS CloudTrail
D. Amazon CloudWatch Logs
Suggested answer: A

Explanation:

You can monitor your stacks in the following ways: AWS OpsWorks uses Amazon CloudWatch to provide thirteen custom metrics with detailed monitoring for each instance in the stack; AWS OpsWorks integrates with AWS CloudTrail to log every AWS OpsWorks API call and store the data in an Amazon S3 bucket; You can use Amazon CloudWatch Logs to monitor your stack's system, application, and custom logs.

Reference: http://docs.aws.amazon.com/opsworks/latest/userguide/monitoring.html


Question 170


A developer tested an application locally and then deployed it to AWS Lambda. While testing the application remotely, the Lambda function fails with an access denied message. How can this issue be addressed?

A. Update the Lambda function’s execution role to include the missing permissions.
B. Update the Lambda function’s resource policy to include the missing permissions.
C. Include an IAM policy document at the root of the deployment package and redeploy the Lambda function.
D. Redeploy the Lambda function using an account with access to the AdministratorAccess policy.
Suggested answer: A

Explanation:

Reference: https://aws.amazon.com/premiumsupport/knowledge-center/access-denied-lambda-s3-bucket/
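For instance, if the denied call was an S3 read (the case the knowledge-center article above covers), option A means adding the missing statement to the execution role's policy. A sketch with a hypothetical bucket name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

The execution role governs what the function itself may call; the resource policy (option B) instead governs who may invoke the function, which is why it does not fix this error.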
