ExamGecko
Amazon DOP-C01 Practice Test - Questions Answers, Page 22

What is the maximum supported single-volume throughput on EBS?

A. 320 MiB/s
B. 160 MiB/s
C. 40 MiB/s
D. 640 MiB/s
Suggested answer: A

Explanation:

The ceiling throughput for a single Provisioned IOPS (PIOPS) volume on EBS is 320 MiB/s.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

You have decided that you need to change the instance type of your production instances, which are running as part of an Auto Scaling group. The entire architecture is deployed using a CloudFormation template. You currently have 4 instances in production. You cannot have any interruption in service and need to ensure that 2 instances are always running during the update. Which of the options listed below can be used for this?

A. AutoScalingRollingUpdate
B. AutoScalingScheduledAction
C. AutoScalingReplacingUpdate
D. AutoScalingIntegrationUpdate
Suggested answer: A

Explanation:

The AWS::AutoScaling::AutoScalingGroup resource supports an UpdatePolicy attribute, which defines how an Auto Scaling group resource is updated when an update to the CloudFormation stack occurs. A common approach to updating an Auto Scaling group is to perform a rolling update, which is done by specifying the AutoScalingRollingUpdate policy. This retains the same Auto Scaling group and replaces old instances with new ones according to the parameters specified. For more information on Auto Scaling updates, please refer to the link below.

Reference:

https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-group-rolling-updates/
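As a sketch, an UpdatePolicy that satisfies the question's constraint (2 of 4 instances always in service) might look like the following; the resource and launch configuration names are illustrative:

```yaml
Resources:
  WebServerGroup:                       # illustrative resource name
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: '2'
      MaxSize: '4'
      DesiredCapacity: '4'
      LaunchConfigurationName: !Ref WebLaunchConfig   # assumed launch configuration
    UpdatePolicy:
      AutoScalingRollingUpdate:
        MinInstancesInService: 2        # never drop below 2 running instances
        MaxBatchSize: 2                 # replace at most 2 instances per batch
        PauseTime: PT5M                 # wait 5 minutes between batches
```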

A company is building a solution for storing files containing Personally Identifiable Information (PII) on AWS. Requirements state:

All data must be encrypted at rest and in transit.

All data must be replicated in at least two locations that are at least 500 miles apart.

Which solution meets these requirements?

A. Create primary and secondary Amazon S3 buckets in two separate Availability Zones that are at least 500 miles apart. Use a bucket policy to enforce access to the buckets only through HTTPS. Use a bucket policy to enforce Amazon S3 SSE-C on all objects uploaded to the bucket. Configure cross-region replication between the two buckets.
B. Create primary and secondary Amazon S3 buckets in two separate AWS Regions that are at least 500 miles apart. Use a bucket policy to enforce access to the buckets only through HTTPS. Use a bucket policy to enforce S3-Managed Keys (SSE-S3) on all objects uploaded to the bucket. Configure cross-region replication between the two buckets.
C. Create primary and secondary Amazon S3 buckets in two separate AWS Regions that are at least 500 miles apart. Use an IAM role to enforce access to the buckets only through HTTPS. Use a bucket policy to enforce Amazon S3-Managed Keys (SSE-S3) on all objects uploaded to the bucket. Configure cross-region replication between the two buckets.
D. Create primary and secondary Amazon S3 buckets in two separate Availability Zones that are at least 500 miles apart. Use a bucket policy to enforce access to the buckets only through HTTPS. Use a bucket policy to enforce AWS KMS encryption on all objects uploaded to the bucket. Configure cross-region replication between the two buckets. Create a KMS Customer Master Key (CMK) in the primary region for encrypting objects.
Suggested answer: B
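A bucket policy combining the two enforcement statements described in option B might look like the following sketch; the bucket name is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-pii-bucket",
        "arn:aws:s3:::example-pii-bucket/*"
      ],
      "Condition": {"Bool": {"aws:SecureTransport": "false"}}
    },
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-pii-bucket/*",
      "Condition": {"StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}}
    }
  ]
}
```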

A DevOps Engineer must track the health of a stateless RESTful service sitting behind a Classic Load Balancer. New application revisions are deployed through a CI/CD pipeline. If the service's latency increases beyond a defined threshold, the deployment should be stopped until the service has recovered.

Which of the following methods allow for the QUICKEST detection time?

A. Use Amazon CloudWatch metrics provided by Elastic Load Balancing to calculate average latency. Alarm and stop deployment when latency increases beyond the defined threshold.
B. Use AWS Lambda and Elastic Load Balancing access logs to detect average latency. Alarm and stop deployment when latency increases beyond the defined threshold.
C. Use AWS CodeDeploy's MinimumHealthyHosts setting to define thresholds for rolling back deployments. If these thresholds are breached, roll back the deployment.
D. Use Metric Filters to parse application logs in Amazon CloudWatch Logs. Create a filter for latency. Alarm and stop deployment when latency increases beyond the defined threshold.
Suggested answer: C
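For option A, such an alarm could be created on the Classic Load Balancer's built-in Latency metric via the AWS CLI; the load balancer name, threshold, and SNS topic ARN below are placeholders:

```shell
aws cloudwatch put-metric-alarm \
  --alarm-name rest-service-latency \
  --namespace AWS/ELB \
  --metric-name Latency \
  --dimensions Name=LoadBalancerName,Value=my-classic-elb \
  --statistic Average \
  --period 60 \
  --evaluation-periods 1 \
  --threshold 0.5 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:stop-deployment-topic
```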

A company has multiple environments that run applications on Amazon EC2 instances. The company wants to track costs and has defined a new rule that states that all production EC2 instances must be tagged with a CostCenter tag. A DevOps engineer has created a tag policy to validate the use of the CostCenter tag, has activated the option to prevent noncompliant tagging operations for this tag, and has attached the policy to the production OU in AWS Organizations. The DevOps engineer generates a compliance report for the entire organization and ensures that all the deployed instances have the correct tags configured. The DevOps engineer also verifies that the CostCenter tag cannot be removed from an EC2 instance that runs in one of the production accounts.

After some time, the DevOps engineer notices that several EC2 instances have been launched in the production accounts without the configuration of the CostCenter tag. What should the DevOps engineer do to ensure that all production EC2 instances are launched with the CostCenter tag configured?

A. Attach the tag policy to the organization root to ensure that the policy applies to all EC2 instances.
B. Create an SCP that requires the CostCenter tag during the launch of EC2 instances.
C. In the AWS Billing and Cost Management console of the management account, activate the CostCenter tag as a cost allocation tag.
D. Activate the AWS Config required-tags managed rule in all production accounts. Ensure that the rule evaluates the CostCenter tag.
Suggested answer: C

Explanation:

After you activate cost allocation tags, AWS uses the cost allocation tags to organize your resource costs on your cost allocation report, to make it easier for you to categorize and track your AWS costs.

Reference: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
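For context, an SCP like the one described in option B would deny EC2 instance launches whose request does not carry the tag. A sketch using the standard aws:RequestTag condition key:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RequireCostCenterOnRunInstances",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {"Null": {"aws:RequestTag/CostCenter": "true"}}
    }
  ]
}
```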

Ansible provides some methods for controlling how or when a task is run. Which of the following is a valid method for controlling a task with a loop?

A. - with:
B. - with_items:
C. - only_when:
D. - items:
Suggested answer: B

Explanation:

Ansible provides two methods for controlling tasks: loops and conditionals. The "with_items" context allows a task to loop through a list of items, while the "when" context allows a conditional requirement to be met before the task runs. Both can be used at the same time.

Reference: http://docs.ansible.com/ansible/playbooks_conditionals.html#loops-and-conditionals
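A minimal task showing both controls together might look like this; the package list is illustrative:

```yaml
- name: Install web packages            # loops over the list below
  yum:
    name: "{{ item }}"
    state: present
  with_items:
    - httpd
    - git
  when: ansible_os_family == "RedHat"   # conditional: only runs on RHEL-family hosts
```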

You want to set up the CloudTrail Processing Library to log your bucket operations. Which command will build a .jar file from the CloudTrail Processing Library source code?

A. mvn javac mvn -install processor
B. jar install processor
C. build jar -Dgpg.processor
D. mvn clean install -Dgpg.skip=true
Suggested answer: D

Explanation:

The CloudTrail Processing Library is a Java library that provides an easy way to process AWS CloudTrail logs in a fault-tolerant, scalable, and flexible way. To set up the CloudTrail Processing Library, you first need to download the CloudTrail Processing Library source from GitHub. You can then create the .jar file using this command.

Reference: http://docs.aws.amazon.com/awscloudtrail/latest/userguide/use-the-cloudtrail-processing-library.html
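The full build sequence is roughly the following; the -Dgpg.skip=true flag skips GPG signing of the artifacts, which would otherwise fail without a local signing key (the repository URL is as published on GitHub):

```shell
git clone https://github.com/aws/aws-cloudtrail-processing-library.git
cd aws-cloudtrail-processing-library
mvn clean install -Dgpg.skip=true   # builds the .jar into target/ and the local Maven repo
```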

A company has a single Developer writing code for an automated deployment pipeline. The Developer is storing source code in an Amazon S3 bucket for each project. The company wants to add more Developers to the team but is concerned about code conflicts and lost work. The company also wants to build a test environment to deploy newer versions of code for testing and allow Developers to automatically deploy to both environments when code is changed in the repository.

What is the MOST efficient way to meet these requirements?

A. Create an AWS CodeCommit repository for each project, use the master branch for production code, and create a testing branch for code deployed to testing. Use feature branches to develop new features and pull requests to merge code to testing and master branches.
B. Create another S3 bucket for each project for testing code, and use an AWS Lambda function to promote code changes between testing and production buckets. Enable versioning on all buckets to prevent code conflicts.
C. Create an AWS CodeCommit repository for each project, and use the master branch for production and test code with different deployment pipelines for each environment. Use feature branches to develop new features.
D. Enable versioning and branching on each S3 bucket, use the master branch for production code, and create a testing branch for code deployed to testing. Have Developers use each branch for developing in each environment.
Suggested answer: A
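The branching model in option A can be sketched with plain git (a CodeCommit repository is a standard git remote); the branch and file names here are illustrative, and the pull-request merge is simulated locally:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

echo "v1" > app.txt
git add app.txt
git commit -qm "production code on the default branch"

git checkout -qb testing                # branch deployed to the test environment
git checkout -qb feature/new-login      # feature branch for isolated development
echo "v2" > app.txt
git commit -qam "implement new login"

git checkout -q testing                 # the pull-request merge, done locally here
git merge -q --no-ff -m "merge feature/new-login into testing" feature/new-login
cat app.txt
```

Pushing the testing and master branches to CodeCommit would then let a pipeline deploy each branch to its environment automatically.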

Your application requires a fault-tolerant, low-latency, and repeatable method to load configuration files via Auto Scaling when Amazon Elastic Compute Cloud (EC2) instances launch. Which approach should you use to satisfy these requirements?

A. Securely copy the content from a running Amazon EC2 instance.
B. Use an Amazon EC2 UserData script to copy the configurations from an Amazon Simple Storage Service (S3) bucket.
C. Use a script via cfn-init to pull content hosted in an Amazon ElastiCache cluster.
D. Use a script via cfn-init to pull content hosted on your on-premises server.
E. Use an Amazon EC2 UserData script to pull content hosted on your on-premises server.
Suggested answer: B
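A minimal user data script for option B might look like this; the bucket, key, and service name are placeholders, and the instance profile must grant s3:GetObject on the bucket:

```shell
#!/bin/bash
# Runs once at launch; copies the instance's configuration from S3
aws s3 cp s3://example-config-bucket/app/config.json /etc/myapp/config.json
systemctl restart myapp    # assumed service that consumes the configuration
```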

Which of these is not a CloudFormation Helper Script?

A. cfn-signal
B. cfn-hup
C. cfn-request
D. cfn-get-metadata
Suggested answer: C

Explanation:

This is the complete list of CloudFormation helper scripts: cfn-init, cfn-signal, cfn-get-metadata, and cfn-hup.

Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-helper-scripts-reference.html
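As a sketch of how these scripts typically appear together in a template's instance UserData (the resource name WebServerInstance is illustrative):

```yaml
UserData:
  Fn::Base64: !Sub |
    #!/bin/bash -xe
    # cfn-init: apply the configuration defined in the resource's Metadata
    /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} \
      --resource WebServerInstance --region ${AWS::Region}
    # cfn-signal: report success/failure of cfn-init back to CloudFormation
    /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} \
      --resource WebServerInstance --region ${AWS::Region}
    # cfn-hup can additionally be configured to re-run cfn-init when metadata changes
```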

Total 557 questions