
Amazon DOP-C01 Practice Test - Questions Answers, Page 22


Question 211


What is the maximum supported single-volume throughput on EBS?

A. 320 MiB/s
B. 160 MiB/s
C. 40 MiB/s
D. 640 MiB/s
Suggested answer: A

Explanation:

The ceiling throughput for a Provisioned IOPS (PIOPS) EBS volume is 320 MiB/s.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html


Question 212


You have decided that you need to change the instance type of your production instances, which are running as part of an Auto Scaling group. The entire architecture is deployed using a CloudFormation template. You currently have 4 instances in production. You cannot have any interruption in service and need to ensure that 2 instances are always running during the update. Which of the options listed below can be used for this?

A. AutoScalingRollingUpdate
B. AutoScalingScheduledAction
C. AutoScalingReplacingUpdate
D. AutoScalingIntegrationUpdate
Suggested answer: A

Explanation:

The AWS::AutoScaling::AutoScalingGroup resource supports an UpdatePolicy attribute, which defines how an Auto Scaling group resource is updated when the CloudFormation stack is updated. A common approach to updating an Auto Scaling group is a rolling update, which is done by specifying the AutoScalingRollingUpdate policy. This retains the same Auto Scaling group and replaces old instances with new ones, according to the parameters specified. For more information on Auto Scaling updates, please refer to the link below.

Reference:

https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-group-rolling-updates/
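As a sketch of the approach, an UpdatePolicy of roughly this shape (resource names and sizes are illustrative, not taken from the question) would keep 2 instances in service while the group of 4 is replaced:

```yaml
Resources:
  WebServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: '2'
      MaxSize: '4'
      DesiredCapacity: '4'
      LaunchConfigurationName: !Ref WebLaunchConfig   # hypothetical launch configuration
    UpdatePolicy:
      AutoScalingRollingUpdate:
        MinInstancesInService: '2'   # never drop below 2 running instances
        MaxBatchSize: '2'            # replace at most 2 instances at a time
        PauseTime: PT5M              # wait between batches
```

Changing the instance type in the referenced launch configuration and updating the stack would then trigger the rolling replacement.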


Question 213


A company is building a solution for storing files containing Personally Identifiable Information (PII) on AWS. Requirements state:

All data must be encrypted at rest and in transit.

All data must be replicated in at least two locations that are at least 500 miles apart.

Which solution meets these requirements?

A. Create primary and secondary Amazon S3 buckets in two separate Availability Zones that are at least 500 miles apart. Use a bucket policy to enforce access to the buckets only through HTTPS. Use a bucket policy to enforce Amazon S3 SSE-C on all objects uploaded to the bucket. Configure cross-region replication between the two buckets.
B. Create primary and secondary Amazon S3 buckets in two separate AWS Regions that are at least 500 miles apart. Use a bucket policy to enforce access to the buckets only through HTTPS. Use a bucket policy to enforce S3-Managed Keys (SSE-S3) on all objects uploaded to the bucket. Configure cross-region replication between the two buckets.
C. Create primary and secondary Amazon S3 buckets in two separate AWS Regions that are at least 500 miles apart. Use an IAM role to enforce access to the buckets only through HTTPS. Use a bucket policy to enforce Amazon S3-Managed Keys (SSE-S3) on all objects uploaded to the bucket. Configure cross-region replication between the two buckets.
D. Create primary and secondary Amazon S3 buckets in two separate Availability Zones that are at least 500 miles apart. Use a bucket policy to enforce access to the buckets only through HTTPS. Use a bucket policy to enforce AWS KMS encryption on all objects uploaded to the bucket. Configure cross-region replication between the two buckets. Create a KMS Customer Master Key (CMK) in the primary region for encrypting objects.
Suggested answer: B
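For illustration, a bucket policy along these lines (the bucket name is hypothetical) can enforce both requirements — denying non-HTTPS access via the aws:SecureTransport condition key and denying uploads that do not specify SSE-S3 via s3:x-amz-server-side-encryption:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-pii-bucket",
        "arn:aws:s3:::example-pii-bucket/*"
      ],
      "Condition": { "Bool": { "aws:SecureTransport": "false" } }
    },
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-pii-bucket/*",
      "Condition": { "StringNotEquals": { "s3:x-amz-server-side-encryption": "AES256" } }
    }
  ]
}
```

A matching policy on the secondary bucket, plus cross-region replication, covers encryption in transit, encryption at rest, and the two-location requirement.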

Question 214


A DevOps Engineer must track the health of a stateless RESTful service sitting behind a Classic Load Balancer. New application revisions are deployed through a CI/CD pipeline. If the service's latency increases beyond a defined threshold, the deployment should be stopped until the service has recovered.

Which of the following methods allow for the QUICKEST detection time?

A. Use Amazon CloudWatch metrics provided by Elastic Load Balancing to calculate average latency. Alarm and stop deployment when latency increases beyond the defined threshold.
B. Use AWS Lambda and Elastic Load Balancing access logs to detect average latency. Alarm and stop deployment when latency increases beyond the defined threshold.
C. Use AWS CodeDeploy's MinimumHealthyHosts setting to define thresholds for rolling back deployments. If these thresholds are breached, roll back the deployment.
D. Use Metric Filters to parse application logs in Amazon CloudWatch Logs. Create a filter for latency. Alarm and stop deployment when latency increases beyond the defined threshold.
Suggested answer: C

Question 215


A company has multiple environments that run applications on Amazon EC2 instances. The company wants to track costs and has defined a new rule that states that all production EC2 instances must be tagged with a CostCenter tag. A DevOps engineer has created a tag policy to validate the use of the CostCenter tag, has activated the option to prevent noncompliant tagging operations for this tag, and has attached the policy to the production OU in AWS Organizations. The DevOps engineer generates a compliance report for the entire organization and ensures that all the deployed instances have the correct tags configured. The DevOps engineer also verifies that the CostCenter tag cannot be removed from an EC2 instance that runs in one of the production accounts.

After some time, the DevOps engineer notices that several EC2 instances have been launched in the production accounts without the configuration of the CostCenter tag. What should the DevOps engineer do to ensure that all production EC2 instances are launched with the CostCenter tag configured?

A. Attach the tag policy to the organization root to ensure that the policy applies to all EC2 instances.
B. Create an SCP that requires the CostCenter tag during the launch of EC2 instances.
C. In the AWS Billing and Cost Management console of the management account, activate the CostCenter tag as a cost allocation tag.
D. Activate the AWS Config required-tags managed rule in all production accounts. Ensure that the rule evaluates the CostCenter tag.
Suggested answer: C

Explanation:

After you activate cost allocation tags, AWS uses the cost allocation tags to organize your resource costs on your cost allocation report, to make it easier for you to categorize and track your AWS costs.

Reference: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
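For context, the tag policy the engineer attached to the production OU might look roughly like the following sketch (the exact policy is not given in the question, so the enforced_for scope here is an assumption):

```json
{
  "tags": {
    "costcenter": {
      "tag_key": { "@@assign": "CostCenter" },
      "enforced_for": { "@@assign": [ "ec2:instance" ] }
    }
  }
}
```

The enforced_for operator is what prevents noncompliant tagging operations on EC2 instances that already carry the tag.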


Question 216


Ansible provides some methods for controlling how or when a task is run. Which of the following is a valid method for controlling a task with a loop?

A. - with:
B. - with_items:
C. - only_when:
D. - items:
Suggested answer: B

Explanation:

Ansible provides two methods for controlling tasks: loops and conditionals. The 'with_items' context allows a task to loop through a list of items, while the 'when' context allows a conditional requirement to be met for the task to run. Both can be used at the same time.

Reference: http://docs.ansible.com/ansible/playbooks_conditionals.html#loops-and-conditionals
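A minimal playbook sketch showing both contexts together (the host group and package names are illustrative):

```yaml
- hosts: webservers
  become: true
  tasks:
    - name: Install each package in the list, one iteration per item
      yum:
        name: "{{ item }}"        # 'item' holds the current loop value
        state: present
      with_items:
        - httpd
        - mod_ssl
      when: ansible_os_family == "RedHat"   # conditional gate on the whole task
```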


Question 217


You want to set up the CloudTrail Processing Library to log your bucket operations. Which command will build a .jar file from the CloudTrail Processing Library source code?

A. mvn javac mvn -install processor
B. jar install processor
C. build jar -Dgpg.processor
D. mvn clean install -Dgpg.skip=true
Suggested answer: D

Explanation:

The CloudTrail Processing Library is a Java library that provides an easy way to process AWS CloudTrail logs in a fault-tolerant, scalable, and flexible way. To set it up, first download the CloudTrail Processing Library source from GitHub; you can then create the .jar file using this command.

Reference: http://docs.aws.amazon.com/awscloudtrail/latest/userguide/use-the-cloudtrail-processing-library.html


Question 218


A company has a single Developer writing code for an automated deployment pipeline. The Developer is storing source code in an Amazon S3 bucket for each project. The company wants to add more Developers to the team but is concerned about code conflicts and lost work. The company also wants to build a test environment to deploy newer versions of code for testing and allow Developers to automatically deploy to both environments when code is changed in the repository.

What is the MOST efficient way to meet these requirements?

A. Create an AWS CodeCommit repository for each project, use the master branch for production code, and create a testing branch for code deployed to testing. Use feature branches to develop new features and pull requests to merge code to testing and master branches.
B. Create another S3 bucket for each project for testing code, and use an AWS Lambda function to promote code changes between testing and production buckets. Enable versioning on all buckets to prevent code conflicts.
C. Create an AWS CodeCommit repository for each project, and use the master branch for production and test code with different deployment pipelines for each environment. Use feature branches to develop new features.
D. Enable versioning and branching on each S3 bucket, use the master branch for production code, and create a testing branch for code deployed to testing. Have Developers use each branch for developing in each environment.
Suggested answer: A

Question 219


Your application requires a fault-tolerant, low-latency, and repeatable method to load configuration files when Amazon Elastic Compute Cloud (EC2) instances launch via Auto Scaling. Which approach should you use to satisfy these requirements?

A. Securely copy the content from a running Amazon EC2 instance.
B. Use an Amazon EC2 UserData script to copy the configurations from an Amazon Simple Storage Service (S3) bucket.
C. Use a script via cfn-init to pull content hosted in an Amazon ElastiCache cluster.
D. Use a script via cfn-init to pull content hosted on your on-premises server.
E. Use an Amazon EC2 UserData script to pull content hosted on your on-premises server.
Suggested answer: B
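As an illustrative sketch of the S3 approach (the bucket name, AMI, and instance profile are hypothetical), a launch configuration can copy the files in UserData at boot:

```yaml
Resources:
  AppLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-12345678                       # placeholder AMI
      InstanceType: t3.micro
      IamInstanceProfile: !Ref ConfigReadProfile  # profile whose role allows s3:GetObject on the bucket
      UserData:
        Fn::Base64: |
          #!/bin/bash
          # Pull configuration from S3 on every instance launch
          aws s3 cp s3://example-config-bucket/app.conf /etc/app/app.conf
```

Because S3 is highly available and the script runs identically on every launch, this meets the fault-tolerance, low-latency, and repeatability requirements.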

Question 220


Which of these is not a CloudFormation Helper Script?

A. cfn-signal
B. cfn-hup
C. cfn-request
D. cfn-get-metadata
Suggested answer: C

Explanation:

This is the complete list of CloudFormation helper scripts: cfn-init, cfn-signal, cfn-get-metadata, cfn-hup.

Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-helper-scripts-reference.html
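For context, a common pattern combines two of these scripts in an instance's UserData: cfn-init applies the configuration from the resource's metadata, and cfn-signal reports the result back to the stack (the resource and timeout shown are illustrative):

```yaml
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    CreationPolicy:
      ResourceSignal:
        Timeout: PT10M           # fail the stack if no signal arrives in 10 minutes
    Properties:
      ImageId: ami-12345678      # placeholder AMI
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          # Apply configuration defined in this resource's metadata
          /opt/aws/bin/cfn-init -s ${AWS::StackName} -r WebServer --region ${AWS::Region}
          # Signal success or failure of cfn-init back to CloudFormation
          /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource WebServer --region ${AWS::Region}
```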

Total 557 questions