
Amazon DOP-C01 Practice Test - Questions and Answers, Page 7


You are designing an enterprise data storage system. Your data management software requires mountable disks and a real filesystem, so you cannot use S3 for storage. You need persistence, so you will be using AWS EBS volumes for your system. The system needs the lowest-cost storage possible; access is infrequent, not high-throughput, and mostly sequential reads. Which is the most appropriate EBS volume type for this scenario?

A. gp1
B. io1
C. standard
D. gp2
Suggested answer: C

Explanation:

Standard (Magnetic) volumes are best for cold workloads where data is accessed infrequently, or for scenarios where the lowest storage cost is important.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
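
For context, a minimal boto3 sketch of provisioning such a volume; the region, Availability Zone, size, instance ID, and device name are assumptions for illustration:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a magnetic ("standard") volume -- the lowest-cost EBS type,
# suited to infrequent, mostly sequential access.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # must match the target instance's AZ
    Size=500,                       # GiB
    VolumeType="standard",
)

# Wait until the volume is ready, then attach it so the instance can
# format it with a real filesystem and mount it.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
    Device="/dev/sdf",
)
```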

You meet once per month with your operations team to review the past month's data. During the meeting, you realize that 3 weeks ago, your monitoring system, which pings over HTTP from outside AWS, recorded a large spike in latency on your three-tier web service API. You use DynamoDB for the database layer; ELB, EBS, and EC2 for the business logic tier; and SQS, ELB, and EC2 for the presentation layer. Which of the following techniques will NOT help you figure out what happened?

A. Check your CloudTrail log history around the spike's time for any API calls that caused slowness.
B. Review CloudWatch Metrics graphs to determine which component(s) slowed the system down.
C. Review your ELB access logs in S3 to see if any ELBs in your system saw the latency.
D. Analyze your logs to detect bursts in traffic at that time.
Suggested answer: B

Explanation:

CloudWatch metrics data is retained for only 2 weeks, so a spike that occurred 3 weeks ago will no longer appear in CloudWatch Metrics graphs. If you want to store metrics data beyond that duration, you can retrieve it using the GetMetricStatistics API, as well as a number of applications and tools offered by AWS partners.

Reference: https://aws.amazon.com/cloudwatch/faqs/
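
As a sketch of pulling (and thereby preserving) metrics before the retention window expires; the namespace, metric, and load balancer name below are assumptions:

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.now(timezone.utc)

# Pull ELB latency for the full two-week retention window so it can be
# archived before it ages out.  The load balancer name is hypothetical.
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/ELB",
    MetricName="Latency",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "my-elb"}],
    StartTime=now - timedelta(days=14),
    EndTime=now,
    Period=3600,                       # one datapoint per hour
    Statistics=["Average", "Maximum"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], point["Maximum"])
```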

Which command will start an assessment run?

A. aws inspector start-assessment-run --assessment-template-arn
B. aws inspector start-assessment-run --assessment-run-name examplerun --assessment-target
C. aws inspector start-assessment-run --assessment-run-name examplerun
D. aws inspector start-assessment-run --assessment-run-name examplerun --assessment-duration
Suggested answer: A

Explanation:

The start-assessment-run command requires --assessment-template-arn; all other parameters are optional. Synopsis: start-assessment-run --assessment-template-arn <value> [--assessment-run-name <value>] [--cli-input-json <value>] [--generate-cli-skeleton]

Reference: http://docs.aws.amazon.com/cli/latest/reference/inspector/start-assessment-run.html
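
The same call is available in boto3; a minimal sketch (the template ARN below is hypothetical):

```python
import boto3

inspector = boto3.client("inspector", region_name="us-east-1")

# start_assessment_run mirrors the CLI: the template ARN is required,
# the run name is optional.
run = inspector.start_assessment_run(
    assessmentTemplateArn=(
        "arn:aws:inspector:us-east-1:123456789012:target/0-abcd1234/template/0-efgh5678"
    ),
    assessmentRunName="examplerun",
)
print(run["assessmentRunArn"])
```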

Which tool will Ansible not use, even if available, to gather facts?

A. facter
B. lsb_release
C. Ansible setup module
D. ohai
Suggested answer: B

Explanation:

Ansible will use its own 'setup' module to gather facts for the local system. Additionally, if ohai or facter are installed, those will also be used, and all variables will be prefixed with 'ohai_' or 'facter_' respectively. lsb_release is a Linux tool for determining distribution information, not one Ansible uses for fact gathering.

Reference: http://docs.ansible.com/ansible/setup_module.html

An Information Security policy requires that all publicly accessible systems be patched with critical OS security patches within 24 hours of a patch release. All instances are tagged with the Patch Group key set to 0. Two new AWS Systems Manager patch baselines for Windows and Red Hat Enterprise Linux (RHEL) with zero-day delay for security patches of critical severity were created with an auto-approval rule. Patch Group 0 has been associated with the new patch baselines.

Which two steps will automate patch compliance and reporting? (Choose two.)

A. Create an AWS Systems Manager Maintenance Window and add a target with Patch Group 0. Add a task that runs the AWS-InstallWindowsUpdates document with a daily schedule.
B. Create an AWS Systems Manager Maintenance Window with a daily schedule and add a target with Patch Group 0. Add a task that runs the AWS-RunPatchBaseline document with the Install action.
C. Create an AWS Systems Manager State Manager configuration. Associate the AWS-RunPatchBaseline task with the configuration and add a target with Patch Group 0.
D. Create an AWS Systems Manager Maintenance Window and add a target with Patch Group 0. Add a task that runs the AWS-ApplyPatchBaseline document with a daily schedule.
E. Use the AWS Systems Manager Run Command to associate the AWS-ApplyPatchBaseline document with instances tagged with Patch Group 0.
Suggested answer: B, C

Explanation:

Reference:

https://aws.amazon.com/blogs/mt/patching-your-windows-ec2-instances-using-aws-systems-manager-patch-manager/
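
For illustration, a boto3 sketch of the Maintenance Window half (option B); the schedule, duration, and concurrency values are assumptions:

```python
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# Daily maintenance window (the cron expression and sizing are assumptions).
window = ssm.create_maintenance_window(
    Name="patch-group-0-daily",
    Schedule="cron(0 2 * * ? *)",   # every day at 02:00 UTC
    Duration=3,                      # window length in hours
    Cutoff=1,                        # stop starting tasks 1 hour before the end
    AllowUnassociatedTargets=False,
)
window_id = window["WindowId"]

# Target every instance tagged with Patch Group = 0.
target = ssm.register_target_with_maintenance_window(
    WindowId=window_id,
    ResourceType="INSTANCE",
    Targets=[{"Key": "tag:Patch Group", "Values": ["0"]}],
)

# Run AWS-RunPatchBaseline with the Install action against that target.
ssm.register_task_with_maintenance_window(
    WindowId=window_id,
    Targets=[{"Key": "WindowTargetIds", "Values": [target["WindowTargetId"]]}],
    TaskArn="AWS-RunPatchBaseline",
    TaskType="RUN_COMMAND",
    MaxConcurrency="10%",
    MaxErrors="5%",
    TaskInvocationParameters={
        "RunCommand": {"Parameters": {"Operation": ["Install"]}}
    },
)
```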

Consider the following services: A) EC2 + ELB + Auto Scaling, B) AWS Lambda, C) RDS. What is the order of most-to-least rapidly-scaling (fastest to scale first)?

A. B, A, C
B. C, B, A
C. C, A, B
D. A, C, B
Suggested answer: A

Explanation:

Lambda is designed to scale instantly. EC2 + ELB + Auto Scaling takes single-digit minutes to scale out. RDS takes at least 15 minutes, and may also apply OS patches or other pending updates as part of the change.

Reference: https://aws.amazon.com/lambda/faqs/

A company wants to use a grid system for a proprietary enterprise in-memory data store on top of AWS. This system can run on multiple server nodes in any Linux-based distribution. The system must be able to reconfigure the entire cluster every time a node is added or removed. When adding or removing nodes, an /etc/cluster/nodes.config file must be updated, listing the IP addresses of the current node members of that cluster. The company wants to automate the task of adding new nodes to a cluster. What can a DevOps Engineer do to meet these requirements?

A. Use AWS OpsWorks Stacks to layer the server nodes of that cluster. Create a Chef recipe that populates the content of the /etc/cluster/nodes.config file and restarts the service by using the current members of the layer. Assign that recipe to the Configure lifecycle event.
B. Put the file nodes.config in version control. Create an AWS CodeDeploy deployment configuration and deployment group based on an Amazon EC2 tag value for the cluster nodes. When adding a new node to the cluster, update the file with all tagged instances, and make a commit in version control. Deploy the new file and restart the services.
C. Create an Amazon S3 bucket and upload a version of the /etc/cluster/nodes.config file. Create a crontab script that will poll for that S3 file and download it frequently. Use a process manager, such as Monit or systemd, to restart the cluster services when it detects that the new file was modified. When adding a node to the cluster, edit the file's most recent members. Upload the new file to the S3 bucket.
D. Create a user data script that lists all members of the current security group of the cluster and automatically updates the /etc/cluster/nodes.config file whenever a new instance is added to the cluster.
Suggested answer: A

Management has reported an increase in the monthly bill from Amazon Web Services (AWS) and is extremely concerned about this increased cost. Management has asked you to determine the exact cause of the increase. After reviewing the billing report, you notice an increase in data transfer costs. How can you provide management with better insight into data transfer use?

A. Update your Amazon CloudWatch metrics to use five-second granularity, which will give better detailed metrics that can be combined with your billing data to pinpoint anomalies.
B. Use Amazon CloudWatch Logs to run a map-reduce on your logs to determine high usage and data transfer.
C. Deliver custom metrics to Amazon CloudWatch per application that break application data transfer down into multiple, more specific data points.
D. Using Amazon CloudWatch metrics, pull your Elastic Load Balancing outbound data transfer metrics monthly, and include them with your billing report to show which application is causing higher bandwidth usage.
Suggested answer: C
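
A minimal sketch of publishing such a custom metric with boto3; the namespace, dimensions, and value are hypothetical examples of how an application could break down its own data transfer:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Publish a per-application data-transfer metric.  Each application
# would report its own numbers under its own dimensions.
cloudwatch.put_metric_data(
    Namespace="MyCompany/DataTransfer",
    MetricData=[
        {
            "MetricName": "BytesTransferredOut",
            "Dimensions": [
                {"Name": "Application", "Value": "reporting-api"},
                {"Name": "Direction", "Value": "outbound"},
            ],
            "Unit": "Bytes",
            "Value": 1_073_741_824,  # e.g., 1 GiB sent this period
        }
    ],
)
```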

You are building a Ruby on Rails application for internal, non-production use, with MySQL as its database. You want developers without much AWS experience to be able to deploy new code with a single command-line push. You also want to set this up as simply as possible. Which tool is ideal for this setup?

A. AWS CloudFormation
B. AWS OpsWorks
C. AWS ELB + EC2 with CLI Push
D. AWS Elastic Beanstalk
Suggested answer: D

Explanation:

Elastic Beanstalk's primary mode of operation exactly supports this use case out of the box. It is simpler than all the other options for this question. With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS cloud without worrying about the infrastructure that runs those applications. AWS Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.

Reference: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Ruby_rails.html
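
The single-command push itself would typically be the EB CLI's eb deploy; as a sketch of the equivalent API calls with boto3, assuming an application bundle has already been uploaded to S3 (all names below are hypothetical):

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Register a new application version from a bundle already in S3.
eb.create_application_version(
    ApplicationName="rails-internal-app",
    VersionLabel="v42",
    SourceBundle={"S3Bucket": "my-deploy-bucket", "S3Key": "app-v42.zip"},
)

# Point the running environment at the new version; Elastic Beanstalk
# handles provisioning, load balancing, scaling, and health monitoring.
eb.update_environment(
    EnvironmentName="rails-internal-env",
    VersionLabel="v42",
)
```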

A production account has a requirement that any Amazon EC2 instance that has been logged into manually must be terminated within 24 hours. All applications in the production account use Auto Scaling groups with the Amazon CloudWatch Logs agent configured. How can this process be automated?

A. Create a CloudWatch Logs subscription to an AWS Step Functions application. Configure the function to add a tag to the EC2 instance that produced the login event and mark the instance to be decommissioned. Then create a CloudWatch Events rule to trigger a second AWS Lambda function once a day that will terminate all instances with this tag.
B. Create a CloudWatch alarm that will trigger on the login event. Send the notification to an Amazon SNS topic that the Operations team is subscribed to, and have them terminate the EC2 instance within 24 hours.
C. Create a CloudWatch alarm that will trigger on the login event. Configure the alarm to send to an Amazon SQS queue. Use a group of worker instances to process messages from the queue, which then schedules the Amazon CloudWatch Events rule to trigger.
D. Create a CloudWatch Logs subscription to an AWS Lambda function. Configure the function to add a tag to the EC2 instance that produced the login event and mark the instance to be decommissioned. Create a CloudWatch Events rule to trigger a daily Lambda function that terminates all instances with this tag.
Suggested answer: D
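
As a sketch of the daily cleanup half of option D; the tag key and value are assumptions, and the first Lambda function (fed by the CloudWatch Logs subscription) would be responsible for setting that tag:

```python
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    """Daily Lambda: terminate every instance tagged for decommissioning."""
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:decommission", "Values": ["true"]},  # hypothetical tag
            {"Name": "instance-state-name", "Values": ["running", "stopped"]},
        ]
    )["Reservations"]

    instance_ids = [
        instance["InstanceId"]
        for reservation in reservations
        for instance in reservation["Instances"]
    ]
    if instance_ids:
        ec2.terminate_instances(InstanceIds=instance_ids)
    return {"terminated": instance_ids}
```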