Amazon DOP-C01 Practice Test - Questions Answers, Page 54
What is the scope of an EBS volume?

A. VPC
B. Region
C. Placement Group
D. Availability Zone
Suggested answer: D

Explanation:

An Amazon EBS volume is tied to its Availability Zone and can be attached only to instances in the same Availability Zone.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/resources.html

A company hosts its staging website using an Amazon EC2 instance backed with Amazon EBS storage. The company wants to recover quickly with minimal data losses in the event of network connectivity issues or power failures on the EC2 instance. Which solution will meet these requirements?

A. Add the instance to an EC2 Auto Scaling group with the minimum, maximum, and desired capacity set to 1.
B. Add the instance to an EC2 Auto Scaling group with a lifecycle hook to detach the EBS volume when the EC2 instance shuts down or terminates.
C. Create an Amazon CloudWatch alarm for the StatusCheckFailed_System metric and select the EC2 action to recover the instance.
D. Create an Amazon CloudWatch alarm for the StatusCheckFailed_Instance metric and select the EC2 action to reboot the instance.
Suggested answer: A

Explanation:

Reference: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-maintain-instance-levels.html
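The suggested answer relies on an Auto Scaling group that always replaces a failed instance with exactly one running copy. A minimal CloudFormation sketch of that configuration (resource and launch configuration names are illustrative, and the launch configuration is assumed to be defined elsewhere in the template):

```yaml
Resources:
  StagingAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      # Min = Max = Desired = 1 keeps a single instance running and
      # lets Auto Scaling replace it automatically if it fails.
      MinSize: '1'
      MaxSize: '1'
      DesiredCapacity: '1'
      LaunchConfigurationName: !Ref StagingLaunchConfig  # assumed to exist
      AvailabilityZones: !GetAZs ''
```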

You need to migrate 10 million records in one hour into DynamoDB. All records are 1.5KB in size. The data is evenly distributed across the partition key. How many write capacity units should you provision during this batch load?

A. 6667
B. 4166
C. 5556
D. 2778
Suggested answer: C

Explanation:

Write capacity is billed in 1 KB increments, rounded up, so each 1.5 KB record consumes 2 write units. Writing 10 million records therefore requires 20 million write units. Spread over the one-hour (3,600-second) window, that is 20,000,000 / 3,600 ≈ 5,555.6, which rounds up to 5,556 write capacity units.

Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ProvisionedThroughput.html
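The arithmetic above can be sketched as a small helper; the function name and parameters are illustrative, not part of any AWS SDK:

```python
import math

def write_capacity_units(num_records, record_size_kb, window_seconds):
    """Estimate provisioned WCUs: each write is billed in 1 KB units, rounded up."""
    units_per_write = math.ceil(record_size_kb)      # 1.5 KB -> 2 units per write
    total_units = num_records * units_per_write      # 10M records -> 20M units
    return math.ceil(total_units / window_seconds)   # spread over the load window

print(write_capacity_units(10_000_000, 1.5, 3600))  # 5556
```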

A retail company has adopted AWS OpsWorks for managing its deployments. In the last three months, the company has discovered that some production instances have been restarting without reason. Upon inspection of the AWS CloudTrail logs, a DevOps Engineer determined that those instances were restarted by OpsWorks. The Engineer now wants automated email notifications whenever OpsWorks restarts an instance when the instance is deemed unhealthy or unable to communicate with the service endpoint. How can the Engineer meet this requirement?

A. Create a Chef recipe to place a cron to run a custom script within the Amazon EC2 instances that sends an email to the team by using Amazon SES if the OpsWorks agent detects an instance failure.
B. Create an Amazon SNS topic and create a subscription for this topic that contains the destination email address. Create an Amazon CloudWatch rule: specify aws.opsworks as a source and specify auto-healing in the initiated_by details. Use the SNS topic as a target.
C. Create an Amazon SNS topic and create a subscription for this topic that contains the destination email address. Create an Amazon CloudWatch rule: specify aws.opsworks as a source and specify instance-replacement in the initiated_by details. Use the SNS topic as a target.
D. Create a subscription for this topic that contains the email address. Enable instance restart notifications within the OpsWorks layer and indicate the destination email address for the notification.
Suggested answer: B
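The rule described in option B matches OpsWorks events whose initiated_by detail is auto-healing. A sketch of such an event pattern, with field names taken from the option's wording:

```json
{
  "source": ["aws.opsworks"],
  "detail": {
    "initiated_by": ["auto-healing"]
  }
}
```

Attaching the SNS topic as the rule's target then emails subscribers whenever OpsWorks auto-heals (restarts) an unhealthy instance.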

An application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). A DevOps Engineer is using AWS CodeDeploy to release a new version. The deployment fails during the AllowTraffic lifecycle event, but a cause for the failure is not indicated in the deployment logs.

What would cause this?

A. The appspec.yml file contains an invalid script to execute in the AllowTraffic lifecycle hook.
B. The user who initiated the deployment does not have the necessary permissions to interact with the ALB.
C. The health checks specified for the ALB target group are misconfigured.
D. The CodeDeploy agent was not installed in the EC2 instances that are part of the ALB target group.
Suggested answer: C

Explanation:

Reference: https://docs.amazonaws.cn/en_us/codedeploy/latest/userguide/codedeploy-user.pdf (399)

You have been tasked with implementing an automated data backup solution for your application servers that run on Amazon EC2 with Amazon EBS volumes. You want to use a distributed data store for your backups to avoid single points of failure and to increase the durability of the data. Daily backups should be retained for 30 days so that you can restore data within an hour. How can you implement this through a script that a scheduling daemon runs daily on the application servers?

A. Write the script to call the ec2-create-volume API, tag the Amazon EBS volume with the current date-time group, and copy backup data to a second Amazon EBS volume. Use the ec2-describe-volumes API to enumerate existing backup volumes. Call the ec2-delete-volume API to prune backup volumes that are tagged with a date-time group older than 30 days.
B. Write the script to call the Amazon Glacier upload archive API, and tag the backup archive with the current date-time group. Use the list vaults API to enumerate existing backup archives. Call the delete vault API to prune backup archives that are tagged with a date-time group older than 30 days.
C. Write the script to call the ec2-create-snapshot API, and tag the Amazon EBS snapshot with the current date-time group. Use the ec2-describe-snapshot API to enumerate existing Amazon EBS snapshots. Call the ec2-delete-snapshot API to prune Amazon EBS snapshots that are tagged with a date-time group older than 30 days.
D. Write the script to call the ec2-create-volume API, tag the Amazon EBS volume with the current date-time group, and use the ec2-copy-snapshot API to back up data to the new Amazon EBS volume. Use the ec2-describe-snapshot API to enumerate existing backup volumes. Call the ec2-delete-snapshot API to prune backup Amazon EBS volumes that are tagged with a date-time group older than 30 days.
Suggested answer: C
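The retention logic in option C (prune snapshots whose date-time tag is older than 30 days) can be sketched as a pure function. The tag format and function name are assumptions for illustration; in a real script the (snapshot ID, tag) pairs would come from the describe-snapshots call and the returned IDs would be passed to delete-snapshot:

```python
import datetime

TAG_FORMAT = "%Y-%m-%dT%H:%M:%S"  # assumed format of the date-time group tag

def snapshots_to_prune(tagged_snapshots, now, retention_days=30):
    """Given (snapshot_id, date-time tag) pairs, return the IDs whose tag
    falls outside the retention window."""
    cutoff = now - datetime.timedelta(days=retention_days)
    return [sid for sid, tag in tagged_snapshots
            if datetime.datetime.strptime(tag, TAG_FORMAT) < cutoff]

now = datetime.datetime(2024, 6, 30, 12, 0, 0)
snaps = [("snap-old", "2024-05-01T12:00:00"), ("snap-new", "2024-06-20T12:00:00")]
print(snapshots_to_prune(snaps, now))  # ['snap-old']
```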

You run a clustered NoSQL database on AWS EC2 using AWS EBS. You need to reduce latency for database response times. Performance is the most important concern, not availability. You did not perform the initial setup; someone without much AWS knowledge did, so you are not sure whether everything was configured optimally. Which of the following is NOT likely to be an issue contributing to increased latency?

A. The EC2 instances are not EBS Optimized.
B. The database and requesting system are both in the wrong Availability Zone.
C. The EBS Volumes are not using PIOPS.
D. The database is not running in a placement group.
Suggested answer: B

Explanation:

For the highest possible performance, all instances in a clustered database like this one should be in a single Availability Zone in a placement group, using EBS optimized instances, and using PIOPS SSD EBS Volumes. The particular Availability Zone the system is running in should not be important, as long as it is the same as the requesting resources.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html

A company has containerized all of its in-house quality control applications. The company is running Jenkins on Amazon EC2, which requires patching and upgrading. The Compliance Officer has requested a DevOps Engineer begin encrypting build artifacts since they contain company intellectual property.

What should the DevOps Engineer do to accomplish this in the MOST maintainable manner?

A. Automate patching and upgrading using AWS Systems Manager on EC2 instances and encrypt Amazon EBS volumes by default.
B. Deploy Jenkins to an Amazon ECS cluster and copy build artifacts to an Amazon S3 bucket with default encryption enabled.
C. Leverage AWS CodePipeline with a build action and encrypt the artifacts using AWS Secrets Manager.
D. Use AWS CodeBuild with artifact encryption to replace the Jenkins instance running on Amazon EC2.
Suggested answer: D
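Option D replaces the self-managed Jenkins server with CodeBuild, whose S3 output artifacts are encrypted. A hedged CloudFormation sketch (the project name, bucket, role, and KMS key parameter are all hypothetical):

```yaml
Resources:
  QualityControlBuild:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: quality-control-build          # hypothetical project name
      ServiceRole: !GetAtt BuildRole.Arn   # role assumed to be defined elsewhere
      Source:
        Type: CODECOMMIT
        Location: https://git-codecommit.us-east-1.amazonaws.com/v1/repos/qc-app
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/standard:7.0
      Artifacts:
        Type: S3
        Location: qc-build-artifacts       # hypothetical artifact bucket
        EncryptionDisabled: false          # keep artifact encryption on
      EncryptionKey: !Ref ArtifactKmsKeyArn  # optional customer-managed key
```

Because CodeBuild is fully managed, there are no build servers left to patch or upgrade, which is what makes this the most maintainable option.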

Which of these configuration or deployment practices is a security risk for RDS?

A. Storing SQL function code in plaintext
B. Non-Multi-AZ RDS instance
C. Having RDS and EC2 instances exist in the same subnet
D. RDS in a public subnet
Suggested answer: D

Explanation:

Placing an RDS DB instance in a public subnet makes it directly addressable from the Internet, exposing the database to scanning and attack, which is a security risk. DB instances deployed within a VPC can be configured to be reachable from the Internet or only from EC2 instances inside the VPC; access should instead be restricted by the DB security groups and the port defined when the DB instance was created, so the instance is never reachable beyond the IP addresses and ports you explicitly allow.

Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html

A DevOps Engineer is setting up a container-based architecture. The Engineer has decided to use AWS CloudFormation to automatically provision an Amazon ECS cluster and an Amazon EC2 Auto Scaling group to launch the EC2 container instances. After the CloudFormation stack was created successfully, the Engineer noticed that, even though the ECS cluster and the EC2 instances were created successfully, the EC2 instances were associating with a different cluster.

How should the DevOps Engineer update the CloudFormation template to resolve this issue?

A. Reference the EC2 instances in the AWS::ECS::Cluster resource and reference the ECS cluster in the AWS::ECS::Service resource.
B. Reference the ECS cluster in the UserData property of the AWS::AutoScaling::LaunchConfiguration resource.
C. Reference the ECS cluster in the UserData property of the AWS::EC2::Instance resource.
D. Reference the ECS cluster in the AWS::CloudFormation::CustomResource resource to trigger an AWS Lambda function that registers the EC2 instances with the appropriate ECS cluster.
Suggested answer: B

Explanation:

Reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html
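Option B works because the ECS agent on a container instance joins whatever cluster is named in /etc/ecs/ecs.config, defaulting to "default" if nothing is set. A sketch of the launch configuration's UserData writing the cluster name there (AMI ID and resource names are illustrative; EcsCluster is assumed to be the template's AWS::ECS::Cluster resource):

```yaml
ContainerLaunchConfig:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    ImageId: ami-0123456789abcdef0   # hypothetical ECS-optimized AMI
    InstanceType: t3.medium
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        # Point the ECS agent at the stack's cluster instead of "default".
        echo ECS_CLUSTER=${EcsCluster} >> /etc/ecs/ecs.config
```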

Total 557 questions