Amazon DOP-C01 Practice Test - Questions Answers, Page 9

A company wants to adopt a methodology for handling security threats from leaked and compromised IAM access keys. The DevOps Engineer has been asked to automate the process of acting upon compromised access keys, which includes identifying users, revoking their permissions, and sending a notification to the Security team. Which of the following would achieve this goal?

A.
Use the AWS Trusted Advisor generated security report for access keys. Use Amazon EMR to run analytics on the report. Identify compromised IAM access keys and delete them. Use Amazon CloudWatch with an EMR Cluster State Change event to notify the Security team.
B.
Use AWS Trusted Advisor to identify compromised access keys. Create an Amazon CloudWatch Events rule with Trusted Advisor as the event source, and AWS Lambda and Amazon SNS as targets. Use AWS Lambda to delete compromised IAM access keys and Amazon SNS to notify the Security team.
C.
Use the AWS Trusted Advisor generated security report for access keys. Use AWS Lambda to scan through the report. Use the scan result inside AWS Lambda and delete compromised IAM access keys. Use Amazon SNS to notify the Security team.
D.
Use AWS Lambda with a third-party library to scan for compromised access keys. Use the scan result inside AWS Lambda and delete compromised IAM access keys. Create Amazon CloudWatch custom metrics for compromised keys. Create a CloudWatch alarm on the metrics to notify the Security team.
Suggested answer: B

Explanation:

Reference: https://d0.awsstatic.com/whitepapers/aws-security-whitepaper.pdf
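The core of answer B is a Lambda function invoked by the CloudWatch Events rule. A minimal sketch of the event-parsing step is shown below; the `check-item-detail` field names follow the Trusted Advisor "Exposed Access Keys" event format as commonly documented, but treat them as illustrative, and the commented IAM/SNS calls are what the real handler would perform:

```python
def extract_exposed_keys(event):
    """Pull the IAM user name and access key ID out of a Trusted Advisor
    'Exposed Access Keys' CloudWatch event. Field names are illustrative
    of the Trusted Advisor event format."""
    detail = event.get("detail", {})
    meta = detail.get("check-item-detail", {})
    return meta.get("User Name (IAM or Root)"), meta.get("Access Key ID")

# In the real Lambda handler you would then act on the result, e.g.:
#   iam = boto3.client("iam")
#   iam.delete_access_key(UserName=user, AccessKeyId=key)
#   sns = boto3.client("sns")
#   sns.publish(TopicArn=topic_arn, Message=f"Revoked exposed key {key}")
```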

For AWS CloudFormation, which is true?

A.
Custom resources using SNS have a default timeout of 3 minutes.
B.
Custom resources using SNS do not need a ServiceToken property.
C.
Custom resources using Lambda and Code.ZipFile allow inline nodejs resource composition.
D.
Custom resources using Lambda do not need a ServiceToken property.
Suggested answer: C

Explanation:

Code is a property of the AWS::Lambda::Function resource that enables you to specify the source code of an AWS Lambda (Lambda) function. You can point to a file in an Amazon Simple Storage Service (Amazon S3) bucket or specify your source code as inline text (for nodejs runtime environments only).

Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-customresources.html
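A minimal template fragment illustrating the `Code.ZipFile` inline pattern from answer C; the role reference and function body are placeholders:

```yaml
Resources:
  InlineFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs18.x
      Role: !GetAtt LambdaExecutionRole.Arn   # assumed to be defined elsewhere in the template
      Code:
        ZipFile: |
          // inline nodejs source, embedded directly in the template
          exports.handler = async (event) => {
            return { status: 'ok' };
          };
```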

After a data leakage incident that led to thousands of stolen user profiles, a compliance officer is demanding automatic, auditable security policy checks for all of the company's data stores, starting with public access of Amazon S3 buckets. Which solution will accomplish this with the LEAST amount of effort?

A.
Create a custom rule in AWS Config triggered by an S3 bucket configuration change that detects when the bucket policy or bucket ACL allows public read access. Use a remediation action to trigger an AWS Lambda function that automatically disables public access.
B.
Create a custom rule in AWS Config triggered by an S3 bucket configuration change that detects when the bucket policy or bucket ACL allows public read access. Trigger an AWS Lambda function that automatically disables public access.
C.
Use a managed rule in AWS Config triggered by an S3 bucket configuration change that detects when the bucket policy or bucket ACL allows public read access. Configure a remediation action that automatically disables public access.
D.
Use a managed rule in AWS Config triggered by an S3 bucket configuration change that detects when the bucket policy or bucket ACL allows public read access. Configure an AWS Lambda function that automatically disables public access.
Suggested answer: D

Explanation:

Reference: https://docs.aws.amazon.com/config/latest/developerguide/s3-bucket-public-read-prohibited.html
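The managed rule named in the reference can be enabled with a short template fragment like the sketch below; the logical name is arbitrary:

```yaml
Resources:
  PublicReadProhibited:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-public-read-prohibited
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_PUBLIC_READ_PROHIBITED   # AWS managed rule
      Scope:
        ComplianceResourceTypes:
          - AWS::S3::Bucket   # evaluate on bucket configuration changes
```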

You need to create a Route53 record automatically in CloudFormation when not running in production during all launches of a Template. How should you implement this?

A.
Use a Parameter for environment, and add a Condition on the Route53 Resource in the template to create the record only when environment is not production.
B.
Create two templates, one with the Route53 record value and one with a null value for the record. Use the one without it when deploying to production.
C.
Use a Parameter for environment, and add a Condition on the Route53 Resource in the template to create the record with a null string when environment is production.
D.
Create two templates, one with the Route53 record and one without it. Use the one without it when deploying to production.
Suggested answer: A

Explanation:

The best way to do this is with one template, and a Condition on the resource. Route53 does not allow null strings for records.

Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-sectionstructure.html
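The Parameter-plus-Condition pattern from answer A can be sketched as follows; the parameter values and DNS names are hypothetical:

```yaml
Parameters:
  Environment:
    Type: String
    AllowedValues: [production, staging, development]

Conditions:
  NotProduction: !Not [!Equals [!Ref Environment, production]]

Resources:
  DevDnsRecord:
    Type: AWS::Route53::RecordSet
    Condition: NotProduction          # record is created only outside production
    Properties:
      HostedZoneName: example.com.    # hypothetical hosted zone
      Name: dev.example.com.
      Type: CNAME
      TTL: '300'
      ResourceRecords:
        - internal.example.com
```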

Ansible supports running playbooks on the host directly or via SSH. How can Ansible be told to run its playbooks directly on the host?

A.
Setting ‘connection: local’ in the tasks that run locally.
B.
Specifying ‘-type local’ on the command line.
C.
It does not need to be specified; it is the default.
D.
Setting ‘connection: local’ in the Playbook.
Suggested answer: D

Explanation:

Ansible can be told to run locally on the command line with the ‘-c’ option or can be told via the ‘connection: local’ declaration in the playbook. The default connection method is ‘remote’.

Reference: http://docs.ansible.com/ansible/intro_inventory.html#non-ssh-connection-types
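A minimal playbook showing the `connection: local` declaration from answer D; the task itself is just an illustrative placeholder:

```yaml
- hosts: localhost
  connection: local   # run tasks directly on this host instead of over SSH
  tasks:
    - name: Example task executed locally
      command: uptime
```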

A company is deploying a new application that uses Amazon EC2 instances. The company needs a solution to query application logs and AWS account API activity. Which solution will meet these requirements?

A.
Use the Amazon CloudWatch agent to send logs from the EC2 instances to Amazon CloudWatch Logs. Configure AWS CloudTrail to deliver the API logs to Amazon S3. Use CloudWatch to query both sets of logs.
B.
Use the Amazon CloudWatch agent to send logs from the EC2 instances to Amazon CloudWatch Logs. Configure AWS CloudTrail to deliver the API logs to CloudWatch Logs. Use CloudWatch Logs Insights to query both sets of logs.
C.
Use the Amazon CloudWatch agent to send logs from the EC2 instances to Amazon Kinesis. Configure AWS CloudTrail to deliver the API logs to Kinesis. Use Kinesis to load the data into Amazon Redshift. Use Amazon Redshift to query both sets of logs.
D.
Use the Amazon CloudWatch agent to send logs from the EC2 instances to Amazon S3. Use AWS CloudTrail to deliver the API logs to Amazon S3. Use Amazon Athena to query both sets of logs in Amazon S3.
Suggested answer: D

A rapidly growing company wants to scale for Developer demand for AWS development environments. Development environments are created manually in the AWS Management Console. The Networking team uses AWS CloudFormation to manage the networking infrastructure, exporting stack output values for the Amazon VPC and all subnets. The development environments have common standards, such as Application Load Balancers, Amazon EC2 Auto Scaling groups, security groups, and Amazon DynamoDB tables.

To keep up with the demand, the DevOps Engineer wants to automate the creation of development environments. Because the infrastructure required to support the application is expected to grow, there must be a way to easily update the deployed infrastructure. CloudFormation will be used to create a template for the development environments. Which approach will meet these requirements and quickly provide consistent AWS environments for Developers?

A.
Use Fn::ImportValue intrinsic functions in the Resources section of the template to retrieve Virtual Private Cloud (VPC) and subnet values. Use CloudFormation StackSets for the development environments, using the Count input parameter to indicate the number of environments needed. Use the UpdateStackSet command to update existing development environments.
B.
Use nested stacks to define common infrastructure components. To access the exported values, use TemplateURL to reference the Networking team’s template. To retrieve Virtual Private Cloud (VPC) and subnet values, use Fn::ImportValue intrinsic functions in the Parameters section of the master template. Use the CreateChangeSet and ExecuteChangeSet commands to update existing development environments.
C.
Use nested stacks to define common infrastructure components. Use Fn::ImportValue intrinsic functions with the resources of the nested stack to retrieve Virtual Private Cloud (VPC) and subnet values. Use the CreateChangeSet and ExecuteChangeSet commands to update existing development environments.
D.
Use Fn::ImportValue intrinsic functions in the Parameters section of the master template to retrieve Virtual Private Cloud (VPC) and subnet values. Define the development resources in the order they need to be created in the CloudFormation nested stacks. Use the CreateChangeSet and ExecuteChangeSet commands to update existing development environments.
Suggested answer: A
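Consuming the Networking team's exported stack outputs in the Resources section looks roughly like this; the export name `NetworkStack-VpcId` is a hypothetical placeholder for whatever the networking template actually exports:

```yaml
Resources:
  AppSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Development environment security group
      VpcId: !ImportValue NetworkStack-VpcId   # export name is an assumption
```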

You currently run your infrastructure on Amazon EC2 instances behind an Auto Scaling group. All logs for your application are currently written to ephemeral storage. Recently your company experienced a major bug in code that made it through testing and was ultimately deployed to your fleet. This bug triggered your Auto Scaling group to scale up and back down before you could successfully retrieve the logs off your servers to better assist you in troubleshooting the bug. Which technique should you use to make sure you are able to review your logs after your instances have shut down?

A.
Configure the ephemeral policies on your Auto Scaling group to back up on terminate.
B.
Configure your Auto Scaling policies to create a snapshot of all ephemeral storage on terminate.
C.
Install the CloudWatch Logs Agent on your AMI, and configure the CloudWatch Logs Agent to stream your logs.
D.
Install the CloudWatch monitoring agent on your AMI, and set up a new SNS alert for CloudWatch metrics that triggers the CloudWatch monitoring agent to back up all logs on the ephemeral drive.
E.
Install the CloudWatch monitoring agent on your AMI, and update your Auto Scaling policy to enable automated CloudWatch Log copy.
Suggested answer: C
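Streaming instance logs off the box, as in answer C, is configured in the agent's config file. A sketch using the unified CloudWatch agent's `logs` section is shown below; the file path and log group/stream names are placeholders:

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/app/application.log",
            "log_group_name": "my-app-logs",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
```

Because log lines are shipped to CloudWatch Logs continuously, they survive the instance's termination when the Auto Scaling group scales in.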

A DevOps Engineer wants to prevent Developers from pushing updates directly to the company’s master branch in AWS CodeCommit. These updates should be approved before they are merged. Which solution will meet these requirements?

A.
Configure an IAM role for the Developers with access to CodeCommit and an explicit deny for write actions when the reference is the master. Allow Developers to use feature branches and create a pull request when a feature is complete. Allow an approver to use CodeCommit to view the changes and approve the pull requests.
B.
Configure an IAM role for the Developers to use feature branches and create a pull request when a feature is complete. Allow CodeCommit to test all code in the feature branches, and dynamically modify the IAM role to allow merging the feature branches into the master. Allow an approver to use CodeCommit to view the changes and approve the pull requests.
C.
Configure an IAM role for the Developers to use feature branches and create a pull request when a feature is complete. Allow CodeCommit to test all code in the feature branches, and issue a new AWS Security Token Service (STS) token allowing a one-time API call to merge the feature branches into the master. Allow an approver to use CodeCommit to view the changes and approve the pull requests.
D.
Configure an IAM role for the Developers with access to CodeCommit and attach an access policy to the CodeCommit repository that denies the Developers role access when the reference is master. Allow Developers to use feature branches and create a pull request when a feature is complete. Allow an approver to use CodeCommit to view the changes and approve the pull requests.
Suggested answer: A
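The explicit deny in answer A is typically expressed with the `codecommit:References` condition key. A sketch of such a policy statement follows; the region, account ID, and repository name are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyPushToMaster",
      "Effect": "Deny",
      "Action": [
        "codecommit:GitPush",
        "codecommit:DeleteBranch",
        "codecommit:PutFile",
        "codecommit:MergePullRequestByFastForward"
      ],
      "Resource": "arn:aws:codecommit:us-east-1:111111111111:MyRepo",
      "Condition": {
        "StringEqualsIfExists": {
          "codecommit:References": ["refs/heads/master"]
        },
        "Null": {
          "codecommit:References": "false"
        }
      }
    }
  ]
}
```

With this attached, Developers can still push to feature branches and open pull requests, but any write that targets `refs/heads/master` is denied.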

You work for an insurance company and are responsible for the day-to-day operations of your company's online quote system used to provide insurance quotes to members of the public. Your company wants to use the application logs generated by the system to better understand customer behavior. Industry regulations also require that you retain all application logs for the system indefinitely in order to investigate fraudulent claims in the future. You have been tasked with designing a log management system with the following requirements:

- All log entries must be retained by the system, even during unplanned instance failure.

- The customer insight team requires immediate access to the logs from the past seven days.

- The fraud investigation team requires access to all historic logs, but will wait up to 24 hours before these logs are available. How would you meet these requirements in a cost-effective manner? (Choose three.)

A.
Configure your application to write logs to the instance's ephemeral disk, because this storage is free and has good write performance. Create a script that moves the logs from the instance to Amazon S3 once an hour.
B.
Write a script that is configured to be executed when the instance is stopped or terminated and that will upload any remaining logs on the instance to Amazon S3.
C.
Create an Amazon S3 lifecycle configuration to move log files from Amazon S3 to Amazon Glacier after seven days.
D.
Configure your application to write logs to the instance's default Amazon EBS boot volume, because this storage already exists. Create a script that moves the logs from the instance to Amazon S3 once an hour.
E.
Configure your application to write logs to a separate Amazon EBS volume with the "delete on termination" field set to false. Create a script that moves the logs from the instance to Amazon S3 once an hour.
F.
Create a housekeeping script that runs on a T2 micro instance managed by an Auto Scaling group for high availability. The script uses the AWS API to identify any unattached Amazon EBS volumes containing log files. Your housekeeping script will mount the Amazon EBS volume, upload all logs to Amazon S3, and then delete the volume.
Suggested answer: C, E, F
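The lifecycle rule from answer C can be sketched as an S3 lifecycle configuration like the one below; the rule ID and `logs/` prefix are placeholders:

```json
{
  "Rules": [
    {
      "ID": "ArchiveLogsAfterSevenDays",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 7, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

This keeps the last seven days of logs immediately queryable in S3 for the customer insight team, while older logs transition to Glacier, where the fraud team's 24-hour retrieval tolerance makes the slower restore acceptable.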
Total 557 questions