
Amazon DOP-C01 Practice Test - Questions Answers, Page 26

A company’s web application will be migrated to AWS. The application is designed so that no server-side code is required. As part of the migration, the company would like to improve the security of the application by adding HTTP response headers, following the Open Web Application Security Project (OWASP) secure headers recommendations. How can this solution be implemented to meet the security requirements using best practices?

A. Use an Amazon S3 bucket configured for website hosting, then set up server access logging on the S3 bucket to track user activity. Configure the static website hosting and run a scheduled AWS Lambda function to verify that the security headers are present in the metadata, adding them if they are missing.
B. Use an Amazon S3 bucket configured for website hosting, then set up server access logging on the S3 bucket to track user activity. Configure the static website hosting to return the required security headers.
C. Use an Amazon S3 bucket configured for website hosting. Create an Amazon CloudFront distribution that refers to this S3 bucket, with the origin response event set to trigger a Lambda@Edge Node.js function that adds the security headers.
D. Use an Amazon S3 bucket configured for website hosting. Create an Amazon CloudFront distribution that refers to this S3 bucket. Set “Cache Based on Selected Request Headers” to “Whitelist,” and add the security headers to the whitelist.
Suggested answer: C
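
For illustration, a minimal sketch of the origin-response handler described in option C, shown here in Python (the option names Node.js; Lambda@Edge supports both runtimes). The header values are placeholders to adapt to the OWASP secure headers recommendations.

```python
# Lambda@Edge origin-response handler: adds OWASP-recommended security
# headers to every response CloudFront fetches from the S3 origin.
def handler(event, context):
    response = event["Records"][0]["cf"]["response"]
    headers = response["headers"]

    # Illustrative values; tune to the application's actual policy.
    security_headers = {
        "strict-transport-security": "max-age=63072000; includeSubDomains; preload",
        "content-security-policy": "default-src 'self'",
        "x-content-type-options": "nosniff",
        "x-frame-options": "DENY",
        "referrer-policy": "same-origin",
    }

    # CloudFront represents each header as a list of {key, value} pairs.
    for name, value in security_headers.items():
        headers[name] = [{"key": name, "value": value}]

    return response
```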

A company has a legacy application running on AWS. The application can only run on one Amazon EC2 instance at a time. Application metadata is stored in Amazon S3 and must be retrieved if the instance is restarted. The instance should be automatically restarted or relaunched if performance degrades. Which solution will satisfy these requirements?

A. Create an Amazon CloudWatch alarm to monitor the EC2 instance. When the StatusCheckFailed system alarm is triggered, use the recover action to stop and start the instance. Use a trigger in Amazon S3 to push the metadata to the instance when it is back up and running.
B. Use the auto healing feature in AWS OpsWorks to stop and start the EC2 instance. Use a lifecycle event in OpsWorks to pull the data from Amazon S3 and update it on the instance.
C. Use the Auto Recovery feature in Amazon EC2 to automatically stop and start the EC2 instance in case of a failure. Use a trigger in Amazon S3 to push the metadata to the instance when it is back up and running.
D. Use AWS CloudFormation to create an EC2 instance that includes the user-data property for the EC2 resource. Add a command in user-data to retrieve the application metadata from Amazon S3.
Suggested answer: D

Explanation:

Reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/deploying.applications.html
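
For illustration, a minimal sketch of option D using boto3; the AMI ID, instance profile, bucket, and key are hypothetical. The UserData script pulls the application metadata from Amazon S3 on every launch (the instance profile must allow s3:GetObject, and the AMI must include the AWS CLI).

```python
import boto3

# Inline template: the EC2 resource's UserData retrieves the metadata at boot.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0            # placeholder AMI
      InstanceType: t3.micro
      IamInstanceProfile: app-instance-profile  # needs s3:GetObject
      UserData:
        Fn::Base64: |
          #!/bin/bash
          aws s3 cp s3://example-app-bucket/metadata.json /opt/app/metadata.json
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="legacy-app", TemplateBody=TEMPLATE)
```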

A DevOps engineer is assisting with a multi-Region disaster recovery solution for a new application. The application consists of Amazon EC2 instances running in an Auto Scaling group and an Amazon Aurora MySQL DB cluster. The application must be available with an RTO of 120 minutes and an RPO of 60 minutes.

What is the MOST cost-effective way to meet these requirements?

A. Launch an Aurora DB cluster as an Aurora Replica in a different Region. Create an AWS CloudFormation template for all compute resources and create a stack in two Regions. Write a script that promotes the Aurora Replica to the primary instance in the event of a failure.
B. Launch an Aurora DB cluster as an Aurora Replica in a different Region and configure automatic cross-Region failover. Create an AWS CloudFormation template that includes an Auto Scaling group, and create a stack in two Regions. Write a script that updates the CloudFormation stack in the disaster recovery Region to increase the number of instances.
C. Use AWS Lambda to create and copy a snapshot of the Aurora DB cluster to the destination Region hourly. Create an AWS CloudFormation template that includes an Auto Scaling group, and create a stack in two Regions. Restore the Aurora DB cluster from a snapshot and update the Auto Scaling group to start launching instances.
D. Configure Amazon DynamoDB cross-Region replication. Create an AWS CloudFormation template that includes an Auto Scaling group, and create a stack in two Regions. Write a script that will update the CloudFormation stack in the disaster recovery Region and promote the DynamoDB replica to the primary instance in the event of a failure.
Suggested answer: A
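
For illustration, a minimal sketch of the promotion script from option A, assuming a hypothetical cluster identifier and DR Region. Promotion detaches the cross-Region Aurora Replica from its source so it becomes a standalone, writable cluster.

```python
import boto3

# Run against the disaster recovery Region.
rds = boto3.client("rds", region_name="us-west-2")

rds.promote_read_replica_db_cluster(
    DBClusterIdentifier="app-aurora-replica"  # hypothetical identifier
)
```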

A DevOps engineer is developing an application for a company. The application needs to persist files to Amazon S3. The application needs to upload files with different security classifications that the company defines. These classifications include confidential, private, and public. Files that have a confidential classification must not be viewable by anyone other than the user who uploaded them. The application uses the IAM role of the user to call the S3 API operations. The DevOps engineer has modified the application to add a DataClassification tag with the value of confidential and an Owner tag with the uploading user’s ID to each confidential object that is uploaded to Amazon S3. Which set of additional steps must the DevOps engineer take to meet the company’s requirements?

A. Modify the S3 bucket’s ACL to grant bucket-owner-read access to the uploading user’s IAM role. Create an IAM policy that grants s3:GetObject operations on the S3 bucket when aws:ResourceTag/DataClassification equals confidential, and s3:ExistingObjectTag/Owner equals ${aws:userid}. Attach the policy to the IAM roles for users who require access to the S3 bucket.
B. Modify the S3 bucket policy to allow the s3:GetObject action when aws:ResourceTag/DataClassification equals confidential, and s3:ExistingObjectTag/Owner equals ${aws:userid}. Create an IAM policy that grants s3:GetObject operations on the S3 bucket. Attach the policy to the IAM roles for users who require access to the S3 bucket.
C. Modify the S3 bucket policy to allow the s3:GetObject action when aws:ResourceTag/DataClassification equals confidential, and aws:RequestTag/Owner equals ${aws:userid}. Create an IAM policy that grants s3:GetObject operations on the S3 bucket. Attach the policy to the IAM roles for users who require access to the S3 bucket.
D. Modify the S3 bucket’s ACL to grant authenticated-read access when aws:ResourceTag/DataClassification equals confidential, and s3:ExistingObjectTag/Owner equals ${aws:userid}. Create an IAM policy that grants s3:GetObject operations on the S3 bucket. Attach the policy to the IAM roles for users who require access to the S3 bucket.
Suggested answer: B
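
For illustration, a minimal sketch of the bucket policy from option B; the bucket name and account ID are placeholders. S3 exposes object tags to policies through the s3:ExistingObjectTag/<key> condition key, so both tag checks use it here.

```python
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "OwnerOnlyConfidentialRead",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # placeholder
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-classified-bucket/*",
            "Condition": {
                "StringEquals": {
                    "s3:ExistingObjectTag/DataClassification": "confidential",
                    # ${aws:userid} is resolved by IAM at request time.
                    "s3:ExistingObjectTag/Owner": "${aws:userid}",
                }
            },
        }
    ],
}

boto3.client("s3").put_bucket_policy(
    Bucket="example-classified-bucket", Policy=json.dumps(policy)
)
```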

A company is beginning to move to the AWS Cloud. Internal customers are classified into two groups according to their AWS skills: beginners and experts. The DevOps Engineer needs to build a solution to allow beginners to deploy a restricted set of AWS architecture blueprints expressed as AWS CloudFormation templates. Deployment should only be possible into predetermined Virtual Private Clouds (VPCs). However, expert users should be able to deploy blueprints without constraints. Experts should also be able to access other AWS services, as needed. How can the Engineer implement a solution to meet these requirements with the LEAST amount of overhead?

A. Apply constraints to the parameters in the templates, limiting the VPCs available for deployments. Store the templates on Amazon S3. Create an IAM group for beginners and give them access to the templates and CloudFormation. Create a separate group for experts, giving them access to the templates, CloudFormation, and other AWS services.
B. Store the templates on Amazon S3. Use AWS Service Catalog to create a portfolio of products based on those templates. Apply template constraints to the products with rules limiting VPCs available for deployments. Create an IAM group for beginners giving them access to the portfolio. Create a separate group for experts giving them access to the templates, CloudFormation, and other AWS services.
C. Store the templates on Amazon S3. Use AWS Service Catalog to create a portfolio of products based on those templates. Create an IAM role restricting VPCs available for creation of AWS resources. Apply a launch constraint to the products using this role. Create an IAM group for beginners giving them access to the portfolio. Create a separate group for experts giving them access to the portfolio and other AWS services.
D. Create two templates for each architecture blueprint where only one of them limits the VPC available for deployments. Store the templates in Amazon DynamoDB. Create an IAM group for beginners giving them access to the constrained templates and CloudFormation. Create a separate group for experts giving them access to the unconstrained templates, CloudFormation, and other AWS services.
Suggested answer: B
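
For illustration, a minimal sketch of the template constraint from option B, assuming hypothetical portfolio and product IDs, a VpcId parameter in the product's template, and placeholder VPC IDs. The rule rejects launches outside the approved VPCs.

```python
import json
import boto3

sc = boto3.client("servicecatalog")

rules = {
    "Rules": {
        "AllowedVpcsRule": {
            "Assertions": [
                {
                    "Assert": {
                        "Fn::Contains": [
                            ["vpc-0aaa1111", "vpc-0bbb2222"],  # predetermined VPCs
                            {"Ref": "VpcId"},
                        ]
                    },
                    "AssertDescription": "Deployments are allowed only into approved VPCs",
                }
            ]
        }
    }
}

sc.create_constraint(
    PortfolioId="port-exampleid",  # hypothetical
    ProductId="prod-exampleid",    # hypothetical
    Type="TEMPLATE",
    Parameters=json.dumps(rules),
)
```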

Your CTO thinks your AWS account was hacked. What is the only way to know for certain whether there was unauthorized access and what was done, assuming the attackers are very sophisticated AWS engineers who are doing everything they can to cover their tracks?

A. Use CloudTrail Log File Integrity Validation.
B. Use AWS Config SNS Subscriptions and process events in real time.
C. Use CloudTrail backed up to AWS S3 and Glacier.
D. Use AWS Config Timeline forensics.
Suggested answer: A

Explanation:

You must use CloudTrail Log File Validation (default or custom implementation), as any other tracking method is subject to forgery in the event of a full account compromise by sophisticated enough hackers. Validated log files are invaluable in security and forensic investigations. For example, a validated log file enables you to assert positively that the log file itself has not changed, or that particular user credentials performed specific API activity. The CloudTrail log file integrity validation process also lets you know if a log file has been deleted or changed, or assert positively that no log files were delivered to your account during a given period of time.

Reference: http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-intro.html
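
For illustration, a minimal sketch of enabling log file integrity validation on an existing trail, assuming a hypothetical trail name. Once enabled, CloudTrail delivers digest files, and the `aws cloudtrail validate-logs` CLI command can then verify that no log files were changed or deleted.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")
cloudtrail.update_trail(
    Name="management-events-trail",  # hypothetical trail name
    EnableLogFileValidation=True,
)
```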

A company has deployed several applications globally. Recently, Security Auditors found that a few Amazon EC2 instances were launched without Amazon EBS disk encryption. The Auditors have requested a report detailing all EBS volumes that were not encrypted in multiple AWS accounts and regions. They also want to be notified whenever this occurs in the future. How can this be automated with the LEAST amount of operational overhead?

A. Create an AWS Lambda function to set up an AWS Config rule on all the target accounts. Use AWS Config aggregators to collect data from multiple accounts and regions. Export the aggregated report to an Amazon S3 bucket and use Amazon SNS to deliver the notifications.
B. Set up AWS CloudTrail to deliver all events to an Amazon S3 bucket in a centralized account. Use the S3 event notification feature to invoke an AWS Lambda function to parse AWS CloudTrail logs whenever logs are delivered to the S3 bucket. Publish the output to an Amazon SNS topic using the same Lambda function.
C. Create an AWS CloudFormation template that adds an AWS Config managed rule for EBS encryption. Use a CloudFormation stack set to deploy the template across all accounts and regions. Store consolidated evaluation results from config rules in Amazon S3. Send a notification using Amazon SNS when non-compliant resources are detected.
D. Using the AWS CLI, run a script periodically that invokes the aws ec2 describe-volumes query with a JMESPath query filter. Then, write the output to an Amazon S3 bucket. Set up an S3 event notification to send events using Amazon SNS when new data is written to the S3 bucket.
Suggested answer: C
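
For illustration, a minimal sketch of option C using boto3, with placeholder account IDs and regions. The stack set deploys the ENCRYPTED_VOLUMES managed Config rule, which flags unencrypted EBS volumes as non-compliant.

```python
import boto3

TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  EncryptedVolumesRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: ebs-volumes-encrypted
      Source:
        Owner: AWS
        SourceIdentifier: ENCRYPTED_VOLUMES
"""

cfn = boto3.client("cloudformation")
cfn.create_stack_set(StackSetName="ebs-encryption-audit", TemplateBody=TEMPLATE)
cfn.create_stack_instances(
    StackSetName="ebs-encryption-audit",
    Accounts=["111122223333", "444455556666"],  # placeholder account IDs
    Regions=["us-east-1", "eu-west-1"],
)
```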

A software company wants to automate the build process for a project where the code is stored in GitHub. When the repository is updated, source code should be compiled, tested, and pushed to Amazon S3. Which combination of steps would address these requirements? (Choose three.)

A. Add a buildspec.yml file to the source code with build instructions.
B. Configure a GitHub webhook to trigger a build every time a code change is pushed to the repository.
C. Create an AWS CodeBuild project with GitHub as the source repository.
D. Create an AWS CodeDeploy application with the Amazon EC2/On-Premises compute platform.
E. Create an AWS OpsWorks deployment with the install dependencies command.
F. Provision an Amazon EC2 instance to perform the build.
Suggested answer: A, B, C
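
For illustration, a minimal sketch tying answers A, B, and C together with boto3; the repository URL, artifact bucket, and role ARN are placeholders. The buildspec.yml from answer A lives in the repository root, and create_webhook implements answer B (the account must already have an OAuth connection to GitHub for the webhook call to succeed).

```python
import boto3

codebuild = boto3.client("codebuild")

# Answer C: a CodeBuild project with GitHub as the source repository.
codebuild.create_project(
    name="example-build",
    source={"type": "GITHUB", "location": "https://github.com/example/app.git"},
    artifacts={"type": "S3", "location": "example-build-artifacts"},  # placeholder
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/standard:7.0",
        "computeType": "BUILD_GENERAL1_SMALL",
    },
    serviceRole="arn:aws:iam::111122223333:role/example-codebuild-role",  # placeholder
)

# Answer B: trigger a build on every push to the repository.
codebuild.create_webhook(projectName="example-build")
```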

A company runs an application consisting of an AWS CodeDeploy deployment group that uses Auto Scaling and an Application Load Balancer. The application deployments are automated using AWS CodePipeline, which consists of AWS CodeCommit as the source and AWS CodeDeploy as the deployment provider.

After a recent successful deployment, the application experienced an outage for several minutes until the deployment was manually rolled back. A DevOps engineer verified that the pipeline was successful and did not indicate any errors, but found that the code caused the application to become unresponsive after several hours.

Which actions will help to prevent future downtime in similar situations? (Choose two.)

A. Configure a TCP health check for the Auto Scaling target group on a listening port of the application.
B. Configure an HTTP or HTTPS health check for the Auto Scaling target group to check a specific application path.
C. Create a script to test the application health and execute the script during the BeforeInstall lifecycle hook in the CodeDeploy appspec.yml file.
D. Update the CodeDeploy deployment group to roll back automatically to the previous version if the deployment fails.
E. Update the CodeDeploy deployment group to roll back based on a custom Amazon CloudWatch alarm using an application status metric.
Suggested answer: B, E

Explanation:

An HTTP or HTTPS health check against a specific application path lets the target group detect an unresponsive application and replace unhealthy instances, and a rollback triggered by a custom CloudWatch alarm automatically reverts a bad deployment. The BeforeInstall lifecycle hook runs before the new revision is installed, so it cannot validate the new code or catch a failure that appears hours later.
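
For illustration, a minimal sketch of answer E, assuming hypothetical application, deployment group, and alarm names. The deployment group stops and rolls back automatically when the custom application-status alarm fires.

```python
import boto3

codedeploy = boto3.client("codedeploy")
codedeploy.update_deployment_group(
    applicationName="example-app",
    currentDeploymentGroupName="example-dg",
    alarmConfiguration={
        "enabled": True,
        "alarms": [{"name": "app-status-unhealthy"}],  # custom CloudWatch alarm
    },
    autoRollbackConfiguration={
        "enabled": True,
        "events": ["DEPLOYMENT_FAILURE", "DEPLOYMENT_STOP_ON_ALARM"],
    },
)
```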

You have been asked to use your department's existing continuous integration (CI) tool to test a three-tier web architecture defined in an AWS CloudFormation template. The tool already supports AWS APIs and can launch new AWS CloudFormation stacks after polling version control. The CI tool reports on the success of the AWS CloudFormation stack creation by using the DescribeStacks API to look for the CREATE_COMPLETE status. The architecture tiers defined in the template consist of:

- One load balancer

- Five Amazon EC2 instances running the web application

- One multi-AZ Amazon RDS instance

How would you implement this? (Choose two.)

A. Define a WaitCondition and a WaitConditionHandle for the output of a UserData command that does sanity checking of the application's post-install state.
B. Define a CustomResource and write a script that runs architecture-level integration tests through the load balancer to the application and database for the state of multiple tiers.
C. Define a WaitCondition and use a WaitConditionHandle that leverages the AWS SDK to run the DescribeStacks API call until the CREATE_COMPLETE status is returned.
D. Define a CustomResource that leverages the AWS SDK to run the DescribeStacks API call until the CREATE_COMPLETE status is returned.
E. Define a UserDataHandle for the output of a UserData command that does sanity checking of the application's post-install state and runs integration tests on the state of multiple tiers through the load balancer to the application.
F. Define a UserDataHandle for the output of a CustomResource that does sanity checking of the application's post-install state.
Suggested answer: A, B

Explanation:

A WaitCondition paired with a WaitConditionHandle lets the UserData script report the result of its post-install sanity checks back to CloudFormation, and a CustomResource can run architecture-level integration tests across the tiers. A WaitConditionHandle is only a presigned URL and cannot run API calls, and UserDataHandle is not a CloudFormation resource type.
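
For illustration, a minimal sketch of the WaitCondition pattern from answer A; the AMI ID and health-check URL are placeholders. Stack creation only reaches CREATE_COMPLETE if the UserData sanity check signals SUCCESS to the handle before the timeout, so the CI tool's DescribeStacks polling doubles as a test result.

```python
# Template fragment kept as a Python string for consistency with the
# other examples on this page.
TEMPLATE_SNIPPET = """
Resources:
  AppWaitHandle:
    Type: AWS::CloudFormation::WaitConditionHandle
  AppWaitCondition:
    Type: AWS::CloudFormation::WaitCondition
    DependsOn: WebServerInstance
    Properties:
      Handle: !Ref AppWaitHandle
      Timeout: '600'
  WebServerInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0   # placeholder AMI
      InstanceType: t3.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          # Hypothetical post-install sanity check.
          if curl -sf http://localhost/healthcheck; then
            curl -X PUT -H 'Content-Type:' --data '{"Status": "SUCCESS", "Reason": "App healthy", "UniqueId": "web1", "Data": "ok"}' "${AppWaitHandle}"
          fi
"""
```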