
Amazon DOP-C01 Practice Test - Questions Answers, Page 29


A company has multiple development teams sharing one AWS account. The development teams' manager wants to be able to automatically stop Amazon EC2 instances and receive notifications if resources are idle and not tagged as production resources.

Which solution will meet these requirements?

A.
Use a scheduled Amazon CloudWatch Events rule to filter for Amazon EC2 instance status checks and identify idle EC2 instances. Use the CloudWatch Events rule to target an AWS Lambda function to stop non-production instances and send notifications.
B.
Use a scheduled Amazon CloudWatch Events rule to filter AWS Systems Manager events and identify idle EC2 instances and resources. Use the CloudWatch Events rule to target an AWS Lambda function to stop non-production instances and send notifications.
C.
Use a scheduled Amazon CloudWatch Events rule to target a custom AWS Lambda function that runs AWS Trusted Advisor checks. Create a second CloudWatch Events rule to filter events from Trusted Advisor to trigger a Lambda function to stop idle non-production instances and send notifications.
D.
Use a scheduled Amazon CloudWatch Events rule to target Amazon Inspector events for idle EC2 instances. Use the CloudWatch Events rule to target the AWS Lambda function to stop non-production instances and send notifications.
Suggested answer: C
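
As an illustration of option C, a minimal Lambda sketch in Python might look like the following. It assumes a Business or Enterprise Support plan (required for the Trusted Advisor API), an Environment=production tagging convention, and a hypothetical SNS topic and check ID; verify the check ID and metadata layout for your account.

```python
import boto3

# Hypothetical identifiers -- replace with values from your account.
CHECK_ID = "Qch7DwouX1"  # "Low Utilization Amazon EC2 Instances" Trusted Advisor check
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:idle-instance-alerts"

support = boto3.client("support", region_name="us-east-1")  # Support API lives in us-east-1
ec2 = boto3.client("ec2")
sns = boto3.client("sns")

def handler(event, context):
    """Stop idle EC2 instances that are not tagged as production, then notify."""
    result = support.describe_trusted_advisor_check_result(checkId=CHECK_ID, language="en")
    # Each flagged resource's metadata list includes the instance ID
    # (the exact position varies by check -- verify for your check).
    instance_ids = [r["metadata"][1] for r in result["result"]["flaggedResources"]]
    stopped = []
    for instance_id in instance_ids:
        tags = ec2.describe_tags(
            Filters=[{"Name": "resource-id", "Values": [instance_id]}]
        )["Tags"]
        tag_map = {t["Key"]: t["Value"] for t in tags}
        if tag_map.get("Environment") != "production":  # assumed tagging convention
            ec2.stop_instances(InstanceIds=[instance_id])
            stopped.append(instance_id)
    if stopped:
        sns.publish(
            TopicArn=SNS_TOPIC_ARN,
            Message=f"Stopped idle non-production instances: {stopped}",
        )
```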

A development team manually builds an artifact locally and then places it in an Amazon S3 bucket. The application has a local cache that must be cleared when a deployment occurs. The team executes a command to do this, downloads the artifact from Amazon S3, and unzips the artifact to complete the deployment.

A DevOps team wants to migrate to a CI/CD process and build in checks to stop and roll back the deployment when a failure occurs. This requires the team to track the progression of the deployment. Which combination of actions will accomplish this? (Choose three.)

A.
Allow developers to check the code into a code repository. Using Amazon CloudWatch Events, on every pull into master, trigger an AWS Lambda function to build the artifact and store it in Amazon S3.
B.
Create a custom script to clear the cache. Specify the script in the BeforeInstall lifecycle hook in the AppSpec file.
C.
Create user data for each Amazon EC2 instance that contains the clear cache script. Once deployed, test the application. If it is not successful, deploy it again.
D.
Set up AWS CodePipeline to deploy the application. Allow developers to check the code into a code repository as a source for the pipeline.
E.
Use AWS CodeBuild to build the artifact and place it in Amazon S3. Use AWS CodeDeploy to deploy the artifact to Amazon EC2 instances.
F.
Use AWS Systems Manager to fetch the artifact from Amazon S3 and deploy it to all the instances.
Suggested answer: B, D, E

Explanation:

A BeforeInstall lifecycle hook clears the local cache on every deployment, AWS CodePipeline with a code repository source tracks the progression of each deployment, and AWS CodeBuild with AWS CodeDeploy provides managed build and deploy stages that can stop and roll back automatically when a failure occurs.
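
A hedged sketch of option B's AppSpec file, generated here in Python for consistency with the other examples; the script path, destination directory, and timeout are hypothetical placeholders:

```python
import yaml  # pip install pyyaml

# A minimal AppSpec structure for an EC2/on-premises CodeDeploy deployment.
# Paths and timeout values are hypothetical placeholders.
appspec = {
    "version": 0.0,
    "os": "linux",
    "files": [
        {"source": "/", "destination": "/opt/myapp"},
    ],
    "hooks": {
        # BeforeInstall runs before the new revision is copied in --
        # the right place to clear the application's local cache.
        "BeforeInstall": [
            {"location": "scripts/clear_cache.sh", "timeout": 300, "runas": "root"},
        ],
    },
}

with open("appspec.yml", "w") as f:
    yaml.safe_dump(appspec, f, sort_keys=False)
```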

A web application has been deployed as an AWS Elastic Beanstalk application. The Application Developers are concerned that they are seeing high latency in two different areas of the application:

HTTP client requests to a third-party API

MySQL client library queries to an Amazon RDS database

A DevOps Engineer must gather trace data to diagnose the issues.

Which steps will gather the trace information with the LEAST amount of change and performance impact to the application?

A.
Add additional logging to the application code. Use the Amazon CloudWatch agent to stream the application logs into Amazon Elasticsearch Service. Query the log data in Amazon ES.
B.
Instrument the application to use the AWS X-Ray SDK. Post trace data to an Amazon Elasticsearch Service cluster. Query the trace data for calls to the HTTP client and the MySQL client.
C.
On the AWS Elastic Beanstalk management page for the application, enable the AWS X-Ray daemon. View the trace data in the X-Ray console.
D.
Instrument the application using the AWS X-Ray SDK. On the AWS Elastic Beanstalk management page for the application, enable the X-Ray daemon. View the trace data in the X-Ray console.
Suggested answer: D

Explanation:

Reference: https://docs.aws.amazon.com/xray/latest/devguide/xray-gettingstarted.html
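
To illustrate option D's application-side change, here is a minimal Python sketch using the AWS X-Ray SDK (aws-xray-sdk). The third-party API URL and MySQL connection settings are hypothetical placeholders; enabling the X-Ray daemon in the Elastic Beanstalk console gives the SDK a local daemon on the instance to receive the trace data.

```python
from aws_xray_sdk.core import xray_recorder, patch_all
import requests
import pymysql

patch_all()  # patch supported clients (requests, pymysql, ...) to emit subsegments

# In a real web app, the X-Ray middleware opens a segment per request;
# in_segment stands in for that here.
with xray_recorder.in_segment("latency-investigation"):
    # The outbound HTTP call is recorded as a subsegment with timing data.
    requests.get("https://api.example.com/data")  # hypothetical third-party API
    # The MySQL query is recorded the same way via the patched pymysql client.
    conn = pymysql.connect(
        host="mydb.example.com", user="app", password="...", db="app"
    )
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
```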

An application is running on Amazon EC2. It has an attached IAM role that is receiving an AccessDenied error while trying to access a SecureString parameter resource in the AWS Systems Manager Parameter Store. The SecureString parameter is encrypted with a customer-managed Customer Master Key (CMK). What steps should the DevOps Engineer take to grant access to the role while granting least privilege? (Choose three.)

A.
Set ssm:GetParameter for the parameter resource in the instance role's IAM policy.
B.
Set kms:Decrypt for the instance role in the customer-managed CMK policy.
C.
Set kms:Decrypt for the customer-managed CMK resource in the role's IAM policy.
D.
Set ssm:DecryptParameter for the parameter resource in the instance role IAM policy.
E.
Set kms:GenerateDataKey for the user on the AWS managed SSM KMS key.
F.
Set kms:Decrypt for the parameter resource in the customer-managed CMK policy.
Suggested answer: A, B, C

Explanation:

Reference: https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-access.html
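
A minimal sketch of the policies behind options A, B, and C; the account ID, parameter name, key ID, and role name are hypothetical placeholders:

```python
import json

# Identity-based policy for the instance role (options A and C).
instance_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Option A: read only the specific parameter
            "Effect": "Allow",
            "Action": "ssm:GetParameter",
            "Resource": "arn:aws:ssm:us-east-1:123456789012:parameter/prod/db-password",
        },
        {   # Option C: decrypt only with the specific customer-managed CMK
            "Effect": "Allow",
            "Action": "kms:Decrypt",
            "Resource": "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
        },
    ],
}

# Option B: the CMK's key policy must also allow the instance role to decrypt.
cmk_key_policy_statement = {
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::123456789012:role/app-instance-role"},
    "Action": "kms:Decrypt",
    "Resource": "*",
}

print(json.dumps(instance_role_policy, indent=2))
print(json.dumps(cmk_key_policy_statement, indent=2))
```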

A company has an application that has predictable peak traffic times. The company wants the application instances to scale up only during the peak times. The application stores state in Amazon DynamoDB. The application environment uses a standard Node.js application stack and custom Chef recipes stored in a private Git repository. Which solution is MOST cost-effective and requires the LEAST amount of management overhead when performing rolling updates of the application environment?

A.
Create a custom AMI with the Node.js environment and application stack using Chef recipes. Use the AMI in an Auto Scaling group and set up scheduled scaling for the required times, then set up an Amazon EC2 IAM role that provides permission to access DynamoDB.
B.
Create a Dockerfile that uses the Chef recipes for the application environment based on an official Node.js Docker image. Create an Amazon ECS cluster and a service for the application environment, then create a task based on this Docker image. Use scheduled scaling to scale the containers at the appropriate times and attach a task-level IAM role that provides permission to access DynamoDB.
C.
Configure AWS OpsWorks stacks and use custom Chef cookbooks. Add the Git repository information where the custom recipes are stored, and add a layer in OpsWorks for the Node.js application server. Then configure the custom recipe to deploy the application in the deploy step. Configure time-based instances and attach an Amazon EC2 IAM role that provides permission to access DynamoDB.
D.
Configure AWS OpsWorks stacks and push the custom recipes to an Amazon S3 bucket and configure custom recipes to point to the S3 bucket. Then add an application layer type for a standard Node.js application server and configure the custom recipe to deploy the application in the deploy step from the S3 bucket. Configure time-based instances and attach an Amazon EC2 IAM role that provides permission to access DynamoDB.
Suggested answer: C
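
A hedged boto3 sketch of option C's OpsWorks setup; all ARNs, repository URLs, recipe names, and the scaling schedule are hypothetical placeholders:

```python
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

# Stack with custom Chef cookbooks pulled from a private Git repository.
stack = opsworks.create_stack(
    Name="app-stack",
    Region="us-east-1",
    ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",
    DefaultInstanceProfileArn="arn:aws:iam::123456789012:instance-profile/opsworks-dynamodb-access",
    UseCustomCookbooks=True,
    CustomCookbooksSource={
        "Type": "git",
        "Url": "git@github.com:example/chef-recipes.git",
        "SshKey": "-----BEGIN RSA PRIVATE KEY----- ...",
    },
)

# Built-in Node.js App Server layer, with a custom recipe in the deploy step.
layer = opsworks.create_layer(
    StackId=stack["StackId"],
    Type="nodejs-app",
    Name="node-app",
    Shortname="node",
    CustomRecipes={"Deploy": ["myapp::deploy"]},
)

# Time-based instance that runs only during the predictable peak window.
instance = opsworks.create_instance(
    StackId=stack["StackId"],
    LayerIds=[layer["LayerId"]],
    InstanceType="t3.medium",
    AutoScalingType="timer",
)

opsworks.set_time_based_auto_scaling(
    InstanceId=instance["InstanceId"],
    AutoScalingSchedule={"Monday": {str(h): "on" for h in range(9, 18)}},  # hours in UTC
)
```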

A company has developed a Node.js web application which provides REST services to store and retrieve time series data. The web application is built by the Development team on company laptops, tested locally, and manually deployed to a single on-premises server, which accesses a local MySQL database. The company is starting a trial in two weeks, during which the application will undergo frequent updates based on customer feedback. The following requirements must be met:

- The team must be able to reliably build, test, and deploy new updates on a daily basis, without downtime or degraded performance.

- The application must be able to scale to meet an unpredictable number of concurrent users during the trial.

Which action will allow the team to quickly meet these objectives?

A.
Create two Amazon Lightsail virtual private servers for Node.js; one for test and one for production. Build the Node.js application using existing processes and upload it to the new Lightsail test server using the AWS CLI. Test the application, and if it passes all tests, upload it to the production server. During the trial, monitor the production server usage, and if needed, increase performance by upgrading the instance type.
B.
Develop an AWS CloudFormation template to create an Application Load Balancer and two Amazon EC2 instances with Amazon EBS (SSD) volumes in an Auto Scaling group with rolling updates enabled. Use AWS CodeBuild to build and test the Node.js application and store it in an Amazon S3 bucket. Use user-data scripts to install the application and the MySQL database on each EC2 instance. Update the stack to deploy new application versions.
C.
Configure AWS Elastic Beanstalk to automatically build the application using AWS CodeBuild and to deploy it to a test environment that is configured to support auto scaling. Create a second Elastic Beanstalk environment for production. Use Amazon RDS to store data. When new versions of the applications have passed all tests, use Elastic Beanstalk 'swap cname' to promote the test environment to production.
D.
Modify the application to use Amazon DynamoDB instead of a local MySQL database. Use AWS OpsWorks to create a stack for the application with a DynamoDB layer, an Application Load Balancer layer, and an Amazon EC2 instance layer. Use a Chef recipe to build the application and a Chef recipe to deploy the application to the EC2 instance layer. Use custom health checks to run unit tests on each instance with rollback on failure.
Suggested answer: C
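
A minimal sketch of option C's promotion step; the environment names are hypothetical placeholders. Swapping CNAMEs redirects traffic to the already-tested environment, so the cutover happens without downtime.

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Swap the CNAMEs of the test and production environments once tests pass.
eb.swap_environment_cnames(
    SourceEnvironmentName="myapp-test",
    DestinationEnvironmentName="myapp-prod",
)
```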

A company wants to use Amazon DynamoDB for maintaining metadata on its forums. See the sample data set in the image below.

A DevOps Engineer is required to define the table schema with the partition key, the sort key, the local secondary index, projected attributes, and fetch operations. The schema should support the following example searches using the least provisioned read capacity units to minimize cost.

- Search within ForumName for items where the subject starts with 'a'.

- Search forums within the given LastPostDateTime time frame.

- Return the thread value where LastPostDateTime is within the last three months.

Which schema meets the requirements?

A.
Use Subject as the primary key and ForumName as the sort key. Have LSI with LastPostDateTime as the sort key and fetch operations for thread.
B.
Use ForumName as the primary key and Subject as the sort key. Have LSI with LastPostDateTime as the sort key and the projected attribute thread.
C.
Use ForumName as the primary key and Subject as the sort key. Have LSI with Thread as the sort key and the projected attribute LastPostDateTime.
D.
Use Subject as the primary key and ForumName as the sort key. Have LSI with Thread as the sort key and fetch operations for LastPostDateTime.
Suggested answer: B

Explanation:

With ForumName as the partition key and Subject as the sort key, a query can target a single forum and use begins_with on Subject. The local secondary index with LastPostDateTime as its sort key supports time-frame queries, and projecting the thread attribute into the index returns it without fetch operations, minimizing the consumed read capacity units.
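
A minimal boto3 sketch of the option B schema; capacity values and the exact attribute capitalization are hypothetical placeholders based on the question text:

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="Forums",
    AttributeDefinitions=[
        {"AttributeName": "ForumName", "AttributeType": "S"},
        {"AttributeName": "Subject", "AttributeType": "S"},
        {"AttributeName": "LastPostDateTime", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "ForumName", "KeyType": "HASH"},
        {"AttributeName": "Subject", "KeyType": "RANGE"},
    ],
    LocalSecondaryIndexes=[
        {
            "IndexName": "LastPostIndex",
            "KeySchema": [
                {"AttributeName": "ForumName", "KeyType": "HASH"},
                {"AttributeName": "LastPostDateTime", "KeyType": "RANGE"},
            ],
            # Project only the thread attribute so queries need no fetches.
            "Projection": {"ProjectionType": "INCLUDE", "NonKeyAttributes": ["Thread"]},
        }
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)

# Example query: subjects starting with 'a' within one forum.
dynamodb.query(
    TableName="Forums",
    KeyConditionExpression="ForumName = :f AND begins_with(Subject, :s)",
    ExpressionAttributeValues={":f": {"S": "AWS"}, ":s": {"S": "a"}},
)
```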

Some of your EC2 instances are configured to use a proxy. Can you use Amazon Inspector for regular assessments of instances behind a proxy?

A.
Only Windows-based systems are supported, as Linux-based systems use custom configurations that are not supported by AWS Agent in the current release.
B.
Only Linux-based systems are supported, and AWS agent supports HTTPS proxy on these systems.
C.
No, AWS Agent does NOT support proxy environments.
D.
Yes, AWS Agent supports proxy environments on both Linux-based and Windows-based systems.
Suggested answer: D

Explanation:

The AWS agent supports proxy environments. For Linux instances, Inspector supports an HTTPS proxy, and for Windows instances, it supports a WinHTTP proxy.

Reference:

https://docs.aws.amazon.com/inspector/latest/userguide/inspector_agents.html
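
A hedged sketch of configuring the agent's proxy on a Linux instance, following the referenced documentation; the proxy URL is a hypothetical placeholder, and the exact file path and variable name should be verified against the current docs:

```python
# Write the agent's proxy environment file (run with root privileges).
# The https_proxy value is a hypothetical placeholder.
PROXY_ENV = "export https_proxy=http://proxy.example.com:3128\n"

with open("/etc/init.d/awsagent.env", "w") as f:
    f.write(PROXY_ENV)

# Restart the agent afterwards (e.g., `sudo /etc/init.d/awsagent restart`)
# so it picks up the proxy setting.
```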

A company is hosting a web application in an AWS Region. For disaster recovery purposes, a second region is being used as a standby. Disaster recovery requirements state that session data must be replicated between regions in near-real time and 1% of requests should route to the secondary region to continuously verify system functionality. Additionally, if there is a disruption in service in the main region, traffic should be automatically routed to the secondary region, and the secondary region must be able to scale up to handle all traffic. How should a DevOps Engineer meet these requirements?

A.
In both regions, deploy the application on AWS Elastic Beanstalk and use Amazon DynamoDB global tables for session data. Use an Amazon Route 53 weighted routing policy with health checks to distribute the traffic across the regions.
B.
In both regions, launch the application in Auto Scaling groups and use DynamoDB for session data. Use a Route 53 failover routing policy with health checks to distribute the traffic across the regions.
C.
In both regions, deploy the application in AWS Lambda, exposed by Amazon API Gateway, and use Amazon RDS PostgreSQL with cross-region replication for session data. Deploy the web application with client-side logic to call the API Gateway directly.
D.
In both regions, launch the application in Auto Scaling groups and use DynamoDB global tables for session data. Enable an Amazon CloudFront weighted distribution across regions. Point the Amazon Route 53 DNS record at the CloudFront distribution.
Suggested answer: A
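
A minimal boto3 sketch of option A's weighted routing; the hosted zone ID, record name, ALB targets, and health check IDs are hypothetical placeholders. With 99/1 weights, 1% of requests continuously exercise the standby region, and health checks let Route 53 stop sending traffic to a failed region.

```python
import boto3

route53 = boto3.client("route53")

def weighted_record(set_id, weight, alb_dns, alb_zone_id, health_check_id):
    """Build one weighted alias record pointing at a regional ALB."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": set_id,
            "Weight": weight,
            "HealthCheckId": health_check_id,
            "AliasTarget": {
                "HostedZoneId": alb_zone_id,
                "DNSName": alb_dns,
                "EvaluateTargetHealth": True,
            },
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",
    ChangeBatch={"Changes": [
        weighted_record("primary", 99, "primary-alb.us-east-1.elb.amazonaws.com",
                        "Z35SXDOTRQ7X7K", "hc-primary"),
        weighted_record("secondary", 1, "standby-alb.us-west-2.elb.amazonaws.com",
                        "Z1H1FL5HABSF5", "hc-secondary"),
    ]},
)
```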

What storage driver does Docker generally recommend that you use if it is available?

A.
zfs
B.
btrfs
C.
aufs
D.
overlay
Suggested answer: C

Explanation:

After you have read the storage driver overview, the next step is to choose the best storage driver for your workloads. If multiple storage drivers are supported by your kernel and no storage driver is explicitly configured, Docker works through a prioritized list and uses the first driver whose prerequisites are met: if aufs is available, Docker defaults to it because it is the oldest storage driver. However, aufs is not universally available.

Reference:

https://docs.docker.com/engine/userguide/storagedriver/selectadriver/
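
A small sketch for checking which storage driver a local Docker daemon selected, and for pinning one explicitly; it assumes the Docker CLI is installed and the daemon is running:

```python
import json
import subprocess

# Ask the daemon which storage driver it chose from its prioritized list.
driver = subprocess.check_output(
    ["docker", "info", "--format", "{{.Driver}}"], text=True
).strip()
print(f"Docker is using the '{driver}' storage driver")

# To pin a driver explicitly, set "storage-driver" in /etc/docker/daemon.json
# and restart the daemon. Example daemon.json contents:
print(json.dumps({"storage-driver": "aufs"}, indent=2))
```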
