
Amazon DOP-C01 Practice Test - Questions Answers, Page 30


A DevOps engineer notices that all Amazon EC2 instances running behind an Application Load Balancer in an Auto Scaling group are failing to respond to user requests. The EC2 instances are also failing target group HTTP health checks. Upon inspection, the engineer notices that the application process was not running on any of the EC2 instances, and there are a significant number of out-of-memory messages in the system logs. The engineer needs to improve the resilience of the application to cope with a potential application memory leak. Monitoring and notifications should be enabled to alert when there is an issue. Which combination of actions will meet these requirements? (Choose two.)

A. Change the Auto Scaling configuration to replace the instances when they fail the load balancer's health checks.
B. Change the target group health check HealthCheckIntervalSeconds parameter to reduce the interval between health checks.
C. Change the target group health checks from HTTP to TCP to check if the port where the application is listening is reachable.
D. Enable the available memory consumption metric within the Amazon CloudWatch dashboard for the entire Auto Scaling group. Create an alarm when the memory utilization is high. Associate an Amazon SNS topic to the alarm to receive notifications when the alarm goes off.
E. Use the Amazon CloudWatch agent to collect the memory utilization of the EC2 instances in the Auto Scaling group. Create an alarm when the memory utilization is high and associate an Amazon SNS topic to receive a notification.
Suggested answer: D, E
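Option E can be sketched concretely. The agent configuration below uses the CloudWatch agent's standard mem_used_percent metric, published under the CWAgent namespace; the alarm name, threshold, and SNS topic ARN are illustrative assumptions, not values from the question.

```python
import json

# CloudWatch agent configuration that collects memory utilization
# (mem_used_percent is a standard metric the agent can publish).
agent_config = {
    "metrics": {
        "namespace": "CWAgent",
        "metrics_collected": {
            "mem": {"measurement": ["mem_used_percent"]}
        },
    }
}

# Parameters for an alarm on high memory utilization that notifies a
# (hypothetical) SNS topic. With boto3 these would be passed to
# cloudwatch.put_metric_alarm(**alarm_params).
alarm_params = {
    "AlarmName": "asg-high-memory",          # assumed name
    "Namespace": "CWAgent",
    "MetricName": "mem_used_percent",
    "Statistic": "Average",
    "Period": 300,
    "EvaluationPeriods": 2,
    "Threshold": 90.0,                        # assumed threshold
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # assumed topic
}

print(json.dumps(agent_config))
```

The agent config file would be installed on the instances (typically via the launch template's user data), while the alarm is created once against the aggregated metric.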

How does the Amazon RDS Multi-Availability Zone model work?

A. A second, standby database is deployed and maintained in a different Availability Zone from the master, using synchronous replication.
B. A second, standby database is deployed and maintained in a different Availability Zone from the master, using asynchronous replication.
C. A second, standby database is deployed and maintained in a different region from the master, using asynchronous replication.
D. A second, standby database is deployed and maintained in a different region from the master, using synchronous replication.
Suggested answer: A

Explanation:

In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone.

Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
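Enabling Multi-AZ comes down to a single flag on the instance request. The sketch below shows the parameters that would be passed to rds.create_db_instance with boto3; the identifier, instance class, and credentials are placeholder assumptions.

```python
# Request parameters for a new Multi-AZ RDS instance; with boto3 this
# dict would be passed as rds.create_db_instance(**params).
params = {
    "DBInstanceIdentifier": "example-db",     # assumed identifier
    "Engine": "mysql",
    "DBInstanceClass": "db.t3.medium",        # assumed class
    "AllocatedStorage": 20,
    "MasterUsername": "admin",                # placeholder credentials
    "MasterUserPassword": "change-me",
    "MultiAZ": True,  # provisions a synchronous standby in another AZ
}
print(params["MultiAZ"])
```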

You have been tasked with deploying a scalable distributed system using AWS OpsWorks. Your distributed system is required to scale on demand. As it is distributed, each node must hold a configuration file that includes the hostnames of the other instances within the layer.

How should you configure AWS OpsWorks to manage scaling this application dynamically?

A. Create a Chef recipe to update this configuration file, configure your AWS OpsWorks stack to use custom cookbooks, and assign this recipe to the Configure lifecycle event of the specific layer.
B. Update this configuration file by writing a script to poll the AWS OpsWorks service API for new instances. Configure your base AMI to execute this script on operating system startup.
C. Create a Chef recipe to update this configuration file, configure your AWS OpsWorks stack to use custom cookbooks, and assign this recipe to execute when instances are launched.
D. Configure your AWS OpsWorks layer to use the AWS-provided recipe for distributed host configuration, and configure the instance hostname and file path parameters in your recipe's settings.
Suggested answer: A
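The key to option A is that OpsWorks runs the Configure event on every instance in the stack whenever any instance enters or leaves the online state, passing the current layer topology. The actual recipe would be written in Ruby for Chef; as a language-neutral sketch of what it does, here is the rendering logic in Python (the file path and hostnames are assumptions):

```python
def render_peers_file(layer_instances, path="/etc/myapp/peers.conf"):
    """Regenerate the peer list from the hostnames the stack reports
    for the layer (in Chef this comes from node['opsworks'] attributes)."""
    hostnames = sorted(inst["hostname"] for inst in layer_instances)
    content = "\n".join(hostnames) + "\n"
    return path, content

# Simulated Configure event after a scale-out added a third instance:
_, content = render_peers_file(
    [{"hostname": "app1"}, {"hostname": "app3"}, {"hostname": "app2"}]
)
print(content)
```

Because the event fires on all instances, every node's configuration file is regenerated with the full, current hostname list — which is exactly what options B and C fail to guarantee.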

What flag would you use to limit a Docker container's memory usage to 128 megabytes?

A. -memory 128m
B. -m 128m
C. --memory-reservation 128m
D. -m 128MB
Suggested answer: B

Explanation:

Docker can enforce hard memory limits, which allow the container to use no more than a given amount of user or system memory, or soft limits, which allow the container to use as much memory as it needs unless certain conditions are met, such as when the kernel detects low memory or contention on the host machine. Some of these options have different effects when used alone or when more than one option is set. Most of these options take a positive integer, followed by a suffix of b, k, m, g, to indicate bytes, kilobytes, megabytes, or gigabytes.

Option: -m or --memory=

Description: The maximum amount of memory the container can use. If you set this option, the minimum allowed value is 4m (4 megabytes).

Reference: https://docs.docker.com/engine/admin/resource_constraints/#memory
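The suffix rules quoted above (b, k, m, g; minimum of 4m for the hard limit) can be captured in a small helper. This parser is an illustrative sketch written for this explanation, not part of Docker itself:

```python
SUFFIXES = {"b": 1, "k": 1024, "m": 1024**2, "g": 1024**3}

def parse_memory(value):
    """Parse a Docker-style memory value such as '128m' into bytes."""
    number, suffix = value[:-1], value[-1].lower()
    if suffix not in SUFFIXES:
        raise ValueError(f"unknown suffix: {suffix!r}")
    size = int(number) * SUFFIXES[suffix]
    if size < 4 * 1024**2:  # docker run -m requires at least 4m
        raise ValueError("minimum allowed value for -m is 4m")
    return size

print(parse_memory("128m"))  # 134217728 bytes
```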

You need to create an audit log of all changes to customer banking data. You use DynamoDB to store this customer banking data. It is important not to lose any information due to server failures. What is an elegant way to accomplish this?

A. Use a DynamoDB StreamSpecification and stream all changes to AWS Lambda. Log the changes to AWS CloudWatch Logs, removing sensitive information before logging.
B. Before writing to DynamoDB, do a pre-write acknowledgment to disk on the application server, removing sensitive information before logging. Periodically rotate these log files into S3.
C. Use a DynamoDB StreamSpecification and periodically flush to an EC2 instance store, removing sensitive information before putting the objects. Periodically flush these batches to S3.
D. Before writing to DynamoDB, do a pre-write acknowledgment to disk on the application server, removing sensitive information before logging. Periodically pipe these files into CloudWatch Logs.
Suggested answer: A

Explanation:

All suggested periodic options are sensitive to server failure during or between periodic flushes. Streaming to Lambda and then logging to CloudWatch Logs will make the system resilient to instance and Availability Zone failures.

Reference: http://docs.aws.amazon.com/lambda/latest/dg/with-ddb.html
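A minimal sketch of the Lambda side of option A: the handler receives DynamoDB stream records, strips attributes assumed to be sensitive (the field names here are hypothetical), and logs the rest. Anything a Lambda function prints lands in CloudWatch Logs automatically.

```python
import json

SENSITIVE_FIELDS = {"account_number", "ssn"}  # assumed field names

def handler(event, context=None):
    """Log each DynamoDB stream record with sensitive attributes removed."""
    logged = []
    for record in event.get("Records", []):
        image = record.get("dynamodb", {}).get("NewImage", {})
        redacted = {k: v for k, v in image.items() if k not in SENSITIVE_FIELDS}
        entry = {"event": record.get("eventName"), "data": redacted}
        print(json.dumps(entry))  # stdout -> CloudWatch Logs
        logged.append(entry)
    return logged

# Example stream event carrying one MODIFY record:
sample = {"Records": [{"eventName": "MODIFY",
                       "dynamodb": {"NewImage": {"customer_id": {"S": "42"},
                                                 "account_number": {"S": "111"}}}}]}
```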

A developer is building an application that must allow users to upload images to an Amazon S3 bucket. Users need to be able to sign in to the application using Facebook to upload images. How can these requirements be met?

A. Store a user's Facebook user name and password in an Amazon DynamoDB table. Authenticate against those credentials the next time the user tries to log in.
B. Create an Amazon Cognito identity pool using Facebook as the identity provider. Obtain temporary AWS credentials so a user can access Amazon S3.
C. Create multiple AWS IAM users. Set the email and password to be the same as each user's Facebook login credentials.
D. Create a new Facebook account and store its login credentials in an S3 bucket. Share that S3 bucket with a user. The user will log in to the application using those retrieved credentials.
Suggested answer: B

Explanation:

Reference: https://aws.amazon.com/blogs/mobile/store-your-photos-in-the-cloud-using-amazon-s3/
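Option B's credential exchange boils down to two Cognito calls (with boto3, cognito.get_id followed by cognito.get_credentials_for_identity). The identity pool ID and token below are placeholders; "graph.facebook.com" is the provider key Cognito expects for Facebook logins.

```python
# The Facebook access token returned by the Facebook SDK goes into the
# Logins map under the provider key "graph.facebook.com".
logins = {"graph.facebook.com": "<facebook-access-token>"}  # placeholder token

get_id_params = {
    "IdentityPoolId": "us-east-1:00000000-0000-0000-0000-000000000000",  # placeholder
    "Logins": logins,
}
# get_credentials_for_identity(IdentityId=..., Logins=logins) then returns
# temporary AWS credentials the app uses for the S3 upload.
print(sorted(get_id_params))
```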

A DevOps Engineer is reviewing a system that uses Amazon EC2 instances in an Auto Scaling group. This system uses a configuration management tool that runs locally on each EC2 instance. Because of the volatility of the application load, new instances must be fully functional within 3 minutes of entering a running state. Current setup tasks include:

Installing the configuration management agent – 2 minutes

Installing the application framework – 15 minutes

Copying configuration data from Amazon S3 – 2 minutes

Running the configuration management agent to configure instances – 1 minute

Deploying the application code from Amazon S3 – 2 minutes

How should the Engineer set up the system so it meets the launch time requirement?

A. Trigger an AWS Lambda function from an Amazon CloudWatch Events rule when a new EC2 instance launches. Have the function install the configuration management agent and the application framework, pull configuration data from Amazon S3, run the agent to configure the instance, and deploy the application from S3.
B. Write a bootstrap script to install the configuration management agent, install the application framework, pull configuration data from Amazon S3, run the agent to configure the instance, and deploy the application from S3.
C. Build a custom AMI that includes the configuration management agent and application framework. Write a bootstrap script to pull configuration data from Amazon S3, run the agent to configure the instance, and deploy the application from S3.
D. Build a custom AMI that includes the configuration management agent, application framework, and configuration data. Write a bootstrap script to run the agent to configure the instance and deploy the application from Amazon S3.
Suggested answer: D

Explanation:

Only the tasks left out of the AMI run at launch. Baking the agent, framework, and configuration data into a custom AMI leaves just running the agent (1 minute) and deploying the application code (2 minutes), meeting the 3-minute requirement. Option B repeats all five tasks at boot (22 minutes in total), and option C still needs 5 minutes.
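Whichever setup tasks are baked into the AMI, the work that remains at launch is typically a short user-data bootstrap script. The sketch below composes such a script; the agent path, bucket name, and start command are hypothetical.

```python
# Compose a user-data bootstrap for an AMI that already contains the
# configuration management agent and framework: only configuring the
# instance and deploying the application remain at boot.
bootstrap = "\n".join([
    "#!/bin/bash",
    "set -euo pipefail",
    "/opt/cm-agent/bin/agent --configure  # run config mgmt agent (assumed path)",
    "aws s3 cp s3://example-app-bucket/app.tar.gz /opt/app/  # assumed bucket",
    "tar -xzf /opt/app/app.tar.gz -C /opt/app && /opt/app/start.sh  # assumed entrypoint",
])
print(bootstrap.splitlines()[0])
```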

A company runs an application on one Amazon EC2 instance. Application metadata is stored in Amazon S3 and must be retrieved if the instance is restarted. The instance must restart or relaunch automatically if the instance becomes unresponsive.

Which solution will meet these requirements?

A. Create an Amazon CloudWatch alarm for the StatusCheckFailed metric. Use the recover action to stop and start the instance. Use an S3 event notification to push the metadata to the instance when the instance is back up and running.
B. Configure AWS OpsWorks, and use the auto healing feature to stop and start the instance. Use a lifecycle event in OpsWorks to pull the metadata from Amazon S3 and update it on the instance.
C. Use EC2 Auto Recovery to automatically stop and start the instance in case of a failure. Use an S3 event notification to push the metadata to the instance when the instance is back up and running.
D. Use AWS CloudFormation to create an EC2 instance that includes the UserData property for the EC2 resource. Add a command in UserData to retrieve the application metadata from Amazon S3.
Suggested answer: B

Explanation:

Reference: https://aws.amazon.com/premiumsupport/knowledge-center/opsworks-unexpected-start-instance/

Your CTO has asked you to make sure that you know what all users of your AWS account are doing to change resources at all times. She wants a report of who is doing what over time, reported to her once per week, for as broad a resource type group as possible. How should you do this?

A. Create a global AWS CloudTrail Trail. Configure a script to aggregate the log data delivered to S3 once per week and deliver this to the CTO.
B. Use CloudWatch Events Rules with an SNS topic subscribed to all AWS API calls. Subscribe the CTO to an email type delivery on this SNS Topic.
C. Use AWS IAM credential reports to deliver a CSV of all uses of IAM User Tokens over time to the CTO.
D. Use AWS Config with an SNS subscription on a Lambda, and insert these changes over time into a DynamoDB table. Generate reports based on the contents of this table.
Suggested answer: A

Explanation:

This is the ideal use case for AWS CloudTrail. CloudTrail provides visibility into user activity by recording API calls made on your account. CloudTrail records important information about each API call, including the name of the API, the identity of the caller, the time of the API call, the request parameters, and the response elements returned by the AWS service. This information helps you to track changes made to your AWS resources and to troubleshoot operational issues. CloudTrail makes it easier to ensure compliance with internal policies and regulatory standards.

Reference:

https://aws.amazon.com/cloudtrail/faqs/
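The weekly aggregation script in option A essentially counts CloudTrail records by caller and API. Here is a sketch over already-downloaded records (in practice they would be read from the trail's S3 bucket and the summary mailed or delivered to the CTO):

```python
from collections import Counter

def summarize(records):
    """Count API calls per (identity, event) pair from CloudTrail records."""
    return Counter(
        (r.get("userIdentity", {}).get("userName", "unknown"), r["eventName"])
        for r in records
    )

# Hand-built sample records in CloudTrail's field naming:
sample_records = [
    {"userIdentity": {"userName": "alice"}, "eventName": "RunInstances"},
    {"userIdentity": {"userName": "alice"}, "eventName": "RunInstances"},
    {"userIdentity": {"userName": "bob"}, "eventName": "DeleteBucket"},
]
report = summarize(sample_records)
print(report)
```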

During metric analysis, your team has determined that the company's website is experiencing response times during peak hours that are higher than anticipated. You currently rely on Auto Scaling to make sure that you are scaling your environment during peak windows. How can you improve your Auto Scaling policy to reduce this high response time? (Choose two.)

A. Push custom metrics to CloudWatch to monitor your CPU and network bandwidth from your servers, which will allow your Auto Scaling policy to have better fine-grain insight.
B. Increase your Auto Scaling group's number of max servers.
C. Create a script that runs and monitors your servers; when it detects an anomaly in load, it posts to an Amazon SNS topic that triggers Elastic Load Balancing to add more servers to the load balancer.
D. Push custom metrics to CloudWatch for your application that include more detailed information about your web application, such as how many requests it is handling and how many are waiting to be processed.
E. Update the CloudWatch metric used for your Auto Scaling policy, and enable sub-minute granularity to allow auto scaling to trigger faster.
Suggested answer: B, D
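Option D's custom-metric push maps to the CloudWatch PutMetricData API (with boto3, cloudwatch.put_metric_data). The namespace and metric names below are assumptions for illustration; a scaling policy would then track one of these metrics instead of CPU alone.

```python
# Parameters for publishing application-level metrics an Auto Scaling
# policy could track; with boto3: cloudwatch.put_metric_data(**metric_params).
metric_params = {
    "Namespace": "MyApp",  # assumed custom namespace
    "MetricData": [
        {"MetricName": "RequestsInFlight", "Value": 37, "Unit": "Count"},
        {"MetricName": "RequestQueueDepth", "Value": 12, "Unit": "Count"},
    ],
}
print(len(metric_params["MetricData"]))
```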
Total 557 questions