Amazon DOP-C01 Practice Test - Questions Answers, Page 20

A DevOps Engineer has a single Amazon DynamoDB table that receives shipping orders and tracks inventory. The Engineer has three AWS Lambda functions reading from a DynamoDB stream on that table. The Lambda functions perform various tasks, such as counting items, moving items to Amazon Kinesis Data Firehose, monitoring inventory levels, and creating vendor orders when parts are low. While reviewing logs, the Engineer notices that the Lambda functions occasionally fail under increased load, receiving a stream throttling error. Which is the MOST cost-effective solution that requires the LEAST amount of operational management?

A. Use AWS Glue integration to ingest the DynamoDB stream, then migrate the Lambda code to an AWS Fargate task.
B. Use Amazon Kinesis streams instead of DynamoDB streams, then use Kinesis analytics to trigger the Lambda functions.
C. Create a fourth Lambda function and configure it to be the only Lambda reading from the stream. Then use this Lambda function to pass the payload to the other three Lambda functions.
D. Have the Lambda functions query the table directly and disable DynamoDB streams. Then have the Lambda functions query from a global secondary index.

Suggested answer: C
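
A minimal sketch of the fan-out pattern in option C, assuming three hypothetical worker function names: a single dispatcher function becomes the stream's only consumer and re-invokes the workers asynchronously, so the stream's per-shard reader limit is no longer exceeded.

```python
import json

import boto3

lambda_client = boto3.client("lambda")

# Hypothetical names for the three existing worker functions.
WORKER_FUNCTIONS = ["item-counter", "firehose-forwarder", "inventory-monitor"]


def handler(event, context):
    """Sole consumer of the DynamoDB stream; fans each batch out to workers."""
    payload = json.dumps(event).encode("utf-8")
    for function_name in WORKER_FUNCTIONS:
        # InvocationType="Event" is asynchronous, so a slow worker does not
        # block the others or hold up the stream iterator.
        lambda_client.invoke(
            FunctionName=function_name,
            InvocationType="Event",
            Payload=payload,
        )
```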

In which Docker Swarm model does the swarm manager distribute a specific number of replica tasks among the nodes based upon the scale you set in the desired state?

A. distributed services
B. scaled services
C. replicated services
D. global services

Suggested answer: C

Explanation:

A service is the definition of the tasks to execute on the worker nodes. It is the central structure of the swarm system and the primary root of user interaction with the swarm. When you create a service, you specify which container image to use and which commands to execute inside running containers. In the replicated services model, the swarm manager distributes a specific number of replica tasks among the nodes based on the scale you set in the desired state. For global services, the swarm runs one task for the service on every available node in the cluster. A task carries a Docker container and the commands to run inside the container; it is the atomic scheduling unit of the swarm. Manager nodes assign tasks to worker nodes according to the number of replicas set in the service scale. Once a task is assigned to a node, it cannot move to another node; it can only run on the assigned node or fail.

Reference: https://docs.docker.com/engine/swarm/key-concepts/#services-and-tasks
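
For illustration, a replicated service next to a global one, using standard Docker CLI commands (service and image names are arbitrary):

```bash
# Replicated: the manager schedules exactly the requested number of tasks.
docker service create --name web --replicas 3 nginx:alpine

# Change the desired state; the manager adds or removes tasks to match.
docker service scale web=5

# Global: one task on every available node, with no replica count to set.
docker service create --name metrics --mode global prom/node-exporter
```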

You need to scale an RDS deployment that, based on your logging, is receiving about 10% writes and 90% reads. What is the simplest way to scale it?

A. Create a second master RDS instance and peer the RDS groups.
B. Cache all the database responses on the read side with CloudFront.
C. Create read replicas for RDS since the load is mostly reads.
D. Create a Multi-AZ RDS installation and route read traffic to the standby.

Suggested answer: C

Explanation:

The high-availability feature is not a scaling solution for read-only scenarios; you cannot use a standby replica to serve read traffic. To service read-only traffic, you should use a Read Replica. For more information, see Working with PostgreSQL, MySQL, and MariaDB Read Replicas.

Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
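
For reference, a read replica can be created with a single AWS CLI call (the instance identifiers here are hypothetical):

```bash
# Create a read replica of the existing "mydb" instance and route
# read-only queries to the replica's endpoint.
aws rds create-db-instance-read-replica \
    --db-instance-identifier mydb-read-1 \
    --source-db-instance-identifier mydb
```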

A web application for healthcare services runs on Amazon EC2 instances behind an ELB Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. A DevOps Engineer must create a mechanism in which an EC2 instance can be taken out of production so its system logs can be analyzed for issues to quickly troubleshoot problems on the web tier. How can the Engineer accomplish this task while ensuring availability and minimizing downtime?

A. Implement EC2 Auto Scaling group cooldown periods. Use EC2 instance metadata to determine the instance state, and an AWS Lambda function to snapshot Amazon EBS volumes to preserve system logs.
B. Implement Amazon CloudWatch Events rules. Create an AWS Lambda function that can react to an instance termination to deploy the CloudWatch Logs agent to upload the system and access logs to Amazon S3 for analysis.
C. Terminate the EC2 instances manually. The Auto Scaling service will upload all log information to CloudWatch Logs for analysis prior to instance termination.
D. Implement EC2 Auto Scaling groups with lifecycle hooks. Create an AWS Lambda function that can move an EC2 instance into a standby state, extract logs from the instance through a remote script execution, and place them in an Amazon S3 bucket for analysis.

Suggested answer: D
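
A sketch of the standby step from option D, shown here as the AWS CLI calls a Lambda function would make through the SDK (the instance ID and group name are hypothetical):

```bash
# Take the instance out of service; keeping the desired capacity makes the
# group launch a replacement, so availability is maintained.
aws autoscaling enter-standby \
    --instance-ids i-0123456789abcdef0 \
    --auto-scaling-group-name web-asg \
    --no-should-decrement-desired-capacity

# After the logs are copied to Amazon S3, return the instance to service.
aws autoscaling exit-standby \
    --instance-ids i-0123456789abcdef0 \
    --auto-scaling-group-name web-asg
```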

A healthcare provider has a hybrid architecture that includes 120 on-premises VMware servers running Red Hat and 50 Amazon EC2 instances running Amazon Linux. The company is in the middle of an all-in migration to AWS and wants to implement a solution for collecting information from the on-premises virtual machines and the EC2 instances for data analysis. The information includes:

- Operating system type and version

- Data for installed applications

- Network configuration information, such as MAC and IP addresses

- Amazon EC2 instance AMI ID and IAM profile

How can these requirements be met with the LEAST amount of administration?

A. Write a shell script to run as a cron job on EC2 instances to collect and push the data to Amazon S3. For on-premises resources, use VMware vSphere to collect the data and write it into a file gateway for storing the data in S3. Finally, use Amazon Athena on the S3 bucket for analytics.
B. Use a script on the on-premises virtual machines as well as the EC2 instances to gather and push the data into Amazon S3, and then use Amazon Athena for analytics.
C. Install AWS Systems Manager agents on both the on-premises virtual machines and the EC2 instances. Enable inventory collection and configure resource data sync to an Amazon S3 bucket to analyze the data with Amazon Athena.
D. Use AWS Application Discovery Service for deploying Agentless Discovery Connector in the VMware environment and Discovery Agents on the EC2 instances for collecting the data. Then use the AWS Migration Hub Dashboard for analytics.

Suggested answer: C
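
A sketch of the Systems Manager pieces in option C with the AWS CLI (the sync name, bucket, and Region are hypothetical); inventory gathers the OS, application, and network data listed above:

```bash
# Turn on inventory collection (OS version, installed applications,
# network configuration, and so on) for all managed instances.
aws ssm create-association \
    --name AWS-GatherSoftwareInventory \
    --targets "Key=InstanceIds,Values=*"

# Aggregate the collected inventory into S3, where Athena can query it.
aws ssm create-resource-data-sync \
    --sync-name inventory-sync \
    --s3-destination "BucketName=my-inventory-bucket,SyncFormat=JsonSerDe,Region=us-east-1"
```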

Which is the proper syntax for referencing a variable's value in an Ansible task?

A. ${variable_name}
B. { variable_name }
C. "{{ variable_name }}"
D. @variable_name

Suggested answer: C

Explanation:

A variable's value is referenced by wrapping its name in double curly braces: {{ variable_name }}. However, YAML syntax dictates that a string beginning with a curly brace denotes a dictionary value, so when the expression starts the value, the whole expression must be wrapped in quotes.

Reference: http://docs.ansible.com/ansible/playbooks_variables.html#hey-wait-a-yaml-gotcha
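
A minimal playbook illustrating the gotcha (the variable name and value are arbitrary):

```yaml
- hosts: localhost
  vars:
    app_port: 8080
  tasks:
    # Quoted: without the quotes, YAML would parse the leading "{{"
    # as the start of a dictionary and the play would fail to load.
    - name: Show the configured port
      debug:
        msg: "{{ app_port }}"
```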

The operations team and the development team want a single place to view both operating system and application logs. How should you implement this using AWS services? (Choose two.)

A. Using AWS CloudFormation, create a CloudWatch Logs LogGroup and send the operating system and application logs of interest using the CloudWatch Logs Agent.
B. Using AWS CloudFormation and configuration management, set up remote logging to send events via UDP packets to CloudTrail.
C. Using configuration management, set up remote logging to send events to Amazon Kinesis and insert these into Amazon CloudSearch or Amazon Redshift, depending on available analytic tools.
D. Using AWS CloudFormation, create a CloudWatch Logs LogGroup. Because the CloudWatch Logs agent automatically sends all operating system logs, you only have to configure the application logs for sending off-machine. Using AWS CloudFormation, merge the application logs with the operating system logs, and use IAM Roles to allow both teams to have access to view console output from Amazon EC2.

Suggested answer: A, C
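
For illustration, an awslogs agent configuration that ships one operating system log and one application log into a single log group (the paths, group, and stream names are hypothetical):

```ini
[general]
state_file = /var/lib/awslogs/agent-state

[/var/log/messages]
file = /var/log/messages
log_group_name = team-logs
log_stream_name = {instance_id}/os

[/var/log/app/application.log]
file = /var/log/app/application.log
log_group_name = team-logs
log_stream_name = {instance_id}/app
```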

What is the expected behavior if Ansible is called with ‘ansible-playbook -i localhost playbook.yml’?

A. Ansible will attempt to read the inventory file named ‘localhost’.
B. Ansible will run the plays locally.
C. Ansible will run the playbook on the host named ‘localhost’.
D. Ansible won't run; this is invalid command-line syntax.

Suggested answer: A

Explanation:

Ansible expects an inventory file name with the ‘-i’ option, regardless of whether the argument is also a valid hostname, so here it tries to read an inventory file named ‘localhost’. To have the play execute on the host that ‘localhost’ resolves to, a comma must be appended (‘-i localhost,’), which makes Ansible treat the argument as a literal list of hosts.

Reference: http://docs.ansible.com/ansible/intro_inventory.html#inventory
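
The comma form, for reference; adding a local connection avoids SSH to the machine itself:

```bash
# "localhost," is parsed as a literal one-host inventory, not a file name.
ansible-playbook -i localhost, -c local playbook.yml
```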


After presenting a working proof of concept for a new application that uses Amazon API Gateway, a Developer must set up a team development environment for the project. Due to a tight timeline, the Developer wants to minimize time spent on infrastructure setup and would like to reuse the code repository created for the proof of concept. Currently, all source code is stored in AWS CodeCommit. Company policy mandates having alpha, beta, and production stages with separate Jenkins servers to build code and run tests for every stage. The Development Manager must have the ability to block code propagation between stages at any time. The Security team wants to make sure that users will not be able to modify the environment without permission. How can this be accomplished?

A. Create API Gateway alpha, beta, and production stages. Create a CodeCommit trigger to deploy code to the different stages using an AWS Lambda function.
B. Create API Gateway alpha, beta, and production stages. Create an AWS CodePipeline that pulls code from the CodeCommit repository. Create CodePipeline actions to deploy code to the API Gateway stages.
C. Create Jenkins servers for the alpha, beta, and production stages on Amazon EC2 instances. Create multiple CodeCommit triggers to deploy code to different stages using an AWS Lambda function.
D. Create an AWS CodePipeline pipeline that pulls code from the CodeCommit repository. Create alpha, beta, and production stages with Jenkins servers on CodePipeline.

Suggested answer: D
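
Blocking propagation between stages maps directly to disabling a stage transition in CodePipeline; a sketch with hypothetical pipeline and stage names:

```bash
aws codepipeline disable-stage-transition \
    --pipeline-name api-team-pipeline \
    --stage-name Production \
    --transition-type Inbound \
    --reason "Release hold requested by the Development Manager"
```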

A root account has created an IAM group and defined the policy as:

What will this policy do?

A. Allow this group to view the password policy of all the users added only to that group
B. Allow all the users of IAM to modify their password
C. Allow an IAM user in this group to view the password policy and modify only his/her password
D. Allow this group to view the password policy of all the IAM users

Suggested answer: C

Explanation:

This IAM policy grants access to the ChangePassword action, which lets users use the console, the CLI, or the API to change their passwords. The Resource element uses a policy variable (aws:username), which is useful in policies that are attached to groups. The aws:username key resolves to the name of the current IAM user when a request is made, so each user is allowed to change only his or her own password; the policy does not let users in the group modify other IAM users' passwords.

Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/HowToPwdIAMUser.html
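
The policy document itself is not reproduced on this page; a minimal reconstruction matching the explanation above would look like the following (the Sid values are arbitrary):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ViewPasswordPolicy",
      "Effect": "Allow",
      "Action": "iam:GetAccountPasswordPolicy",
      "Resource": "*"
    },
    {
      "Sid": "ChangeOwnPassword",
      "Effect": "Allow",
      "Action": "iam:ChangePassword",
      "Resource": "arn:aws:iam::*:user/${aws:username}"
    }
  ]
}
```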
