Amazon DOP-C01 Practice Test - Questions Answers, Page 10

A company requires that its internally facing web application be highly available. The architecture is made up of one Amazon EC2 web server instance and one NAT instance that provides outbound internet access for updates and accessing public data.

Which combination of architecture adjustments should the company implement to achieve high availability? (Choose two.)

A. Add the NAT instance to an EC2 Auto Scaling group that spans multiple Availability Zones. Update the route tables.
B. Create additional EC2 instances spanning multiple Availability Zones. Add an Application Load Balancer to split the load between them.
C. Configure an Application Load Balancer in front of the EC2 instance. Configure Amazon CloudWatch alarms to recover the EC2 instance upon host failure.
D. Replace the NAT instance with a NAT gateway in each Availability Zone. Update the route tables.
E. Replace the NAT instances with a NAT gateway that spans multiple Availability Zones. Update the route tables.
Suggested answer: B, D
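
As an illustration of answer D, the sketch below provisions one NAT gateway per Availability Zone and points each AZ-local private route table at it; the subnet, route table, and Availability Zone IDs are hypothetical. Answer B is then handled by an Auto Scaling group and an Application Load Balancer in front of the web tier.

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical per-AZ layout: each AZ gets its own public subnet (for the NAT
    # gateway) and its own private route table (for the web subnets).
    az_layout = {
        "us-east-1a": {"public_subnet": "subnet-aaa111", "private_route_table": "rtb-aaa111"},
        "us-east-1b": {"public_subnet": "subnet-bbb222", "private_route_table": "rtb-bbb222"},
    }

    for az, ids in az_layout.items():
        # One NAT gateway per AZ, each with its own Elastic IP.
        eip = ec2.allocate_address(Domain="vpc")
        nat_gw_id = ec2.create_nat_gateway(
            SubnetId=ids["public_subnet"], AllocationId=eip["AllocationId"]
        )["NatGateway"]["NatGatewayId"]
        ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gw_id])

        # Swap the default route from the old NAT instance to the AZ-local NAT gateway.
        ec2.replace_route(
            RouteTableId=ids["private_route_table"],
            DestinationCidrBlock="0.0.0.0/0",
            NatGatewayId=nat_gw_id,
        )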

A startup company is developing a web application on AWS. It plans to use Amazon RDS for persistence and deploy the application to Amazon EC2 with an Auto Scaling group. The company would also like to separate the environments for development, testing, and production.

What is the MOST secure approach to manage the application configuration?

A. Create a property file to include the configuration and the encrypted passwords. Check in the property file to the source repository, package the property file with the application, and deploy the application. Create an environment tag for the EC2 instances and tag the instances respectively. The application will extract the necessary property values based on the environment tag.
B. Create a property file for each environment to include the environment-specific configuration and an encrypted password. Check in the property files to the source repository. During deployment, use only the environment-specific property file with the application. The application will read the needed property values from the deployed property file.
C. Create a property file for each environment to include the environment-specific configuration. Create a private Amazon S3 bucket and save the property files in the bucket. Save the passwords in the bucket with AWS KMS encryption. During deployment, the application will read the needed property values from the environment-specific property file in the S3 bucket.
D. Create a property file for each environment to include the environment-specific configuration. Create a private Amazon S3 bucket and save the property files in the bucket. Save the encrypted passwords in the AWS Systems Manager Parameter Store. Create an environment tag for the EC2 instances and tag the instances respectively. The application will read the needed property values from the environment-specific property file in the S3 bucket and the parameter store.
Suggested answer: D
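
A minimal sketch of how the application from answer D might load its configuration at startup; the bucket name, object key scheme, and parameter path are hypothetical.

    import json
    import boto3

    ENV = "production"  # in practice, derived from the instance's environment tag

    s3 = boto3.client("s3")
    ssm = boto3.client("ssm")

    # Non-secret, environment-specific configuration lives in a private S3 bucket.
    obj = s3.get_object(Bucket="example-app-config", Key=f"{ENV}/app-config.json")
    config = json.loads(obj["Body"].read())

    # Secrets live in Systems Manager Parameter Store as KMS-encrypted SecureStrings.
    db_password = ssm.get_parameter(
        Name=f"/example-app/{ENV}/db-password", WithDecryption=True
    )["Parameter"]["Value"]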

A DevOps team manages an API running on-premises that serves as a backend for an Amazon API Gateway endpoint. Customers have been complaining about high response latencies, which the development team has verified using the API Gateway latency metrics in Amazon CloudWatch. To identify the cause, the team needs to collect relevant data without introducing additional latency. Which actions should be taken to accomplish this? (Choose two.)

A. Install the CloudWatch agent server side and configure the agent to upload relevant logs to CloudWatch.
B. Enable AWS X-Ray tracing in API Gateway, modify the application to capture request segments, and upload those segments to X-Ray during each request.
C. Enable AWS X-Ray tracing in API Gateway, modify the application to capture request segments, and use the X-Ray daemon to upload segments to X-Ray.
D. Modify the on-premises application to send log information back to API Gateway with each request.
E. Modify the on-premises application to calculate and upload statistical data relevant to the API service requests to CloudWatch metrics.
Suggested answer: C, E
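
For answer E, the application can pre-aggregate latency samples and push them to CloudWatch in a single batched call, keeping the overhead off the request path; the X-Ray segments from answer C are handed to the local X-Ray daemon and uploaded out of band. A sketch of the CloudWatch side, with a hypothetical namespace and dimension:

    import boto3
    from datetime import datetime, timezone

    cloudwatch = boto3.client("cloudwatch")

    def publish_backend_latency(latencies_ms):
        # One batched call per aggregation window instead of one call per request.
        cloudwatch.put_metric_data(
            Namespace="OnPremApi",                              # hypothetical namespace
            MetricData=[{
                "MetricName": "BackendLatency",
                "Dimensions": [{"Name": "Stage", "Value": "prod"}],
                "StatisticValues": {
                    "SampleCount": len(latencies_ms),
                    "Sum": sum(latencies_ms),
                    "Minimum": min(latencies_ms),
                    "Maximum": max(latencies_ms),
                },
                "Unit": "Milliseconds",
                "Timestamp": datetime.now(timezone.utc),
            }],
        )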

A company has a hybrid architecture solution in which some legacy systems remain on-premises, while a specific cluster of servers is moved to AWS. The company cannot reconfigure the legacy systems, so the cluster nodes must have a fixed hostname and local IP address for each server that is part of the cluster. The DevOps Engineer must automate the configuration for a six-node cluster with high availability across three Availability Zones (AZs), placing two elastic network interfaces in a specific subnet for each AZ. Each node's hostname and local IP address should remain the same between reboots or instance failures. Which solution involves the LEAST amount of effort to automate this task?

A. Create an AWS Elastic Beanstalk application and a specific environment for each server of the cluster. For each environment, give the hostname, elastic network interface, and AZ as input parameters. Use the local health agent to name the instance and attach a specific elastic network interface based on the current environment.
B. Create a reusable AWS CloudFormation template to manage an Amazon EC2 Auto Scaling group with a minimum size of 1 and a maximum size of 1. Give the hostname, elastic network interface, and AZ as stack parameters. Use those parameters to set up an EC2 instance with EC2 Auto Scaling and a user data script to attach to the specific elastic network interface. Use CloudFormation nested stacks to nest the template six times for a total of six nodes needed for the cluster, and deploy using the master template.
C. Create an Amazon DynamoDB table with the list of hostnames, subnets, and elastic network interfaces to be used. Create a single AWS CloudFormation template to manage an Auto Scaling group with a minimum size of 6 and a maximum size of 6. Create a programmatic solution that is installed in each instance that will lock/release the assignment of each hostname and local IP address, depending on the subnet in which a new instance will be launched.
D. Create a reusable AWS CLI script to launch each instance individually, which will name the instance, place it in a specific AZ, and attach a specific elastic network interface. Monitor the instances, and in the event of failure, replace the missing instance manually by running the script again.
Suggested answer: B
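
A sketch of what the user data script from answer B might do at boot, assuming the ENI ID and hostname arrive as stack parameters and that IMDSv1 is reachable; the exact mechanism for injecting the values is left open.

    import subprocess
    import urllib.request
    import boto3

    ENI_ID = "eni-0123456789abcdef0"    # hypothetical; one fixed ENI per node/subnet
    HOSTNAME = "cluster-node-1"         # hypothetical; one fixed hostname per node

    # Discover this instance's ID from the instance metadata service (IMDSv1 assumed).
    instance_id = urllib.request.urlopen(
        "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
    ).read().decode()

    # Attach the fixed ENI so the node keeps the same local IP across instance replacements.
    boto3.client("ec2").attach_network_interface(
        NetworkInterfaceId=ENI_ID, InstanceId=instance_id, DeviceIndex=1
    )

    # Keep the same hostname between reboots and failures.
    subprocess.run(["hostnamectl", "set-hostname", HOSTNAME], check=True)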

A company has 100 GB of log data in an Amazon S3 bucket stored in .csv format. SQL developers want to query this data and generate graphs to visualize it. They also need an efficient, automated way to store metadata from the .csv file. Which combination of steps should be taken to meet these requirements with the LEAST amount of effort? (Choose three.)

A. Filter the data through AWS X-Ray to visualize the data.
B. Filter the data through Amazon QuickSight to visualize the data.
C. Query the data with Amazon Athena.
D. Query the data with Amazon Redshift.
E. Use AWS Glue as the persistent metadata store.
F. Use Amazon S3 as the persistent metadata store.
Suggested answer: B, C, F
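
A sketch of the Athena step (answer C), with hypothetical database, table, and results-bucket names; QuickSight can then visualize either the source table or the query results.

    import time
    import boto3

    athena = boto3.client("athena")

    query_id = athena.start_query_execution(
        QueryString="SELECT status, COUNT(*) AS hits FROM access_logs GROUP BY status",
        QueryExecutionContext={"Database": "weblogs"},          # hypothetical database
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]

    # Poll until the query finishes, then read the rows for visualization.
    while True:
        state = athena.get_query_execution(
            QueryExecutionId=query_id
        )["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]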

A company must collect user consent to a privacy agreement. The company deploys an application in six AWS Regions: two Regions in North America, two Regions in Europe, and two Regions in Asia. The application has a user base of 20 million to 30 million users.

The company needs to read and write data that is related to each user’s response. The company also must ensure that the responses are available in all six Regions. Which solution will meet these requirements with the LOWEST latency of reads and writes?

A. Implement Amazon Elasticsearch Service (Amazon ES) in each of the six Regions.
B. Implement Amazon DocumentDB (with MongoDB compatibility) in each of the six Regions.
C. Implement Amazon DynamoDB global tables in each of the six Regions.
D. Implement Amazon ElastiCache for Redis replication groups in each of the six Regions.
Suggested answer: C
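
A sketch of answer C using DynamoDB global tables (version 2019.11.21), with a hypothetical table name and an illustrative set of Regions; each Region then serves reads and writes locally while DynamoDB replicates the consent records to the other five.

    import time
    import boto3

    TABLE = "PrivacyConsent"                                    # hypothetical table name
    home_region = "us-east-1"
    replica_regions = ["us-west-2", "eu-west-1", "eu-central-1",
                       "ap-northeast-1", "ap-southeast-1"]

    dynamodb = boto3.client("dynamodb", region_name=home_region)

    dynamodb.create_table(
        TableName=TABLE,
        AttributeDefinitions=[{"AttributeName": "user_id", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "user_id", "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",
    )
    dynamodb.get_waiter("table_exists").wait(TableName=TABLE)

    # Add the other five Regions one at a time, waiting for the table to return to
    # ACTIVE between replica additions.
    for region in replica_regions:
        dynamodb.update_table(TableName=TABLE,
                              ReplicaUpdates=[{"Create": {"RegionName": region}}])
        while dynamodb.describe_table(TableName=TABLE)["Table"]["TableStatus"] != "ACTIVE":
            time.sleep(10)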

You need to deploy an AWS stack in a repeatable manner across multiple environments. You have selected CloudFormation as the right tool to accomplish this, but have found that a resource type you need to create and model is unsupported by CloudFormation. How should you overcome this challenge?

A. Use a CloudFormation Custom Resource Template by selecting an API call to proxy for create, update, and delete actions. CloudFormation will use the AWS SDK, CLI, or API method of your choosing as the state transition function for the resource type you are modeling.
B. Submit a ticket to the AWS Forums. AWS extends CloudFormation Resource Types by releasing tooling to the AWS Labs organization on GitHub. Their response time is usually 1 day, and they complete requests within a week or two.
C. Instead of depending on CloudFormation, use Chef, Puppet, or Ansible to author Heat templates, which are declarative stack resource definitions that operate over the OpenStack hypervisor and cloud environment.
D. Create a CloudFormation Custom Resource Type by implementing create, update, and delete functionality, either by subscribing a Custom Resource Provider to an SNS topic, or by implementing the logic in AWS Lambda.
Suggested answer: D

Explanation:

Custom resources provide a way for you to write custom provisioning logic in an AWS CloudFormation template and have AWS CloudFormation run it during a stack operation, such as when you create, update, or delete a stack. For more information, see Custom Resources.

Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-customresources.html
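
A minimal sketch of a Lambda-backed custom resource handler (answer D); the actual provisioning calls for the unsupported resource type are left as a placeholder.

    import json
    import urllib.request

    def handler(event, context):
        request_type = event["RequestType"]          # "Create", "Update", or "Delete"
        try:
            # ... call the otherwise-unsupported API here via the AWS SDK ...
            status, reason = "SUCCESS", f"{request_type} handled"
        except Exception as exc:
            status, reason = "FAILED", str(exc)

        # CloudFormation waits for this callback to the pre-signed ResponseURL
        # before it continues (or rolls back) the stack operation.
        body = json.dumps({
            "Status": status,
            "Reason": reason,
            "PhysicalResourceId": event.get("PhysicalResourceId", context.log_stream_name),
            "StackId": event["StackId"],
            "RequestId": event["RequestId"],
            "LogicalResourceId": event["LogicalResourceId"],
            "Data": {},
        }).encode()
        request = urllib.request.Request(
            event["ResponseURL"], data=body, method="PUT", headers={"Content-Type": ""}
        )
        urllib.request.urlopen(request)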

What is required to achieve 10-gigabit network throughput on EC2? You already selected cluster compute, 10 Gbps instances with enhanced networking, and your workload is already network-bound, but you are not seeing 10-gigabit speeds.

A. Enable biplex networking on your servers, so packets are non-blocking in both directions and there's no switching overhead.
B. Ensure the instances are in different VPCs so you don't saturate the Internet Gateway on any one VPC.
C. Select PIOPS for your drives and mount several, so you can provision sufficient disk throughput.
D. Use a placement group for your instances so the instances are physically near each other in the same Availability Zone.
Suggested answer: D

Explanation:

You are not guaranteed 10-gigabit performance, except within a placement group. A placement group is a logical grouping of instances within a single Availability Zone. Using placement groups enables applications to participate in a low-latency, 10 Gbps network. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
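
A sketch of answer D with boto3, using a hypothetical AMI ID and an enhanced-networking instance type; the cluster placement group keeps the instances physically close within one Availability Zone.

    import boto3

    ec2 = boto3.client("ec2")

    # A cluster placement group packs instances close together in a single AZ.
    ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",           # hypothetical AMI
        InstanceType="c5n.18xlarge",                # enhanced networking, high throughput
        MinCount=4,
        MaxCount=4,
        Placement={"GroupName": "hpc-cluster"},
    )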

You use Amazon CloudWatch as your primary monitoring system for your web application. After a recent software deployment, your users are getting intermittent 500 Internal Server Errors when using the web application. You want to create a CloudWatch alarm and notify an on-call engineer when these occur. How can you accomplish this using AWS services? (Choose three.)

A. Deploy your web application as an AWS Elastic Beanstalk application. Use the default Elastic Beanstalk CloudWatch metrics to capture 500 Internal Server Errors. Set a CloudWatch alarm on that metric.
B. Install a CloudWatch Logs Agent on your servers to stream web application logs to CloudWatch.
C. Use Amazon Simple Email Service to notify an on-call engineer when a CloudWatch alarm is triggered.
D. Create a CloudWatch Logs group and define metric filters that capture 500 Internal Server Errors. Set a CloudWatch alarm on that metric.
E. Use Amazon Simple Notification Service to notify an on-call engineer when a CloudWatch alarm is triggered.
F. Use AWS Data Pipeline to stream web application logs from your servers to CloudWatch.
Suggested answer: B, D, E
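
A sketch of answers D and E, assuming a hypothetical log group name, a filter pattern for a space-delimited access-log format, and a hypothetical SNS topic ARN.

    import boto3

    logs = boto3.client("logs")
    cloudwatch = boto3.client("cloudwatch")

    # D: turn "HTTP 500" log lines streamed by the CloudWatch Logs agent into a metric.
    logs.put_metric_filter(
        logGroupName="/webapp/access-logs",
        filterName="http-500-errors",
        filterPattern="[ip, user, timestamp, request, status_code=500, size]",
        metricTransformations=[{
            "metricName": "Http500Count",
            "metricNamespace": "WebApp",
            "metricValue": "1",
        }],
    )

    # E: alarm on that metric and notify the on-call engineer through an SNS topic.
    cloudwatch.put_metric_alarm(
        AlarmName="webapp-500-errors",
        Namespace="WebApp",
        MetricName="Http500Count",
        Statistic="Sum",
        Period=60,
        EvaluationPeriods=1,
        Threshold=5,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        TreatMissingData="notBreaching",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:on-call"],  # hypothetical ARN
    )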

A company is using AWS CodeCommit as its source code repository. After an internal audit, the compliance team mandates that any code change that goes into the master branch must be committed by senior developers. Which solution will meet these requirements?

A. Create two repositories in CodeCommit: one for working and another for the master. Create separate IAM groups for senior developers and developers. Assign resource-level permissions on the repositories tied to the IAM groups. After the code changes are reviewed, sync the approved files to the master CodeCommit repository.
B. Create a repository in CodeCommit. Create separate IAM groups for senior developers and developers. Assign code commit permissions for both groups, with code merge permissions for the senior developers group. Create a trigger to notify senior developers with a URL link to approve or deny commit requests delivered through Amazon SNS. Once a senior developer approves the code, the code gets merged to the master branch.
C. Create a repository in CodeCommit with a working and master branch. Create separate IAM groups for senior developers and developers. Use an IAM policy to assign each IAM group their corresponding branches. Once the code is merged to the working branch, senior developers can pull the changes from the working branch to the master branch.
D. Create a repository in CodeCommit. Create separate IAM groups for senior developers and developers. Use AWS Lambda triggers on the master branch and get the user name of the developer from the event object of the Lambda function. Validate the user name with the IAM group to approve or deny the commit.
Suggested answer: A
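
A sketch of the resource-level permissions from answer A, granting the senior developers group push access to the master repository; the account ID, Region, group, and repository names are hypothetical.

    import json
    import boto3

    iam = boto3.client("iam")

    senior_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Only the senior-developers group may push to the master repository.
                "Effect": "Allow",
                "Action": ["codecommit:GitPush", "codecommit:GitPull"],
                "Resource": "arn:aws:codecommit:us-east-1:123456789012:app-master",
            }
        ],
    }

    iam.put_group_policy(
        GroupName="senior-developers",
        PolicyName="master-repo-access",
        PolicyDocument=json.dumps(senior_policy),
    )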