Amazon DOP-C02 Practice Test - Questions Answers, Page 17

A company's application uses a fleet of Amazon EC2 On-Demand Instances to analyze and process data. The EC2 instances are in an Auto Scaling group. The Auto Scaling group is a target group for an Application Load Balancer (ALB). The application analyzes critical data that cannot tolerate interruption. The application also analyzes noncritical data that can withstand interruption.

The critical data analysis requires quick scalability in response to real-time application demand. The noncritical data analysis is memory intensive. A DevOps engineer must implement a solution that reduces scale-out latency for the critical data. The solution also must process the noncritical data.

Which combination of steps will meet these requirements? (Select TWO.)

A.
For the critical data, modify the existing Auto Scaling group. Create a warm pool instance in the stopped state. Define the warm pool size. Create a new
B.
For the critical data, modify the existing Auto Scaling group. Create a warm pool instance in the stopped state. Define the warm pool size. Create a new
C.
For the critical data, modify the existing Auto Scaling group. Create a lifecycle hook to ensure that bootstrap scripts are completed successfully. Ensure that the application on the instances is ready to accept traffic before the instances are registered. Create a new version of the launch template that has detailed monitoring enabled.
D.
For the noncritical data, create a second Auto Scaling group that uses a launch template. Configure the launch template to install the unified Amazon CloudWatch agent and to configure the CloudWatch agent with a custom memory utilization metric. Use Spot Instances. Add the new Auto Scaling group as the target group for the ALB. Modify the application to use two target groups for critical data and noncritical data.
E.
For the noncritical data, create a second Auto Scaling group. Choose the predefined memory utilization metric type for the target tracking scaling policy. Use Spot Instances. Add the new Auto Scaling group as the target group for the ALB. Modify the application to use two target groups for critical data and noncritical data.
Suggested answer: B, D

Explanation:

For the critical data, using a warm pool can reduce the scale-out latency by having pre-initialized EC2 instances ready to serve the application traffic. Using On-Demand Instances can ensure that the instances are always available and not interrupted by Spot interruptions.

For the noncritical data, using a second Auto Scaling group with Spot Instances can reduce the cost and leverage the unused capacity of EC2. Using a launch template with the CloudWatch agent can enable the collection of memory utilization metrics, which can be used to scale the group based on memory demand. Adding the second group as a target group for the ALB and modifying the application to use two target groups can enable routing the traffic based on the data type.
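As a hedged illustration of the warm pool that option B describes, the pool can be attached to the existing Auto Scaling group with boto3; the group name, pool sizes, and the stopped pool state are assumptions:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Attach a warm pool of pre-initialized, stopped instances to the existing
# Auto Scaling group (group name and sizes are hypothetical).
autoscaling.put_warm_pool(
    AutoScalingGroupName="critical-data-asg",
    PoolState="Stopped",          # instances finish bootstrapping, then wait stopped
    MinSize=2,                    # always keep at least two instances warm
    MaxGroupPreparedCapacity=10,  # cap on in-service plus warm instances
)
```

Because warm pool instances have already completed their launch and bootstrap steps, a scale-out event only has to start them, which removes most of the scale-out latency.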

A growing company manages more than 50 accounts in an organization in AWS Organizations. The company has configured its applications to send logs to Amazon CloudWatch Logs.

A DevOps engineer needs to aggregate logs so that the company can quickly search the logs to respond to future security incidents. The DevOps engineer has created a new AWS account for centralized monitoring.

Which combination of steps should the DevOps engineer take to make the application logs searchable from the monitoring account? (Select THREE.)

A.
In the monitoring account, download an AWS CloudFormation template from CloudWatch to use in Organizations. Use CloudFormation StackSets in the organization's management account to deploy the CloudFormation template to the entire organization.
B.
Create an AWS CloudFormation template that defines an IAM role. Configure the role to allow logs.amazonaws.com to perform the logs:Link action if the aws:ResourceAccount property is equal to the monitoring account ID. Use CloudFormation StackSets in the organization's management account to deploy the CloudFormation template to the entire organization.
C.
Create an IAM role in the monitoring account. Attach a trust policy that allows logs.amazonaws.com to perform the iam:CreateSink action if the aws:PrincipalOrgId property is equal to the organization ID.
D.
In the organization's management account, enable the logging policies for the organization.
E.
Use CloudWatch Observability Access Manager in the monitoring account to create a sink. Allow logs to be shared with the monitoring account. Configure the monitoring account data selection to view the observability data from the organization ID.
F.
In the monitoring account, attach the CloudWatchLogsReadOnlyAccess AWS managed policy to an IAM role that can be assumed to search the logs.
Suggested answer: B, C, F

Explanation:

To aggregate logs from multiple accounts in an organization, the DevOps engineer needs to create a cross-account subscription that allows the monitoring account to receive log events from the sharing accounts.

To enable the cross-account subscription, the DevOps engineer needs to create an IAM role in each sharing account that grants permission to CloudWatch Logs to link the log groups to the destination in the monitoring account. This can be done using a CloudFormation template and StackSets to deploy the role to all accounts in the organization.

The DevOps engineer also needs to create an IAM role in the monitoring account that allows CloudWatch Logs to create a sink for receiving log events from other accounts. The role must have a trust policy that specifies the organization ID as a condition.

Finally, the DevOps engineer needs to attach the CloudWatchLogsReadOnlyAccess policy to an IAM role in the monitoring account that can be used to search the logs from the cross-account subscription.
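The sink mentioned in this explanation is created through CloudWatch Observability Access Manager. A minimal boto3 sketch, assuming a hypothetical sink name and organization ID:

```python
import json
import boto3

oam = boto3.client("oam")  # CloudWatch Observability Access Manager

# Create the sink in the monitoring account.
sink = oam.create_sink(Name="org-logs-sink")

# Allow any account in the organization (placeholder org ID) to link its
# CloudWatch Logs log groups to this sink.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["oam:CreateLink", "oam:UpdateLink"],
        "Resource": "*",
        "Condition": {
            "StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"},
            "ForAllValues:StringEquals": {"oam:ResourceTypes": "AWS::Logs::LogGroup"},
        },
    }],
}
oam.put_sink_policy(SinkIdentifier=sink["Arn"], Policy=json.dumps(policy))
```

Each sharing account (or a StackSets-deployed template) then creates a link to the sink ARN, after which the monitoring account can search the linked log groups.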

A company has microservices running in AWS Lambda that read data from Amazon DynamoDB. The Lambda code is manually deployed by developers after successful testing. The company now needs the tests and deployments to be automated and run in the cloud. Additionally, traffic to the new versions of each microservice should be incrementally shifted over time after deployment.

What solution meets all the requirements, ensuring the MOST developer velocity?

A.
Create an AWS CodePipeline configuration and set up a post-commit hook to trigger the pipeline after tests have passed. Use AWS CodeDeploy and create a canary deployment configuration that specifies the percentage of traffic and interval.
B.
Create an AWS CodeBuild configuration that triggers when the test code is pushed. Use AWS CloudFormation to trigger an AWS CodePipeline configuration that deploys the new Lambda versions and specifies the traffic shift percentage and interval.
C.
Create an AWS CodePipeline configuration and set up the source code step to trigger when code is pushed. Set up the build step to use AWS CodeBuild to run the tests. Set up an AWS CodeDeploy configuration to deploy, then select the CodeDeployDefault.LambdaLinear10PercentEvery3Minutes option.
D.
Use the AWS CLI to set up a post-commit hook that uploads the code to an Amazon S3 bucket after tests have passed. Set up an S3 event trigger that runs a Lambda function that deploys the new version. Use an interval in the Lambda function to deploy the code over time at the required percentage.
Suggested answer: C

Explanation:

https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-configurations.html
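As a hedged sketch of how option C's deployment configuration is used, a deployment can be started with boto3; the application, deployment group, function name, alias, and version numbers are all placeholders:

```python
import boto3

codedeploy = boto3.client("codedeploy")

# AppSpec for a Lambda deployment: shift the "live" alias from version 1 to 2.
appspec = """
version: 0.0
Resources:
  - myMicroservice:
      Type: AWS::Lambda::Function
      Properties:
        Name: my-microservice
        Alias: live
        CurrentVersion: "1"
        TargetVersion: "2"
"""

# CodeDeployDefault.LambdaLinear10PercentEvery3Minutes shifts 10 percent of
# traffic to the new version every three minutes until 100 percent is shifted.
codedeploy.create_deployment(
    applicationName="my-lambda-app",
    deploymentGroupName="my-lambda-dg",
    deploymentConfigName="CodeDeployDefault.LambdaLinear10PercentEvery3Minutes",
    revision={
        "revisionType": "AppSpecContent",
        "appSpecContent": {"content": appspec},
    },
)
```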

A DevOps engineer notices that all Amazon EC2 instances running behind an Application Load Balancer in an Auto Scaling group are failing to respond to user requests. The EC2 instances are also failing target group HTTP health checks.

Upon inspection, the engineer notices the application process was not running in any EC2 instances. There are a significant number of out-of-memory messages in the system logs. The engineer needs to improve the resilience of the application to cope with a potential application memory leak. Monitoring and notifications should be enabled to alert when there is an issue.

Which combination of actions will meet these requirements? (Select TWO.)

A.
Change the Auto Scaling configuration to replace the instances when they fail the load balancer's health checks.
B.
Change the target group health check HealthCheckIntervalSeconds parameter to reduce the interval between health checks.
C.
Change the target group health checks from HTTP to TCP to check if the port where the application is listening is reachable.
D.
Enable the available memory consumption metric within the Amazon CloudWatch dashboard for the entire Auto Scaling group. Create an alarm when the memory utilization is high. Associate an Amazon SNS topic to the alarm to receive notifications when the alarm goes off.
E.
Use the Amazon CloudWatch agent to collect the memory utilization of the EC2 instances in the Auto Scaling group. Create an alarm when the memory utilization is high and associate an Amazon SNS topic to receive a notification.
Suggested answer: A, E

Explanation:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/metrics-collected-by-CloudWatch-agent.html
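Once the unified agent is publishing memory metrics (on Linux, mem_used_percent under the CWAgent namespace by default), option E's alarm can be sketched with boto3; the alarm name, threshold, dimension value, and SNS topic ARN are assumptions:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="asg-high-memory",
    Namespace="CWAgent",                 # default namespace for the unified agent
    MetricName="mem_used_percent",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:memory-alerts"],
)
```

The AutoScalingGroupName dimension assumes the agent configuration uses append_dimensions to aggregate the metric per group.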

A company has a single developer writing code for an automated deployment pipeline. The developer is storing source code in an Amazon S3 bucket for each project. The company wants to add more developers to the team but is concerned about code conflicts and lost work. The company also wants to build a test environment to deploy newer versions of code for testing and allow developers to automatically deploy to both environments when code is changed in the repository.

What is the MOST efficient way to meet these requirements?

A.
Create an AWS CodeCommit repository for each project, use the main branch for production code, and create a testing branch for code deployed to testing. Use feature branches to develop new features and pull requests to merge code to testing and main branches.
B.
Create another S3 bucket for each project for testing code, and use an AWS Lambda function to promote code changes between testing and production buckets. Enable versioning on all buckets to prevent code conflicts.
C.
Create an AWS CodeCommit repository for each project, and use the main branch for production and test code with different deployment pipelines for each environment. Use feature branches to develop new features.
D.
Enable versioning and branching on each S3 bucket, use the main branch for production code, and create a testing branch for code deployed to testing. Have developers use each branch for developing in each environment.
Suggested answer: A

Explanation:

Creating an AWS CodeCommit repository for each project, using the main branch for production code, and creating a testing branch for code deployed to testing will meet the requirements. AWS CodeCommit is a managed revision control service that hosts Git repositories and works with all Git-based tools. By using feature branches to develop new features and pull requests to merge code to testing and main branches, the developers can avoid code conflicts and lost work, and also implement code reviews and approvals. Option B is incorrect because creating another S3 bucket for each project for testing code and using an AWS Lambda function to promote code changes between testing and production buckets will not provide the benefits of revision control, such as tracking changes, branching, merging, and collaborating. Option C is incorrect because using the main branch for production and test code with different deployment pipelines for each environment will not allow the developers to test their code changes before deploying them to production. Option D is incorrect because enabling versioning and branching on each S3 bucket will not work with Git-based tools and will not provide the same level of revision control as AWS CodeCommit.

Reference:

AWS CodeCommit

Certified DevOps Engineer - Professional (DOP-C02) Study Guide (page 182)
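A minimal boto3 sketch of the repository-per-project layout in option A, assuming a hypothetical project name and that an initial commit already exists on main:

```python
import boto3

codecommit = boto3.client("codecommit")

# One repository per project.
codecommit.create_repository(repositoryName="project-alpha")

# After the first commit lands on main, cut the long-lived testing branch
# from the current head of main.
head = codecommit.get_branch(repositoryName="project-alpha", branchName="main")
codecommit.create_branch(
    repositoryName="project-alpha",
    branchName="testing",
    commitId=head["branch"]["commitId"],
)
```

Feature branches and pull requests then flow into testing and main, and each branch can trigger its own deployment pipeline.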

A company manages a multi-tenant environment in its VPC and has configured Amazon GuardDuty for the corresponding AWS account. The company sends all GuardDuty findings to AWS Security Hub.

Traffic from suspicious sources is generating a large number of findings. A DevOps engineer needs to implement a solution to automatically deny traffic across the entire VPC when GuardDuty discovers a new suspicious source.

Which solution will meet these requirements?

A.
Create a GuardDuty threat list. Configure GuardDuty to reference the list. Create an AWS Lambda function that will update the threat list. Configure the Lambda function to run in response to new Security Hub findings that come from GuardDuty.
B.
Configure an AWS WAF web ACL that includes a custom rule group. Create an AWS Lambda function that will create a block rule in the custom rule group. Configure the Lambda function to run in response to new Security Hub findings that come from GuardDuty.
C.
Configure a firewall in AWS Network Firewall. Create an AWS Lambda function that will create a Drop action rule in the firewall policy. Configure the Lambda function to run in response to new Security Hub findings that come from GuardDuty.
D.
Create an AWS Lambda function that will create a GuardDuty suppression rule. Configure the Lambda function to run in response to new Security Hub findings that come from GuardDuty.
Suggested answer: C

Explanation:

https://aws.amazon.com/blogs/security/automatically-block-suspicious-traffic-with-aws-network-firewall-and-amazon-guardduty/
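A hedged sketch of the Lambda function from option C, assuming an existing stateful Suricata rule group; the rule group ARN and the ASFF field path for the remote IP are assumptions that vary by finding type:

```python
import zlib
import boto3

firewall = boto3.client("network-firewall")

# Hypothetical stateful Suricata rule group referenced by the firewall policy.
RULE_GROUP_ARN = "arn:aws:network-firewall:us-east-1:111122223333:stateful-rulegroup/guardduty-block-list"

def handler(event, context):
    """EventBridge target for Security Hub findings that originate from GuardDuty."""
    finding = event["detail"]["findings"][0]
    # The exact ASFF field path depends on the finding type; this is an assumption.
    bad_ip = finding["ProductFields"][
        "aws/guardduty/service/action/networkConnectionAction/remoteIpDetails/ipAddressV4"
    ]

    # Rule group updates use optimistic locking via an update token.
    current = firewall.describe_rule_group(RuleGroupArn=RULE_GROUP_ARN, Type="STATEFUL")
    rule_group = current["RuleGroup"]

    # Append a Suricata drop rule for the suspicious source; the sid is derived
    # from the IP so repeated findings for the same source reuse the same rule ID.
    sid = 1000000 + zlib.crc32(bad_ip.encode()) % 1000000
    rule_group["RulesSource"]["RulesString"] += (
        f'\ndrop ip {bad_ip} any -> any any (msg:"GuardDuty suspicious source"; sid:{sid}; rev:1;)'
    )

    firewall.update_rule_group(
        RuleGroupArn=RULE_GROUP_ARN,
        RuleGroup=rule_group,
        Type="STATEFUL",
        UpdateToken=current["UpdateToken"],
    )
```

Because the firewall inspects traffic for the whole VPC, a Drop rule here denies the suspicious source everywhere, which a WAF web ACL (HTTP only) cannot do.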

An ecommerce company uses a large number of Amazon Elastic Block Store (Amazon EBS) backed Amazon EC2 instances. To decrease manual work across all the instances, a DevOps engineer is tasked with automating restart actions when EC2 instance retirement events are scheduled.

How can this be accomplished?

A.
Create a scheduled Amazon EventBridge rule to run an AWS Systems Manager Automation runbook that checks once a week if any EC2 instances are scheduled for retirement. If an instance is scheduled for retirement, the runbook will hibernate the instance.
B.
Enable EC2 Auto Recovery on all of the instances. Create an AWS Config rule to limit the recovery to occur during a maintenance window only.
C.
Reboot all EC2 instances during an approved maintenance window that is outside of standard business hours. Set up Amazon CloudWatch alarms to send a notification in case any instance is failing EC2 instance status checks.
D.
Set up an AWS Health Amazon EventBridge rule to run AWS Systems Manager Automation runbooks that stop and start the EC2 instance when a retirement scheduled event occurs.
Suggested answer: D

Explanation:

https://aws.amazon.com/blogs/mt/automate-remediation-actions-for-amazon-ec2-notifications-and-beyond-using-ec2-systems-manager-automation-and-aws-health/
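A hedged boto3 sketch of option D's wiring; the rule name, role ARN, target ARN format, and input transformer shape are assumptions to verify, and AWS-RestartEC2Instance is the managed runbook that performs a stop and start:

```python
import boto3

events = boto3.client("events")

# Match AWS Health events for scheduled EC2 instance retirement.
events.put_rule(
    Name="ec2-retirement-restart",
    EventPattern="""{
        "source": ["aws.health"],
        "detail-type": ["AWS Health Event"],
        "detail": {
            "service": ["EC2"],
            "eventTypeCode": ["AWS_EC2_PERSISTENT_INSTANCE_RETIREMENT_SCHEDULED"]
        }
    }""",
)

# Run the AWS-RestartEC2Instance Automation runbook against the affected
# instance (role ARN and automation-definition ARN are hypothetical).
events.put_targets(
    Rule="ec2-retirement-restart",
    Targets=[{
        "Id": "restart-retiring-instance",
        "Arn": "arn:aws:ssm:us-east-1:111122223333:automation-definition/AWS-RestartEC2Instance:$DEFAULT",
        "RoleArn": "arn:aws:iam::111122223333:role/EventBridgeSsmAutomationRole",
        "InputTransformer": {
            "InputPathsMap": {"instance": "$.resources[0]"},
            "InputTemplate": '{"InstanceId": [<instance>]}',
        },
    }],
)
```

A stop and start moves the EBS-backed instance to healthy underlying hardware, which is exactly what a scheduled retirement requires.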


A DevOps engineer is using AWS CodeDeploy across a fleet of Amazon EC2 instances in an EC2 Auto Scaling group. The associated CodeDeploy deployment group, which is integrated with EC2 Auto Scaling, is configured to perform in-place deployments with CodeDeployDefault.OneAtATime. During an ongoing new deployment, the engineer discovers that, although the overall deployment finished successfully, two out of five instances have the previous application revision deployed. The other three instances have the newest application revision.

What is likely causing this issue?

A.
The two affected instances failed to fetch the new deployment.
B.
A failed AfterInstall lifecycle event hook caused the CodeDeploy agent to roll back to the previous version on the affected instances.
C.
The CodeDeploy agent was not installed on the two affected instances.
D.
EC2 Auto Scaling launched two new instances while the new deployment had not yet finished, causing the previous version to be deployed on the affected instances.
Suggested answer: B

Explanation:

When AWS CodeDeploy performs an in-place deployment, it updates the instances with the new application revision one at a time, as specified by the deployment configuration CodeDeployDefault.OneAtATime. If a lifecycle event hook, such as AfterInstall, fails during the deployment, CodeDeploy will attempt to roll back to the previous version on the affected instances. This is likely what happened with the two instances that still have the previous application revision deployed. The failure of the AfterInstall lifecycle event hook triggered the rollback mechanism, resulting in those instances reverting to the previous application revision.

AWS CodeDeploy documentation on redeployment and rollback procedures.

Stack Overflow discussions on re-deploying older revisions with AWS CodeDeploy.

AWS CLI reference guide for deploying a revision.
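To confirm the diagnosis, per-instance lifecycle events can be inspected with boto3 after the deployment completes; the deployment ID is a placeholder:

```python
import boto3

codedeploy = boto3.client("codedeploy")

deployment_id = "d-EXAMPLE111"  # hypothetical deployment ID

# Walk every instance in the deployment and print any failed lifecycle event.
for instance_id in codedeploy.list_deployment_instances(deploymentId=deployment_id)["instancesList"]:
    summary = codedeploy.get_deployment_instance(
        deploymentId=deployment_id, instanceId=instance_id
    )["instanceSummary"]
    for event in summary.get("lifecycleEvents", []):
        if event["status"] == "Failed":
            # A failed AfterInstall hook here confirms the rollback cause.
            print(instance_id, event["lifecycleEventName"], event["status"])
```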

A company recently deployed its web application on AWS. The company is preparing for a large-scale sales event and must ensure that the web application can scale to meet the demand.

The application's frontend infrastructure includes an Amazon CloudFront distribution that has an Amazon S3 bucket as an origin. The backend infrastructure includes an Amazon API Gateway API, several AWS Lambda functions, and an Amazon Aurora DB cluster.

The company's DevOps engineer conducts a load test and identifies that the Lambda functions can fulfill the peak number of requests. However, the DevOps engineer notices request latency during the initial burst of requests. Most of the requests to the Lambda functions produce queries to the database. A large portion of the invocation time is used to establish database connections.

Which combination of steps will provide the application with the required scalability? (Select TWO.)

A.
Configure a higher reserved concurrency for the Lambda functions.
B.
Configure a higher provisioned concurrency for the Lambda functions.
C.
Convert the DB cluster to an Aurora global database. Add additional Aurora Replicas in AWS Regions based on the locations of the company's customers.
D.
Refactor the Lambda functions. Move the code blocks that initialize database connections into the function handlers.
E.
Use Amazon RDS Proxy to create a proxy for the Aurora database. Update the Lambda functions to use the proxy endpoints for database connections.
Suggested answer: B, E

Explanation:

The correct answer is B and E. Configuring a higher provisioned concurrency for the Lambda functions will ensure that the functions are ready to respond to the initial burst of requests without any cold start latency. Using Amazon RDS Proxy to create a proxy for the Aurora database will enable the Lambda functions to reuse existing database connections and reduce the overhead of establishing new ones. This will also improve the scalability and availability of the database by managing the connection pool size and handling failovers. Option A is incorrect because reserved concurrency only limits the number of concurrent executions for a function, and does not pre-warm them. Option C is incorrect because converting the DB cluster to an Aurora global database will not address the issue of database connection latency, and may introduce additional costs and complexity. Option D is incorrect because moving the code blocks that initialize database connections into the function handlers will not improve the performance or scalability of the Lambda functions, and may actually worsen the cold start latency.

Reference:

AWS Lambda Provisioned Concurrency

Using Amazon RDS Proxy with AWS Lambda

Certified DevOps Engineer - Professional (DOP-C02) Study Guide (page 173)
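A hedged boto3 sketch of option B; the function name, alias, and concurrency value are assumptions that would be sized from the load test:

```python
import boto3

lambda_client = boto3.client("lambda")

# Pre-warm execution environments so the initial burst avoids cold starts.
# Provisioned concurrency must target a published version or an alias.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="orders-service",
    Qualifier="live",
    ProvisionedConcurrentExecutions=100,
)
```

For option E, the functions would then connect to the RDS Proxy endpoint instead of the cluster endpoint, letting the proxy pool and reuse connections across invocations.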

A DevOps engineer wants to find a solution to migrate an application from on premises to AWS. The application is running on Linux and needs to run on specific versions of Apache Tomcat, HAProxy, and Varnish Cache to function properly. The application's operating system-level parameters require tuning. The solution must include a way to automate the deployment of new application versions. The infrastructure should be scalable, and faulty servers should be replaced automatically.

Which solution should the DevOps engineer use?

A.
Upload the application as a Docker image that contains all the necessary software to Amazon ECR. Create an Amazon ECS cluster using an AWS Fargate launch type and an Auto Scaling group. Create an AWS CodePipeline pipeline that uses Amazon ECR as a source and Amazon ECS as a deployment provider.
B.
Upload the application code to an AWS CodeCommit repository with a saved configuration file to configure and install the software. Create an AWS Elastic Beanstalk web server tier and a load balanced-type environment that uses the Tomcat solution stack. Create an AWS CodePipeline pipeline that uses CodeCommit as a source and Elastic Beanstalk as a deployment provider.
C.
Upload the application code to an AWS CodeCommit repository with a set of .ebextensions files to configure and install the software. Create an AWS Elastic Beanstalk worker tier environment that uses the Tomcat solution stack. Create an AWS CodePipeline pipeline that uses CodeCommit as a source and Elastic Beanstalk as a deployment provider.
D.
Upload the application code to an AWS CodeCommit repository with an appspec.yml file to configure and install the necessary software. Create an AWS CodeDeploy deployment group associated with an Amazon EC2 Auto Scaling group. Create an AWS CodePipeline pipeline that uses CodeCommit as a source and CodeDeploy as a deployment provider.
Suggested answer: D

Explanation:

The correct answer is D. The scenario requires a solution that can migrate an application from on premises to AWS, run on specific versions of Apache Tomcat, HAProxy, and Varnish Cache, tune the operating system-level parameters, automate the deployment of new application versions, and scale and replace faulty servers automatically. Option D meets all these requirements by using AWS CodeCommit, AWS CodeDeploy, AWS CodePipeline, and Amazon EC2 Auto Scaling. AWS CodeCommit is a fully managed source control service that hosts Git repositories and works with Git-based tools. AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services, including Amazon EC2, AWS Fargate, AWS Lambda, and on-premises servers. AWS CodePipeline is a fully managed continuous delivery service that helps automate the release pipelines for fast and reliable application updates. Amazon EC2 Auto Scaling helps maintain application availability and allows scaling of Amazon EC2 capacity up or down automatically according to the defined conditions. By using these services together, the DevOps engineer can migrate the application code to AWS, configure and install the necessary software using the appspec.yml file, automate the deployment process using the pipeline, and scale and replace the servers using the Auto Scaling group.
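A hedged boto3 sketch of the CodeDeploy pieces of option D; the application name, deployment group name, service role ARN, and Auto Scaling group name are placeholders:

```python
import boto3

codedeploy = boto3.client("codedeploy")

codedeploy.create_application(
    applicationName="legacy-web-app",
    computePlatform="Server",  # EC2/on-premises deployments
)

# Associating the Auto Scaling group means CodeDeploy automatically deploys
# the latest revision to any replacement instances the group launches.
codedeploy.create_deployment_group(
    applicationName="legacy-web-app",
    deploymentGroupName="production",
    serviceRoleArn="arn:aws:iam::111122223333:role/CodeDeployServiceRole",
    autoScalingGroups=["legacy-web-asg"],
    deploymentConfigName="CodeDeployDefault.OneAtATime",
)
```

The appspec.yml in the repository then drives the install and OS-tuning steps through lifecycle hooks such as BeforeInstall and AfterInstall scripts on each instance.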

Option A is incorrect because AWS Fargate is a serverless compute engine for containers that works with Amazon ECS and Amazon EKS. Fargate removes the need to provision and manage servers, but it also limits the ability to tune the operating system-level parameters, which is a requirement in the scenario. Moreover, Fargate does not support HAProxy and Varnish Cache as sidecar containers, which are needed to run the application properly.

Option B is incorrect because AWS Elastic Beanstalk is a fully managed service that automates the deployment and scaling of web applications and services using familiar servers such as Apache, Nginx, Passenger, and IIS. However, Elastic Beanstalk does not support HAProxy and Varnish Cache as part of the Tomcat solution stack, which are needed to run the application properly. Moreover, Elastic Beanstalk web server tier environments are designed to serve HTTP requests, not to process background tasks, which is the purpose of worker tier environments.

Option C is incorrect because AWS Elastic Beanstalk worker tier environments are designed to process background tasks using a daemon process that runs on each Amazon EC2 instance in the environment. Worker tier environments are not suitable for running web applications that serve HTTP requests, which is the case in the scenario. Moreover, Elastic Beanstalk does not support HAProxy and Varnish Cache as part of the Tomcat solution stack, which are needed to run the application properly.

AWS CodeCommit

AWS CodeDeploy

AWS CodePipeline

Amazon EC2 Auto Scaling

AWS Fargate

AWS Elastic Beanstalk
