Amazon DVA-C01 Practice Test - Questions Answers, Page 59


A developer deployed an application to an Amazon EC2 instance. The application needs to know the public IPv4 address of the instance. How can the application find this information?

A. Query the instance metadata from http://169.254.169.254/latest/meta-data/.
B. Query the instance user data from http://169.254.169.254/latest/user-data/.
C. Query the Amazon Machine Image (AMI) information from http://169.254.169.254/latest/meta-data/ami/.
D. Check the hosts file of the operating system.
Suggested answer: A
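
The public IPv4 address is exposed by the instance metadata service at the public-ipv4 path under meta-data. A minimal sketch in Python, assuming the instance uses IMDSv2 (session-token access) and has a public address assigned:

import urllib.request

METADATA_BASE = "http://169.254.169.254/latest"

# IMDSv2: request a short-lived session token first.
token_req = urllib.request.Request(
    f"{METADATA_BASE}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
)
token = urllib.request.urlopen(token_req, timeout=2).read().decode()

# Use the token to read the instance's public IPv4 address.
ip_req = urllib.request.Request(
    f"{METADATA_BASE}/meta-data/public-ipv4",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(ip_req, timeout=2).read().decode())

On instances that still allow IMDSv1, a plain GET against the same path without the token header returns the same value.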

A company uses AWS CloudFormation to deploy an application that uses an Amazon API Gateway REST API with AWS Lambda function integration. The application uses Amazon DynamoDB for data persistence. The application has three stages: development, testing, and production. Each stage uses its own DynamoDB table.

The company has encountered unexpected issues when promoting changes to the production stage.

The changes were successful in the development and testing stages. A developer needs to route 20% of the traffic to the new production stage API with the next production release. The developer needs to route the remaining 80% of the traffic to the existing production stage. The solution must minimize the number of errors that any single customer experiences. Which approach should the developer take to meet these requirements?

A. Update 20% of the planned changes to the production stage. Deploy the new production stage. Monitor the results. Repeat this process five times to test all planned changes.
B. Update the Amazon Route 53 DNS record entry for the production stage API to use a weighted routing policy. Set the weight to a value of 80. Add a second record for the production domain name. Change the second routing policy to a weighted routing policy. Set the weight of the second policy to a value of 20. Change the alias of the second policy to use the testing stage API.
C. Deploy an Application Load Balancer (ALB) in front of the REST API. Change the production API Amazon Route 53 record to point traffic to the ALB. Register the production and testing stages as targets of the ALB with weights of 80% and 20%, respectively.
D. Configure canary settings for the production stage API. Change the percentage of traffic directed to the canary deployment to 20%. Make the planned updates to the production stage. Deploy the changes.
Suggested answer: D

Explanation:

An API Gateway canary release splits a stage's traffic between the current deployment and the canary at a configured ratio, so the developer can send 20% of production traffic to the new release, monitor it, and promote or roll back without affecting most customers. Option B would send 20% of production traffic to the testing stage API, which does not meet the requirement to release to a new production stage.
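
A minimal sketch with boto3 of deploying updated API code as a canary on the production stage; the REST API ID and stage name are hypothetical:

import boto3

apigateway = boto3.client("apigateway")

REST_API_ID = "a1b2c3d4e5"   # hypothetical REST API ID
STAGE_NAME = "prod"          # hypothetical production stage name

# Deploy the updated API as a canary that receives 20% of the stage's
# traffic; the remaining 80% keeps hitting the current deployment.
apigateway.create_deployment(
    restApiId=REST_API_ID,
    stageName=STAGE_NAME,
    canarySettings={"percentTraffic": 20.0},
)

After monitoring, the canary can be promoted by updating the stage to the canary's deployment ID, or deleted to roll back.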

A development team set up a pipeline to launch a test environment. The developers want to automate tests for their application. The team created an AWS CodePipeline stage to deploy the application to a test environment in batches using AWS Elastic Beanstalk. A later CodePipeline stage contains a single action that uses AWS CodeBuild to run numerous automated Selenium-based tests on the deployed application. The team must speed up the pipeline without removing any of the individual tests.

Which set of actions will MOST effectively speed up application deployment and testing?

A. Set up an all-at-once deployment in Elastic Beanstalk. Run tests in parallel with multiple CodeBuild actions.
B. Set up a rolling update in Elastic Beanstalk. Run tests in serial with a single CodeBuild action.
C. Set up an immutable update in Elastic Beanstalk. Run tests in serial with a single CodeBuild action.
D. Set up a traffic-splitting deployment in Elastic Beanstalk. Run tests in parallel with multiple CodeBuild actions.
Suggested answer: A

Explanation:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html

All at once – The quickest deployment method. Suitable if you can accept a short loss of service, and if quick deployments are important to you. With this method, Elastic Beanstalk deploys the new application version to each instance. Then, the web proxy or application server might need to restart. As a result, your application might be unavailable to users (or have low availability) for a short time.
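
The other half of answer A, running the Selenium tests in parallel, relies on CodePipeline's runOrder: actions in the same stage that share a runOrder value execute concurrently. A minimal sketch of such a stage declaration as it would be passed to update_pipeline with boto3; the action and project names are hypothetical:

# Two CodeBuild test actions with the same runOrder execute in parallel.
test_stage = {
    "name": "Test",
    "actions": [
        {
            "name": "SeleniumSuiteA",            # hypothetical action name
            "actionTypeId": {
                "category": "Test",
                "owner": "AWS",
                "provider": "CodeBuild",
                "version": "1",
            },
            "runOrder": 1,
            "configuration": {"ProjectName": "selenium-suite-a"},
            "inputArtifacts": [{"name": "BuildOutput"}],
        },
        {
            "name": "SeleniumSuiteB",            # hypothetical action name
            "actionTypeId": {
                "category": "Test",
                "owner": "AWS",
                "provider": "CodeBuild",
                "version": "1",
            },
            "runOrder": 1,                       # same runOrder -> parallel
            "configuration": {"ProjectName": "selenium-suite-b"},
            "inputArtifacts": [{"name": "BuildOutput"}],
        },
    ],
}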

A company must encrypt sensitive data that the company will store in Amazon S3. A developer must retain total control over the company's AWS Key Management Service (AWS KMS) key and the company’s data keys. The company currently uses an on-premises hardware security module (HSM) solution. The company wants to move its key management onto AWS. Which solution will meet these requirements?

A. Implement server-side encryption with AWS KMS managed keys (SSE-KMS). Use AWS CloudHSM to generate the KMS key and data keys to use with AWS KMS.
B. Implement server-side encryption with customer-provided encryption keys (SSE-C). Use AWS CloudHSM to generate the KMS key and manage the data keys that the company will use to read and write objects to Amazon S3.
C. Implement server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Use AWS CloudHSM to generate the KMS key and manage the data keys that the company will use to read and write objects to Amazon S3.
D. Implement server-side encryption with AWS KMS managed keys (SSE-KMS). Use the AWS KMS custom key store feature to manage the data keys. Then read or write objects to Amazon S3 as normal.
Suggested answer: D

Explanation:

https://docs.aws.amazon.com/cloudhsm/latest/userguide/best-practices.html

Q: Can other AWS services use CloudHSM to store and manage keys? AWS services integrate with AWS Key Management Service, which in turn is integrated with AWS CloudHSM through the KMS custom key store feature. If you want to use the server-side encryption offered by many AWS services (such as EBS, S3, or Amazon RDS), you can do so by configuring a custom key store in AWS KMS.
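
In concrete terms, answer D means creating a KMS key whose material is generated and stored in the CloudHSM-backed custom key store, then using that key for SSE-KMS. A minimal sketch with boto3, assuming the custom key store already exists and is connected; the key store ID and bucket name are hypothetical:

import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Create a KMS key whose material lives in the CloudHSM-backed key store.
key = kms.create_key(
    Origin="AWS_CLOUDHSM",
    CustomKeyStoreId="cks-1234567890abcdef0",  # hypothetical key store ID
    Description="S3 encryption key backed by CloudHSM",
)
key_id = key["KeyMetadata"]["KeyId"]

# Write objects with SSE-KMS; S3 requests data keys from that KMS key.
s3.put_object(
    Bucket="example-sensitive-data",           # hypothetical bucket name
    Key="records/customer.json",
    Body=b"{}",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=key_id,
)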

A media company wants to test its web application more frequently. The company deploys the application by using a separate AWS CloudFormation stack for each environment. The same CloudFormation template is deployed to each stack as the application progresses through the development lifecycle.

A developer needs to build an automated alert for the quality assurance (QA) team. The developer wants the alert to occur for new deployments in the final pre-production environment. Which solution will meet these requirements?

A. Create an Amazon Simple Notification Service (Amazon SNS) topic. Add a subscription to notify the QA team. Update the CloudFormation stack options to point to the SNS topic in the pre-production environment.
B. Create an AWS Lambda function that notifies the QA team. Create an Amazon EventBridge rule to invoke the Lambda function on the default event bus. Filter the events on the CloudFormation service and the CloudFormation stack Amazon Resource Name (ARN).
C. Create an Amazon CloudWatch alarm that monitors the metrics from CloudFormation. Filter the metrics on the stack name and the stack status. Configure the alarm to notify the QA team.
D. Create an AWS Lambda function that notifies the QA team. Configure the event source mapping to receive events from CloudFormation. Specify the filtering values to limit invocations to the desired CloudFormation stack.
Suggested answer: A

Explanation:

https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-rollback-email/

https://www.trendmicro.com/cloudoneconformity/knowledgebase/aws/CloudFormation/cloudformation-stack-notification.html
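
The stack option in answer A is the list of notification ARNs; CloudFormation then publishes every stack event, including new deployments, to the topic. A minimal sketch with boto3; the topic name, email address, and stack name are hypothetical:

import boto3

sns = boto3.client("sns")
cloudformation = boto3.client("cloudformation")

# Topic the QA team subscribes to (name and email are hypothetical).
topic_arn = sns.create_topic(Name="preprod-deployments")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="qa-team@example.com")

# Point the pre-production stack's notification ARNs at the topic;
# CloudFormation publishes all stack events for this stack to it.
cloudformation.update_stack(
    StackName="app-preprod",                   # hypothetical stack name
    UsePreviousTemplate=True,
    NotificationARNs=[topic_arn],
    Capabilities=["CAPABILITY_IAM"],
)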

A developer needs to deploy an application to AWS Elastic Beanstalk for a company. The application consists of a single Docker image. The company's automated continuous integration and continuous delivery (CI/CD) process builds the Docker image and pushes the image to a public Docker registry.

How should the developer deploy the application to Elastic Beanstalk?

A. Create a Dockerfile. Configure Elastic Beanstalk to build the application as a Docker image.
B. Create a docker-compose.yml file. Use the Elastic Beanstalk CLI to deploy the application.
C. Create a .zip file that contains the Docker image. Upload the .zip file to Elastic Beanstalk.
D. Create a Dockerfile. Run the Elastic Beanstalk CLI eb local run command in the same directory.
Suggested answer: B

Explanation:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/docker.html#single-container-docker.deploy-remote

Deploy a remote Docker image to Elastic Beanstalk: After testing your container locally, deploy it to an Elastic Beanstalk environment. Elastic Beanstalk uses the docker-compose.yml file to pull and run your image if you are using Docker Compose. Otherwise, Elastic Beanstalk uses the Dockerrun.aws.json instead.

Use the EB CLI to create an environment and deploy your image.

~/remote-docker$ eb create environment-name
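
Since the CI/CD process already pushes the image to a public registry, the docker-compose.yml only has to reference that image; nothing is built locally. A minimal sketch of such a file, with the image name and port invented for illustration:

services:
  web:
    image: example-org/web-app:latest   # hypothetical image in a public registry
    ports:
      - "80:80"                         # map the host port to the container

Running eb create from the directory that contains this file deploys the environment, as shown above.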

A team of developers is using an AWS CodePipeline pipeline as a continuous integration and continuous delivery (CI/CD) mechanism for a web application. A developer has written unit tests to programmatically test the functionality of the application code. The unit tests produce a test report that shows the results of each individual check. The developer now wants to run these tests automatically during the CI/CD process. Which solution will meet this requirement with the LEAST operational effort?

A. Write a Git pre-commit hook that runs the tests before every commit. Ensure that each developer who is working on the project has the pre-commit hook installed locally. Review the test report and resolve any issues before pushing changes to AWS CodeCommit.
B. Add a new stage to the pipeline. Use AWS CodeBuild as the provider. Add the new stage after the stage that deploys code revisions to the test environment. Write a buildspec that fails the CodeBuild stage if any test does not pass. Use the test reports feature of CodeBuild to integrate the report with the CodeBuild console. View the test results in CodeBuild. Resolve any issues.
C. Add a new stage to the pipeline. Use AWS CodeBuild as the provider. Add the new stage before the stage that deploys code revisions to the test environment. Write a buildspec that fails the CodeBuild stage if any test does not pass. Use the test reports feature of CodeBuild to integrate the report with the CodeBuild console. View the test results in CodeBuild. Resolve any issues.
D. Add a new stage to the pipeline. Use Jenkins as the provider. Configure CodePipeline to use Jenkins to run the unit tests. Write a Jenkinsfile that fails the stage if any test does not pass. Use the test report plugin for Jenkins to integrate the report with the Jenkins dashboard. View the test results in Jenkins. Resolve any issues.
Suggested answer: C

Explanation:

https://aws.amazon.com/blogs/devops/test-reports-with-aws-codebuild/
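
The test reports feature referenced in the answer is driven by a reports section in the buildspec. A minimal buildspec sketch, assuming the unit tests write JUnit-style XML; the test command, report group name, and paths are hypothetical:

version: 0.2
phases:
  build:
    commands:
      # Run the unit tests; a failing test returns non-zero and fails the stage.
      - npm test
reports:
  unit-test-report:                 # hypothetical report group name
    files:
      - "junit.xml"
    base-directory: "test-results"
    file-format: JUNITXML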

A company uses the AWS SDK for JavaScript in the Browser to build a web application and then hosts the application on Amazon S3. The company wants the application to support 10,000 users concurrently. The company selects Amazon DynamoDB to store user preferences in a table. There is a requirement to uniquely identify users at any scale. Which solution will meet these requirements?

A. Create a user cookie. Attach an IAM role to the S3 bucket that hosts the application.
B. Deploy an Amazon CloudFront distribution with an origin access identity (OAI) to access the S3 bucket.
C. Configure and use Amazon Cognito. Access DynamoDB with the authenticated users.
D. Create an IAM user for each user. Use fine-grained access control on the DynamoDB table to control access.
Suggested answer: C

Explanation:

This will allow the application to support 10,000 users concurrently and will provide a unique identifier for each user. By using Amazon Cognito, the company can authenticate users and then access DynamoDB with the authenticated users to store their preferences in a table. This approach will allow the company to control access to the DynamoDB table and to scale to any number of users.

Creating a user cookie or deploying an Amazon CloudFront distribution with an OAI would not solve the problem because these solutions do not provide a way to uniquely identify users or control access to DynamoDB. Creating an IAM user for each user and using fine-grained access control on the DynamoDB table would not be practical or scalable because it would require the company to manage and maintain a large number of IAM users.

When dealing with user profiles in serverless applications, we often turn to Cognito for managing their credentials while the app itself will store user entities.

https://www.sorenandersen.com/manage-user-profile-data-between-cognito-and-dynamodb/
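
The browser application uses the AWS SDK for JavaScript, but the credential flow is the same in any SDK. A minimal sketch in Python, assuming an existing Cognito identity pool that allows guest (unauthenticated) identities; the pool ID and table name are hypothetical:

import boto3

# Hypothetical identity pool that the web application is configured with.
IDENTITY_POOL_ID = "us-east-1:11111111-2222-3333-4444-555555555555"

cognito = boto3.client("cognito-identity", region_name="us-east-1")

# Each user gets a unique, stable identity ID from the pool.
identity_id = cognito.get_id(IdentityPoolId=IDENTITY_POOL_ID)["IdentityId"]

# Exchange the identity for temporary, scoped AWS credentials.
creds = cognito.get_credentials_for_identity(IdentityId=identity_id)["Credentials"]

# Access DynamoDB as that identity; fine-grained IAM policies can key on it.
dynamodb = boto3.client(
    "dynamodb",
    region_name="us-east-1",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)
dynamodb.put_item(
    TableName="UserPreferences",               # hypothetical table name
    Item={"userId": {"S": identity_id}, "theme": {"S": "dark"}},
)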

A developer deploys an ecommerce application on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto Scaling group. The EC2 instances are based on an Amazon Machine Image (AMI) that uses an Amazon Elastic Block Store (Amazon EBS) root volume. After deployment, the developer notices that a third of the instances seem to be idle. These instances are not receiving requests from the load balancer. The developer verifies that all the instances are registered with the load balancer. The developer must implement a solution to allow the EC2 instances to receive requests from the load balancer.

Which action will meet this requirement?

A. Reregister the failed instances with the ALB.
B. Enable all Availability Zones for the ALB.
C. Use the instance refresh feature to redeploy the EC2 Auto Scaling group.
D. Restart the EC2 instances that are not receiving traffic.
Suggested answer: B

Explanation:

The idle instances are registered with the target group but run in Availability Zones that are not enabled on the load balancer. The Elastic Load Balancing documentation notes that if you register targets in an Availability Zone but do not enable that zone, the registered targets do not receive traffic from the load balancer. Enabling all Availability Zones that the Auto Scaling group spans lets the ALB route requests to every instance. An instance refresh replaces instances but would not change which zones the ALB serves.
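
With the SDK, enabling an Availability Zone on an ALB means attaching a subnet from that zone. A minimal sketch with boto3; the load balancer ARN and subnet IDs are hypothetical:

import boto3

elbv2 = boto3.client("elbv2")

# Attach one subnet per AZ that the Auto Scaling group spans.
elbv2.set_subnets(
    LoadBalancerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/app/my-alb/50dc6c495c0c9188"
    ),
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"],
)
# The ALB now has a node in each zone and routes to targets in all three.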

A company is developing a microservice that will manage customer account data in an Amazon DynamoDB table. Insert, update, and delete requests will be rare. Read traffic will be heavy. The company must have the ability to access customer data quickly by using a customer ID. The microservice can tolerate stale data.

Which solution will meet these requirements with the FEWEST possible read capacity units (RCUs)?

A. Read the table by using eventually consistent reads.
B. Read the table by using strongly consistent reads.
C. Read the table by using transactional reads.
D. Read the table by using strongly consistent PartiQL queries.
Suggested answer: A

Explanation:

Key points: "read heavy", "access data quickly", and "can tolerate stale data", with the goal of using the FEWEST possible RCUs. For items up to 4 KB in size, one RCU can perform one strongly consistent read request per second, or two eventually consistent read requests per second. Transactional read requests require two RCUs to perform one read per second for items up to 4 KB.

For example, a strongly consistent read of an 8 KB item would require two RCUs, an eventually consistent read of an 8 KB item would require one RCU, and a transactional read of an 8 KB item would require four RCUs.

https://aws.amazon.com/dynamodb/pricing/provisioned/
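
A minimal sketch of the suggested approach: an eventually consistent GetItem keyed on the customer ID. The table and attribute names are hypothetical; ConsistentRead=False is DynamoDB's default and is spelled out only to make the cost trade-off explicit:

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("CustomerAccounts")     # hypothetical table name

# Eventually consistent read: half the RCU cost of a strongly
# consistent read, acceptable because stale data is tolerated.
response = table.get_item(
    Key={"customerId": "c-123"},               # hypothetical customer ID
    ConsistentRead=False,
)
item = response.get("Item")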
