ExamGecko
Amazon DVA-C01 Practice Test - Questions Answers, Page 60


A developer is managing an application that uploads user files to an Amazon S3 bucket named companybucket. The company wants to maintain copies of all the files uploaded by users for compliance purposes, while ensuring users still have access to the data through the application.

Which IAM permissions should be applied to users to ensure they can create but not remove files from the bucket?

A.
B.
C.
D.
Suggested answer: D

Explanation:

To accomplish: "can create but not remove files"

-- Need: "Put Object"

-- Don't need: "Delete Object"

https://docs.aws.amazon.com/cli/latest/reference/s3api/put-object.html
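The distinction can be sketched as an IAM policy document. This is a minimal illustration, not the exam's actual option text: the bucket name companybucket comes from the question, and the exact statement layout (including the s3:GetObject grant for read access) is an assumption.

```python
import json

# Minimal IAM policy sketch: users get s3:PutObject to create files and
# s3:GetObject to read them back, but the policy deliberately omits
# s3:DeleteObject, so uploaded files cannot be removed.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject"],
            "Resource": "arn:aws:s3:::companybucket/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Because IAM denies by default, simply leaving s3:DeleteObject out of the Action list is enough; no explicit Deny statement is needed.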

A company is running its website on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto Scaling group. A developer needs to secure the internet-facing connection with HTTPS. The developer uses AWS Certificate Manager (ACM) to issue an X.509 certificate.

What should the developer do to secure the connection?

A.
Configure the ALB to use the X.509 certificate by using the AWS Management Console.
B.
Configure each EC2 instance to use the same X.509 certificate by using the AWS Management Console.
C.
Export the root key of the X.509 certificate to an Amazon S3 bucket. Configure each EC2 instance to use the same X.509 certificate from the S3 bucket.
D.
Export the root key of the X.509 certificate to an Amazon S3 bucket. Configure the ALB to use the X.509 certificate from the S3 bucket.
Suggested answer: A

Explanation:

https://aws.amazon.com/premiumsupport/knowledge-center/configure-acm-certificates-ec2/

https://aws.amazon.com/premiumsupport/knowledge-center/associate-acm-certificate-alb-nlb/

Configuring an Amazon Issued ACM public certificate for a website that's hosted on an EC2 instance requires exporting the certificate. However, you can't export the certificate because ACM manages the private key that signs and creates the certificate.

Instead, you can associate an ACM certificate with a load balancer, or an ACM SSL/TLS certificate with a CloudFront distribution.

To associate an ACM SSL certificate with an Application Load Balancer:

Open the Amazon EC2 console.

In the navigation pane, choose Load Balancers, and then choose your Application Load Balancer.

Choose Add listener.

For Protocol, choose HTTPS.

For Port, choose 443.

For Default action(s), choose Forward to, and then select your ALB target group from the dropdown list.

For Default SSL certificate, choose From ACM (recommended), and then choose the ACM certificate.

Choose Save.
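The same console steps can be sketched as parameters for the Elastic Load Balancing create-listener API. All ARNs below are placeholders, and the boto3 call itself is left commented out because it needs AWS credentials and a live ALB.

```python
# Sketch of an HTTPS listener on an ALB using an ACM certificate.
# Every ARN here is a made-up placeholder.
listener_params = {
    "LoadBalancerArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",
    "Protocol": "HTTPS",
    "Port": 443,
    "Certificates": [
        {"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/example-id"}
    ],
    "DefaultActions": [
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/def456",
        }
    ],
}

# A real call would be:
# import boto3
# boto3.client("elbv2").create_listener(**listener_params)
```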

A developer deploys a custom application to three Amazon EC2 instances. The application processes messages from an Amazon Simple Queue Service (Amazon SQS) standard queue with default settings. When the developer runs a load test on the Amazon SQS queue, the developer discovers that the application processes many messages multiple times.

How can the developer ensure that the application processes each message exactly once?

A.
Modify the SQS standard queue to an SQS FIFO queue.
B.
Process the messages on one EC2 instance instead of three instances.
C.
Create a new SQS FIFO queue. Point the application to the new queue.
D.
Increase the DelaySeconds value on the current SQS queue.
Suggested answer: C

Explanation:

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues-moving.html

Moving from a standard queue to a FIFO queue:

If you have an existing application that uses standard queues and you want to take advantage of the ordering or exactly-once processing features of FIFO queues, you need to configure the queue and your application correctly. Note:

You can't convert an existing standard queue into a FIFO queue. To make the move, you must either create a new FIFO queue for your application or delete your existing standard queue and recreate it as a FIFO queue.
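Creating the new FIFO queue from answer C can be sketched as follows. The queue and group names are illustrative; a FIFO queue name must end in ".fifo", and exactly-once processing relies on either content-based deduplication or an explicit MessageDeduplicationId on each send.

```python
# Sketch: parameters for a new SQS FIFO queue and a message send.
create_params = {
    "QueueName": "orders.fifo",  # FIFO queue names must end in ".fifo"
    "Attributes": {
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",  # dedupe by body hash
    },
}
send_params = {
    "MessageBody": "order-42",
    "MessageGroupId": "orders",  # messages in one group are delivered in order
}

# With boto3 this would be:
# import boto3
# sqs = boto3.client("sqs")
# queue_url = sqs.create_queue(**create_params)["QueueUrl"]
# sqs.send_message(QueueUrl=queue_url, **send_params)
```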

A company has a new application. The company needs to secure sensitive configuration data such as database connection strings, application license codes, and API keys that the application uses to access external resources. The company must track access to the configuration data for auditing purposes. The resources are managed outside the application. The company is not required to manage rotation of the connection strings, license codes, and API keys in the application. The company must implement a solution to securely store the configuration data and to give the application access to the configuration data. The solution must comply with security best practices.

Which solution will meet these requirements MOST cost-effectively?

A.
Store the configuration data in an encrypted file on the source code bundle. Grant the application access by using IAM policies.
B.
Store the configuration data in AWS Systems Manager Parameter Store. Grant the application access by using IAM policies.
C.
Store the configuration data on an Amazon Elastic Block Store (Amazon EBS) encrypted volume. Attach the EBS volume to an Amazon EC2 instance to provide the application with access to the data.
D.
Store the configuration data in AWS Secrets Manager. Grant the application access by using IAM policies.
Suggested answer: B

Explanation:

https://aws.amazon.com/blogs/mt/the-right-way-to-store-secrets-using-parameter-store/

https://docs.aws.amazon.com/managedservices/latest/userguide/sys-man-param-store.html

AWS Systems Manager Parameter Store:

AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, and license codes as parameter values.
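A minimal sketch of storing and reading a secret through Parameter Store, assuming a hypothetical parameter name and value. SecureString values are encrypted at rest, and parameter reads are logged by AWS CloudTrail, which covers the auditing requirement.

```python
# Sketch: write a SecureString parameter, then read it back decrypted.
# The parameter name and value are hypothetical.
put_params = {
    "Name": "/myapp/db/connection-string",
    "Value": "Server=db.internal;Database=app",
    "Type": "SecureString",  # encrypted with an AWS KMS key
}
get_params = {
    "Name": "/myapp/db/connection-string",
    "WithDecryption": True,  # required to read SecureString values in plaintext
}

# With boto3 this would be:
# import boto3
# ssm = boto3.client("ssm")
# ssm.put_parameter(**put_params)
# value = ssm.get_parameter(**get_params)["Parameter"]["Value"]
```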

A business intelligence application runs on Amazon Elastic Container Service (Amazon ECS) on AWS Fargate. Application-level audits require a searchable log of all API calls from users to the application. The application’s developers must store the logs centrally on AWS.

Which solution will meet these requirements?

A.
Install the Amazon CloudWatch agent on the Amazon EC2 host that runs Fargate.
B.
Configure the awslogs log driver in the ECS task definition.
C.
Configure AWS CloudTrail for the ECS containers.
D.
Install the ECS logs collector on the ECS hosts.
Suggested answer: B

Explanation:

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html

Configuring the awslogs log driver in the ECS task definition will allow the application to store the logs centrally on AWS. The awslogs log driver sends logs to Amazon CloudWatch Logs, which is a managed service that provides search and analysis of log data. This solution will meet the requirements of storing the logs centrally on AWS and making them searchable. Installing the Amazon CloudWatch agent on the Amazon EC2 host or installing the ECS logs collector on the ECS hosts will not work because the application is running on AWS Fargate and not on Amazon EC2. AWS CloudTrail is not a suitable solution because it is used to record API calls made to AWS services, not application-level API calls.
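The relevant fragment of a Fargate container definition might look like the sketch below. The container name, image URI, log group, and region are placeholders; the awslogs driver ships the container's stdout and stderr to CloudWatch Logs.

```python
# Sketch: logConfiguration block of an ECS/Fargate container definition
# using the awslogs log driver. Names and region are placeholders.
container_definition = {
    "name": "bi-app",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/bi-app:latest",
    "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": "/ecs/bi-app",          # CloudWatch Logs group
            "awslogs-region": "us-east-1",
            "awslogs-stream-prefix": "ecs",          # prefix per task stream
        },
    },
}
```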

A developer is deploying a company's application to Amazon EC2 instances. The application generates gigabytes of data files each day. The files are rarely accessed, but the files must be available to the application’s users within minutes of a request during the first year of storage. The company must retain the files for 7 years.

How can the developer implement the application to meet these requirements MOST cost-effectively?

A.
Store the files in an Amazon S3 bucket. Use the S3 Glacier Instant Retrieval storage class. Create an S3 Lifecycle policy to transition the files to the S3 Glacier Deep Archive storage class after 1 year.
B.
Store the files in an Amazon S3 bucket. Use the S3 Standard storage class. Create an S3 Lifecycle policy to transition the files to the S3 Glacier Flexible Retrieval storage class after 1 year.
C.
Store the files on an Amazon Elastic Block Store (Amazon EBS) volume. Use Amazon Data Lifecycle Manager (Amazon DLM) to create snapshots of the EBS volumes and to store those snapshots in Amazon S3.
D.
Store the files on an Amazon Elastic File System (Amazon EFS) mount. Configure EFS lifecycle management to transition the files to the EFS Standard-Infrequent Access (Standard-IA) storage class after 1 year.
Suggested answer: A

Explanation:

Amazon S3 Glacier Instant Retrieval is an archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds. With S3 Glacier Instant Retrieval, you can save up to 68% on storage costs compared to using the S3 Standard-Infrequent Access (S3 Standard-IA) storage class, when your data is accessed once per quarter.

https://aws.amazon.com/s3/storage-classes/glacier/instant-retrieval/
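Answer A can be sketched as follows, with illustrative bucket and key names: objects are uploaded with the GLACIER_IR storage class, and a lifecycle rule moves them to Glacier Deep Archive after 365 days for the remaining 6 years of retention.

```python
# Sketch: lifecycle rule transitioning all objects to Glacier Deep Archive
# after 1 year. Bucket and key names are illustrative.
lifecycle_config = {
    "Rules": [
        {
            "ID": "deep-archive-after-1-year",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply the rule to every object
            "Transitions": [
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }
    ]
}

# With boto3 this would be:
# import boto3
# s3 = boto3.client("s3")
# s3.put_object(Bucket="files", Key="report.csv", Body=b"...",
#               StorageClass="GLACIER_IR")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="files", LifecycleConfiguration=lifecycle_config)
```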

A developer is testing a new file storage application that uses an Amazon CloudFront distribution to serve content from an Amazon S3 bucket. The distribution accesses the S3 bucket by using an origin access identity (OAI). The S3 bucket's permissions explicitly deny access to all other users.

The application prompts users to authenticate on a login page and then uses signed cookies to allow users to access their personal storage directories. The developer has configured the distribution to use its default cache behavior with restricted viewer access and has set the origin to point to the S3 bucket. However, when the developer tries to navigate to the login page, the developer receives a 403 Forbidden error.

The developer needs to implement a solution to allow unauthenticated access to the login page. The solution also must keep all private content secure.

Which solution will meet these requirements?

A.
Add a second cache behavior to the distribution with the same origin as the default cache behavior. Set the path pattern for the second cache behavior to the path of the login page, and make viewer access unrestricted. Keep the default cache behavior's settings unchanged.
B.
Add a second cache behavior to the distribution with the same origin as the default cache behavior. Set the path pattern for the second cache behavior to *, and make viewer access restricted. Change the default cache behavior's path pattern to the path of the login page, and make viewer access unrestricted.
C.
Add a second origin as a failover origin to the default cache behavior. Point the failover origin to the S3 bucket. Set the path pattern for the primary origin to *, and make viewer access restricted. Set the path pattern for the failover origin to the path of the login page, and make viewer access unrestricted.
D.
Add a bucket policy to the S3 bucket to allow read access. Set the resource on the policy to the Amazon Resource Name (ARN) of the login page object in the S3 bucket. Add a CloudFront function to the default cache behavior to redirect unauthorized requests to the login page's S3 URI.
Suggested answer: B

Explanation:

Adding a second cache behavior to the distribution with the same origin as the default cache behavior and setting the path pattern to * will allow access to all files in the S3 bucket. Changing the default cache behavior's path pattern to the path of the login page and making viewer access unrestricted will allow unauthenticated users to access the login page, while keeping all other private content secure.
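One way to picture the resulting pair of cache behaviors is the sketch below. The login path and key group ID are placeholders: the behavior matching the login page leaves viewer access unrestricted, while the catch-all behavior requires signed cookies through a trusted key group, and CloudFront evaluates the more specific path pattern first.

```python
# Sketch: two CloudFront cache behaviors. Path and key group ID are
# placeholders, not values from the question.
login_behavior = {
    "PathPattern": "/login*",
    # No trusted key groups: unauthenticated viewers may fetch the login page.
    "TrustedKeyGroups": {"Enabled": False, "Quantity": 0},
}
catch_all_behavior = {
    "PathPattern": "*",
    # Signed cookies required: viewers must hold a cookie signed by a key
    # in this key group to reach private content.
    "TrustedKeyGroups": {
        "Enabled": True,
        "Quantity": 1,
        "Items": ["example-key-group-id"],
    },
}
```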

A developer creates a web service that performs many critical activities. The web service code uses an AWS SDK to publish noncritical metrics to Amazon CloudWatch by using the PutMetricData API. The web service must return results to the caller as quickly as possible. The response data from the PutMetricData API is not necessary to create the web service response.

Which solution will MOST improve the response time of the web service?

A.
Upgrade to the latest version of the AWS SDK.
B.
Call the PutMetricData API in a background thread.
C.
Use the AWS SDK to perform a synchronous call to an AWS Lambda function. Call the PutMetricData API within the Lambda function.
D.
Send metric data to an Amazon Simple Queue Service (Amazon SQS) queue. Configure an AWS Lambda function with the queue as the event source. Call the PutMetricData API within the Lambda function.
Suggested answer: D

Explanation:

https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html#invocation-async-api
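The decoupling in answer D can be sketched as two small handlers. The queue URL, namespace, and metric names below are hypothetical: the web service only enqueues the metric payload, which is fast, and a Lambda function with the queue as its event source calls PutMetricData later, off the request path.

```python
import json

# Sketch: web service enqueues metrics; a Lambda consumer publishes them.
# The actual SQS and CloudWatch calls are commented out because they need
# AWS credentials; the surrounding logic runs as-is.

def handle_request():
    # Fast path: enqueue the metric and return to the caller immediately.
    metric = {"MetricName": "RequestCount", "Value": 1}
    # sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(metric))
    return {"status": "ok"}

def lambda_handler(event, context):
    # SQS event source delivers batches of records; publish each one.
    for record in event["Records"]:
        metric = json.loads(record["body"])
        # cloudwatch.put_metric_data(Namespace="WebService", MetricData=[metric])
    return {"processed": len(event["Records"])}
```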

A developer is designing a serverless application that customers use to select seats for a concert venue. Customers send the ticket requests to an Amazon API Gateway API with an AWS Lambda function that acknowledges the order and generates an order ID. The application includes two additional Lambda functions: one for inventory management and one for payment processing. These two Lambda functions run in parallel and write the order to an Amazon DynamoDB table.

The application must provide seats to customers according to the following requirements. If a seat is accidentally sold more than once, the first order that the application received must get the seat. In these cases, the application must process the payment for only the first order. However, if the first order is rejected during payment processing, the second order must get the seat. In these cases, the application must process the payment for the second order.

Which solution will meet these requirements?

A.
Send the order ID to an Amazon Simple Notification Service (Amazon SNS) FIFO topic that fans out to one Amazon Simple Queue Service (Amazon SQS) FIFO queue for inventory management and another SQS FIFO queue for payment processing.
B.
Change the Lambda function that generates the order ID to initiate the Lambda function for inventory management. Then initiate the Lambda function for payment processing.
C.
Send the order ID to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the Lambda functions for inventory management and payment processing to the topic.
D.
Deliver the order ID to an Amazon Simple Queue Service (Amazon SQS) queue. Configure the Lambda functions for inventory management and payment processing to poll the queue.
Suggested answer: A

Explanation:

The inventory management and payment processing functions run in parallel, so the fan-out pattern fits: an SNS FIFO topic delivers the order ID to both SQS FIFO queues while preserving ordering and exactly-once delivery for each consumer.

https://docs.aws.amazon.com/sns/latest/dg/sns-common-scenarios.html
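Publishing the order ID to the SNS FIFO topic from answer A might look like the sketch below. The topic ARN is a placeholder: both subscribed SQS FIFO queues receive the same ordered stream, and MessageDeduplicationId drops accidental duplicate publishes.

```python
# Sketch: publish an order ID to an SNS FIFO topic. The topic ARN is a
# made-up placeholder; FIFO topic names must end in ".fifo".
order_id = "order-1001"
publish_params = {
    "TopicArn": "arn:aws:sns:us-east-1:123456789012:orders.fifo",
    "Message": order_id,
    "MessageGroupId": "seat-orders",       # ordering is preserved per group
    "MessageDeduplicationId": order_id,    # same ID within the dedupe window
                                           # is delivered only once
}

# With boto3 this would be:
# import boto3
# boto3.client("sns").publish(**publish_params)
```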

An open-source map application gathers data from several geolocation APIs. The application's source code repository is public and can be used by anyone, but the geolocation APIs must not be directly accessible. A developer must implement a solution to prevent the credentials that are used to access the APIs from becoming public. The solution also must ensure that the application still functions properly.

Which solution will meet these requirements MOST cost-effectively?

A.
Store the credentials in AWS Secrets Manager. Retrieve the credentials by using the GetSecretValue API operation.
B.
Store the credentials in AWS Key Management Service (AWS KMS). Retrieve the credentials by using the GetPublicKey API operation.
C.
Store the credentials in AWS Security Token Service (AWS STS). Retrieve the credentials by using the GetCallerIdentity API operation.
D.
Store the credentials in AWS Systems Manager Parameter Store. Retrieve the credentials by using the GetParameter API operation.
Suggested answer: D

Explanation:

Secrets Manager is paid: storage costs $0.40 per secret per month, and API interactions cost $0.05 per 10,000 API calls.

Parameter Store: for standard parameters, there is no additional charge for storage and standard throughput. With higher throughput enabled, API interactions cost $0.05 per 10,000 API calls. For advanced parameters, storage costs $0.05 per advanced parameter per month, and API interactions cost $0.05 per 10,000 API calls.

https://aws.amazon.com/systems-manager/pricing/

Total 608 questions