
Amazon DVA-C02 Practice Test - Questions Answers, Page 12


A developer is troubleshooting an application that uses Amazon DynamoDB in the us-west-2 Region.

The application is deployed to an Amazon EC2 instance. The application requires read-only permissions to a table that is named Cars. The EC2 instance has an attached IAM role that contains the following IAM policy.

When the application tries to read from the Cars table, an Access Denied error occurs.

How can the developer resolve this error?

A. Modify the IAM policy resource to be "arn:aws:dynamodb:us-west-2:account-id:table/*".

B. Modify the IAM policy to include the dynamodb:* action.

C. Create a trust policy that specifies the EC2 service principal. Associate the role with the policy.

D. Create a trust relationship between the role and dynamodb.amazonaws.com.
Suggested answer: C

Explanation:

For an EC2 instance to use a role, the role must have a trust policy that names the EC2 service principal (ec2.amazonaws.com). Without that trust relationship the instance cannot assume the role, so the application's calls to DynamoDB fail with Access Denied even though the permissions policy allows the reads.

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/access-control-overview.html#access-control-resource-ownership
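
For illustration, a minimal sketch of such a trust policy, created here with boto3 (the role name is hypothetical):

    import json
    import boto3

    iam = boto3.client("iam")

    # Trust policy that lets the EC2 service principal assume the role.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    # "CarsReadOnlyRole" is a placeholder name for this example.
    iam.create_role(
        RoleName="CarsReadOnlyRole",
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )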

A developer needs to store configuration variables for an application. The developer needs to set an expiration date and time for the configuration and wants to receive notifications before the configuration expires. Which solution will meet these requirements with the LEAST operational overhead?

A. Create a standard parameter in AWS Systems Manager Parameter Store. Set Expiration and ExpirationNotification policy types.

B. Create a standard parameter in AWS Systems Manager Parameter Store. Create an AWS Lambda function to expire the configuration and to send Amazon Simple Notification Service (Amazon SNS) notifications.

C. Create an advanced parameter in AWS Systems Manager Parameter Store. Set Expiration and ExpirationNotification policy types.

D. Create an advanced parameter in AWS Systems Manager Parameter Store. Create an Amazon EC2 instance with a cron job to expire the configuration and to send notifications.
Suggested answer: C

Explanation:

This solution meets the requirements by creating an advanced parameter in AWS Systems Manager Parameter Store, a secure and scalable service for storing and managing configuration data and secrets. Advanced parameters support parameter policies: the Expiration policy deletes the parameter at a specified date and time, and the ExpirationNotification policy emits an event before the parameter expires, so the developer receives notifications without writing or operating any additional code.

Option A does not work because standard parameters do not support parameter policies. Option B is not optimal because it requires building and maintaining a Lambda function to expire the configuration and send Amazon SNS notifications, which adds operational overhead. Option D is not optimal because it requires provisioning and operating an EC2 instance with a cron job, which adds even more operational overhead and cost.
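
As a sketch, the two policy types can be attached when the advanced parameter is created with boto3 (the parameter name, value, and dates are hypothetical):

    import json
    import boto3

    ssm = boto3.client("ssm")

    # Parameter policies are supported only on advanced-tier parameters.
    policies = [
        {"Type": "Expiration",
         "Version": "1.0",
         "Attributes": {"Timestamp": "2025-12-31T23:59:59.000Z"}},
        {"Type": "ExpirationNotification",
         "Version": "1.0",
         "Attributes": {"Before": "5", "Unit": "Days"}},
    ]

    ssm.put_parameter(
        Name="/myapp/config",           # placeholder name
        Value='{"featureFlag": true}',  # placeholder value
        Type="String",
        Tier="Advanced",
        Policies=json.dumps(policies),
    )

The ExpirationNotification policy emits an Amazon EventBridge event before the timestamp is reached, which can be routed to an SNS topic or another target to deliver the notification.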

Reference: AWS Systems Manager Parameter Store, [Assigning parameter policies]

When using the AWS Encryption SDK, how does the developer keep track of the data encryption keys used to encrypt data?

A. The developer must manually keep track of the data encryption keys used for each data object.

B. The SDK encrypts the data encryption key and stores it (encrypted) as part of the returned ciphertext.

C. The SDK stores the data encryption keys automatically in Amazon S3.

D. The data encryption key is stored in the user data for the EC2 instance.
Suggested answer: B

Explanation:

The AWS Encryption SDK is a client-side encryption library that encrypts and decrypts data using data encryption keys that are protected by AWS Key Management Service (AWS KMS). The SDK encrypts each data encryption key under a KMS key and stores the encrypted data key as part of the returned ciphertext message. The developer therefore does not need to keep track of the data encryption keys: they are stored with the encrypted data and are retrieved and decrypted through AWS KMS when needed. Option A is not correct because manually tracking data keys for each object is error-prone and is exactly what the SDK's message format avoids. Option C is not correct because the SDK does not store data keys in Amazon S3, which is not designed to be a key store. Option D is not correct because the data key is not stored in EC2 instance user data, which is not encrypted by default and is unrelated to the SDK.
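
A minimal sketch with the aws-encryption-sdk library for Python (the KMS key ARN is a placeholder); note that decryption needs only the ciphertext, because the encrypted data key travels inside it:

    import aws_encryption_sdk
    from aws_encryption_sdk import CommitmentPolicy

    client = aws_encryption_sdk.EncryptionSDKClient(
        commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT
    )

    # Placeholder KMS key ARN; AWS KMS protects the data encryption key.
    key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(
        key_ids=["arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-ID"]
    )

    # The encrypted data key is embedded in the returned ciphertext message.
    ciphertext, _header = client.encrypt(
        source=b"sensitive data", key_provider=key_provider
    )

    # No separate key tracking is needed to decrypt.
    plaintext, _header = client.decrypt(
        source=ciphertext, key_provider=key_provider
    )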

Reference: [AWS Encryption SDK], [AWS Key Management Service]

An application that runs on AWS Lambda requires access to specific highly confidential objects in an Amazon S3 bucket. In accordance with the principle of least privilege, a company grants access to the S3 bucket by using only temporary credentials.

How can a developer configure access to the S3 bucket in the MOST secure way?

A. Hardcode the credentials that are required to access the S3 objects in the application code. Use the credentials to access the required S3 objects.

B. Create a secret access key and access key ID with permission to access the S3 bucket. Store the key and key ID in AWS Secrets Manager. Configure the application to retrieve the Secrets Manager secret and use the credentials to access the S3 objects.

C. Create a Lambda function execution role. Attach a policy to the role that grants access to the specific objects in the S3 bucket.

D. Create a secret access key and access key ID with permission to access the S3 bucket. Store the key and key ID as environment variables in Lambda. Use the environment variables to access the required S3 objects.
Suggested answer: C

Explanation:

This solution meets the requirements by creating a Lambda function execution role, which is an IAM role that grants a Lambda function permissions to access AWS resources such as Amazon S3 objects. Lambda assumes the execution role at invocation time and supplies the function with temporary credentials, which satisfies the requirement to use only temporary credentials. The developer can attach a policy to the role that grants access to only the specific objects in the S3 bucket that the application requires, following the principle of least privilege. Option A is not optimal because hardcoding credentials in application code is insecure and difficult to maintain. Option B is not optimal because a secret access key and access key ID are long-term credentials, which violates the requirement for temporary credentials and adds complexity for storing and rotating secrets. Option D is not optimal because storing long-term credentials as Lambda environment variables is likewise insecure and difficult to maintain.
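
For illustration, a least-privilege inline policy attached to the execution role might look like this boto3 sketch (bucket, prefix, and role names are hypothetical):

    import json
    import boto3

    iam = boto3.client("iam")

    # Grant read access only to the specific objects the application needs.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::confidential-bucket/reports/*",
        }],
    }

    iam.put_role_policy(
        RoleName="MyFunctionExecutionRole",
        PolicyName="ReadSpecificS3Objects",
        PolicyDocument=json.dumps(policy),
    )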

Reference: [AWS Lambda Execution Role], [Using AWS Lambda with Amazon S3]

A developer has code that is stored in an Amazon S3 bucket. The code must be deployed as an AWS Lambda function across multiple accounts in the same AWS Region as the S3 bucket. An AWS CloudFormation template that runs in each account will deploy the Lambda function.

What is the MOST secure way to allow CloudFormation to access the Lambda code in the S3 bucket?

A. Grant the CloudFormation service role the S3 ListBucket and GetObject permissions. Add a bucket policy to Amazon S3 with a principal of "AWS": (account numbers).

B. Grant the CloudFormation service role the S3 GetObject permission. Add a bucket policy to Amazon S3 with a principal of "*".

C. Use a service-linked role to grant the Lambda function the S3 ListBucket and GetObject permissions by explicitly adding the S3 bucket's account number in the resource.

D. Use a service-linked role to grant the Lambda function the S3 GetObject permission. Add a resource of "*" to allow access to the S3 bucket.
Suggested answer: B

Explanation:

This solution allows the CloudFormation service role in each account to access the S3 bucket, as long as the role has the S3 GetObject permission. The bucket policy grants GetObject to any principal, which is the only permission needed to download the packaged Lambda code, so the grant stays at least privilege. Granting ListBucket as well is not required for deploying Lambda code, and a service-linked role is not supported for this use with Lambda functions.
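
A sketch of the bucket policy that option B describes, applied with boto3 (the bucket name is hypothetical; in practice the principal or a condition would usually be scoped further):

    import json
    import boto3

    s3 = boto3.client("s3")

    # Grant only s3:GetObject, the minimum needed to download the code.
    bucket_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::lambda-code-bucket/*",
        }],
    }

    s3.put_bucket_policy(
        Bucket="lambda-code-bucket",
        Policy=json.dumps(bucket_policy),
    )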

Reference: AWS CloudFormation Service Role, Using AWS Lambda with Amazon S3

A developer wants to add request validation to a production environment Amazon API Gateway API. The developer needs to test the changes before the API is deployed to the production environment. For the test, the developer will send test requests to the API through a testing tool.

Which solution will meet these requirements with the LEAST operational overhead?

A. Export the existing API to an OpenAPI file. Create a new API. Import the OpenAPI file. Modify the new API to add request validation. Perform the tests. Modify the existing API to add request validation. Deploy the existing API to production.

B. Modify the existing API to add request validation. Deploy the updated API to a new API Gateway stage. Perform the tests. Deploy the updated API to the API Gateway production stage.

C. Create a new API. Add the necessary resources and methods, including the new request validation. Perform the tests. Modify the existing API to add request validation. Deploy the existing API to production.

D. Clone the existing API. Modify the new API to add request validation. Perform the tests. Modify the existing API to add request validation. Deploy the existing API to production.
Suggested answer: D

Explanation:

This solution allows the developer to test the changes without affecting the production environment. Cloning an API creates a copy of the API definition that can be modified independently. The developer can then add request validation to the new API and test it using a testing tool. After verifying that the changes work as expected, the developer can apply the same changes to the existing API and deploy it to production.
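
As a sketch with boto3 (the API ID and names are placeholders), a REST API can be cloned at creation time and a request validator added to the copy:

    import boto3

    apigw = boto3.client("apigateway")

    # Clone the existing REST API ("abc123" is a placeholder API ID).
    clone = apigw.create_rest_api(
        name="orders-api-validation-test",
        cloneFrom="abc123",
    )

    # Add a request validator to the cloned API for testing.
    apigw.create_request_validator(
        restApiId=clone["id"],
        name="validate-body-and-params",
        validateRequestBody=True,
        validateRequestParameters=True,
    )

The validator takes effect only after it is referenced from individual methods (the method's requestValidatorId), and the cloned API still has to be deployed to a stage before the testing tool can call it.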

Reference: Clone an API, [Enable Request Validation for an API in API Gateway]

A developer needs to deploy an application running on AWS Fargate using Amazon ECS. The application has environment variables that must be passed to a container for the application to initialize.

How should the environment variables be passed to the container?

A. Define an array that includes the environment variables under the environment parameter within the service definition.

B. Define an array that includes the environment variables under the environment parameter within the task definition.

C. Define an array that includes the environment variables under the entryPoint parameter within the task definition.

D. Define an array that includes the environment variables under the entryPoint parameter within the service definition.
Suggested answer: B

Explanation:

This solution passes the environment variables to the container when it is launched by AWS Fargate using Amazon ECS. The task definition is a text file that describes one or more containers that form an application. It contains parameters for configuring the containers, such as CPU and memory requirements, network mode, and environment variables. The environment parameter is an array of key-value pairs that specify environment variables to pass to a container. Placing the variables under the entryPoint parameter within the task definition would not pass them to the container; entryPoint overrides the container's default entry point with a command to run. Placing them under the environment or entryPoint parameter within the service definition would cause an error, because these parameters are not valid in a service definition.
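
A minimal Fargate task definition sketch registered with boto3 (family, image, and variable names are placeholders) shows where the environment array belongs:

    import boto3

    ecs = boto3.client("ecs")

    ecs.register_task_definition(
        family="my-app",
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",
        cpu="256",
        memory="512",
        # (an executionRoleArn would also be needed to pull from private ECR)
        containerDefinitions=[{
            "name": "app",
            "image": "111122223333.dkr.ecr.us-west-2.amazonaws.com/my-app:latest",
            # The environment parameter of the container definition is
            # where the key-value pairs are passed to the container.
            "environment": [
                {"name": "DB_HOST", "value": "db.example.internal"},
                {"name": "STAGE", "value": "production"},
            ],
        }],
    )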

Reference: [Task Definition Parameters], [Environment Variables]

A developer is storing sensitive data generated by an application in Amazon S3. The developer wants to encrypt the data at rest. A company policy requires an audit trail of when the AWS Key Management Service (AWS KMS) key was used and by whom.

Which encryption option will meet these requirements?

A. Server-side encryption with Amazon S3 managed keys (SSE-S3)

B. Server-side encryption with AWS KMS managed keys (SSE-KMS)

C. Server-side encryption with customer-provided keys (SSE-C)

D. Server-side encryption with self-managed keys
Suggested answer: B

Explanation:

This solution meets the requirements because it encrypts data at rest using AWS KMS keys and provides an audit trail of when and by whom they were used. Server-side encryption with AWS KMS managed keys (SSE-KMS) is a feature of Amazon S3 that encrypts data using keys that are managed by AWS KMS. When SSE-KMS is enabled for an S3 bucket or object, S3 requests AWS KMS to generate data keys and encrypts data using these keys. AWS KMS logs every use of its keys in AWS CloudTrail, which records all API calls to AWS KMS as events. These events include information such as who made the request, when it was made, and which key was used. The company policy can use CloudTrail logs to audit critical events related to their data encryption and access. Server-side encryption with Amazon S3 managed keys (SSE-S3) also encrypts data at rest using keys that are managed by S3, but does not provide an audit trail of key usage. Server-side encryption with customer-provided keys (SSE-C) and server-side encryption with self-managed keys also encrypt data at rest using keys that are provided or managed by customers, but do not provide an audit trail of key usage and require additional overhead for key management.
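
For illustration, requesting SSE-KMS on upload with boto3 (bucket, key, and KMS key ID are placeholders); every subsequent use of the KMS key for this object appears in CloudTrail:

    import boto3

    s3 = boto3.client("s3")

    s3.put_object(
        Bucket="sensitive-data-bucket",
        Key="reports/2024-q1.csv",
        Body=b"example payload",
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-ID",
    )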

Reference: [Protecting Data Using Server-Side Encryption with AWS KMS-Managed Encryption Keys (SSE-KMS)], [Logging AWS KMS API calls with AWS CloudTrail]

A company has an ecommerce application. To track product reviews, the company's development team uses an Amazon DynamoDB table.

Every record includes the following:

• A Review ID: a 16-digit universally unique identifier (UUID)

• A Product ID and a User ID: 16-digit UUIDs that reference other tables

• A Product Rating on a scale of 1-5

• An optional comment from the user

The table partition key is the Review ID. The most frequently performed query against the table is to find the 10 reviews with the highest rating for a given product.

Which index will provide the FASTEST response for this query?

A. A global secondary index (GSI) with Product ID as the partition key and Product Rating as the sort key

B. A global secondary index (GSI) with Product ID as the partition key and Review ID as the sort key

C. A local secondary index (LSI) with Product ID as the partition key and Product Rating as the sort key

D. A local secondary index (LSI) with Review ID as the partition key and Product ID as the sort key
Suggested answer: A

Explanation:

This solution allows the fastest response for the query because it enables the query to use a single partition key value (the Product ID) and a range of sort key values (the Product Rating) to find the matching items. A global secondary index (GSI) is an index that has a partition key and an optional sort key that are different from those on the base table. A GSI can be created at any time and can be queried or scanned independently of the base table. A local secondary index (LSI) is an index that has the same partition key as the base table, but a different sort key. An LSI can only be created when the base table is created and must be queried together with the base table partition key. Using a GSI with Product ID as the partition key and Review ID as the sort key will not allow the query to use a range of sort key values to find the highest ratings. Using an LSI with Product ID as the partition key and Product Rating as the sort key will not work because Product ID is not the partition key of the base table. Using an LSI with Review ID as the partition key and Product ID as the sort key will not allow the query to use a single partition key value to find the matching items.
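
As a sketch (index and attribute names are hypothetical), querying the GSI in descending sort-key order with a limit of 10 returns the highest-rated reviews directly:

    import boto3
    from boto3.dynamodb.conditions import Key

    table = boto3.resource("dynamodb").Table("Reviews")  # table name assumed

    response = table.query(
        IndexName="ProductRatingIndex",  # hypothetical GSI name
        KeyConditionExpression=Key("ProductID").eq("0123456789abcdef"),
        ScanIndexForward=False,  # descending order on the sort key (rating)
        Limit=10,                # only the top 10 items are read
    )
    top_reviews = response["Items"]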

Reference: [Global Secondary Indexes], [Querying]

A company needs to distribute firmware updates to its customers around the world.

Which service will allow easy and secure control of access to the downloads at the lowest cost?

A. Use Amazon CloudFront with signed URLs for Amazon S3.

B. Create a dedicated Amazon CloudFront distribution for each customer.

C. Use Amazon CloudFront with AWS Lambda@Edge.

D. Use Amazon API Gateway and AWS Lambda to control access to an S3 bucket.
Suggested answer: A

Explanation:

This solution allows easy and secure control of access to the downloads at the lowest cost because it uses a content delivery network (CDN) that can cache and distribute firmware updates to customers around the world, and uses a mechanism that can restrict access to specific files or versions. Amazon CloudFront is a CDN that can improve performance, availability, and security of web applications by delivering content from edge locations closer to customers. Amazon S3 is a storage service that can store firmware updates in buckets and objects. Signed URLs are URLs that include additional information, such as an expiration date and time, that give users temporary access to specific objects in S3 buckets. The developer can use CloudFront to serve firmware updates from S3 buckets and use signed URLs to control who can download them and for how long. Creating a dedicated CloudFront distribution for each customer will incur unnecessary costs and complexity. Using Amazon CloudFront with AWS Lambda@Edge will require additional programming overhead to implement custom logic at the edge locations. Using Amazon API Gateway and AWS Lambda to control access to an S3 bucket will also require additional programming overhead and may not provide optimal performance or availability.
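
For illustration, a signed URL can be generated with botocore's CloudFrontSigner (key-pair ID, private key file, and URL are placeholders; the rsa package is one option for the signing callback):

    from datetime import datetime, timedelta

    import rsa  # third-party package used here for RSA signing
    from botocore.signers import CloudFrontSigner

    def rsa_signer(message):
        # Load the CloudFront key pair's private key and sign the policy.
        with open("private_key.pem", "rb") as f:
            private_key = rsa.PrivateKey.load_pkcs1(f.read())
        return rsa.sign(message, private_key, "SHA-1")

    signer = CloudFrontSigner("KEYPAIRID123", rsa_signer)

    # URL is valid for one hour; requests without a valid signature fail.
    signed_url = signer.generate_presigned_url(
        "https://d111111abcdef8.cloudfront.net/firmware/v1.2.bin",
        date_less_than=datetime.utcnow() + timedelta(hours=1),
    )
    print(signed_url)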

Reference: [Serving Private Content through CloudFront], [Using CloudFront with Amazon S3]
