Amazon DVA-C02 Practice Test - Questions Answers, Page 6

A developer is designing an AWS Lambda function that creates temporary files that are less than 10 MB during invocation. The temporary files will be accessed and modified multiple times during invocation. The developer has no need to save or retrieve these files in the future.

Where should the temporary files be stored?

A. the /tmp directory

B. Amazon Elastic File System (Amazon EFS)

C. Amazon Elastic Block Store (Amazon EBS)

D. Amazon S3
Suggested answer: A

Explanation:

AWS Lambda is a service that lets developers run code without provisioning or managing servers. Lambda provides a local file system that is mounted under the /tmp directory and offers 512 MB of ephemeral storage by default (configurable up to 10,240 MB). The contents of /tmp are private to the function's execution environment; they can persist across invocations that reuse the same environment and are lost when the environment is recycled, so they are suitable only for temporary data. The developer can store temporary files that are less than 10 MB in the /tmp directory and access and modify them multiple times during the invocation.
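
A minimal sketch of this pattern in Python (the file name and payload are illustrative):

```python
import os

TMP_FILE = "/tmp/scratch.dat"  # Lambda's writable ephemeral storage

def lambda_handler(event, context):
    # Create a temporary file of less than 10 MB.
    with open(TMP_FILE, "wb") as f:
        f.write(os.urandom(1024 * 1024))  # 1 MB of scratch data

    # Access and modify the file multiple times during the invocation.
    with open(TMP_FILE, "ab") as f:
        f.write(b"more scratch data")

    # No need to persist the file; it is discarded with the environment.
    return {"tmp_file_size_bytes": os.path.getsize(TMP_FILE)}
```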

Reference:

[What Is AWS Lambda? - AWS Lambda]

[AWS Lambda Execution Environment - AWS Lambda]

A developer is designing a serverless application with two AWS Lambda functions to process photos. One Lambda function stores objects in an Amazon S3 bucket and stores the associated metadata in an Amazon DynamoDB table. The other Lambda function fetches the objects from the S3 bucket by using the metadata from the DynamoDB table. Both Lambda functions use the same Python library to perform complex computations and are approaching the quota for the maximum size of zipped deployment packages.

What should the developer do to reduce the size of the Lambda deployment packages with the LEAST operational overhead?

A. Package each Python library in its own .zip file archive. Deploy each Lambda function with its own copy of the library.

B. Create a Lambda layer with the required Python library. Use the Lambda layer in both Lambda functions.

C. Combine the two Lambda functions into one Lambda function. Deploy the Lambda function as a single .zip file archive.

D. Download the Python library to an S3 bucket. Program the Lambda functions to reference the object URLs.
Suggested answer: B

Explanation:

AWS Lambda is a service that lets developers run code without provisioning or managing servers. Lambda layers are a distribution mechanism for libraries, custom runtimes, and other dependencies. The developer can create a Lambda layer with the required Python library and use the layer in both Lambda functions. This will reduce the size of the Lambda deployment packages and avoid reaching the quota for the maximum size of zipped deployment packages. The developer can also benefit from using layers to manage dependencies separately from function code.
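
A minimal boto3 sketch of this approach (the layer and function names are illustrative; the layer archive must place the library under a python/ directory, which Lambda adds to sys.path for Python runtimes):

```python
import boto3

lambda_client = boto3.client("lambda")

# Publish the shared library as a layer version.
with open("shared-lib-layer.zip", "rb") as f:
    layer = lambda_client.publish_layer_version(
        LayerName="shared-python-lib",  # illustrative name
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.12"],
    )

# Attach the same layer to both functions so neither deployment
# package has to bundle the library itself.
for function_name in ("photo-store", "photo-fetch"):  # illustrative names
    lambda_client.update_function_configuration(
        FunctionName=function_name,
        Layers=[layer["LayerVersionArn"]],
    )
```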

Reference:

[What Is AWS Lambda? - AWS Lambda]

[AWS Lambda Layers - AWS Lambda]

A developer is writing an AWS Lambda function. The developer wants to log key events that occur while the Lambda function runs. The developer wants to include a unique identifier to associate the events with a specific function invocation. The developer adds the following code to the Lambda function:

Which solution will meet this requirement?

A. Obtain the request identifier from the AWS request ID field in the context object. Configure the application to write logs to standard output.

B. Obtain the request identifier from the AWS request ID field in the event object. Configure the application to write logs to a file.

C. Obtain the request identifier from the AWS request ID field in the event object. Configure the application to write logs to standard output.

D. Obtain the request identifier from the AWS request ID field in the context object. Configure the application to write logs to a file.
Suggested answer: A

Explanation:

https://docs.aws.amazon.com/lambda/latest/dg/nodejs-context.html

https://docs.aws.amazon.com/lambda/latest/dg/nodejs-logging.html

Although the question does not explicitly state the runtime, the code shown is written in Node.js. AWS Lambda is a service that lets developers run code without provisioning or managing servers.

The developer can use the AWS request ID field in the context object to obtain a unique identifier for each function invocation. The developer can configure the application to write logs to standard output, which will be captured by Amazon CloudWatch Logs. This solution will meet the requirement of logging key events with a unique identifier.
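
The code in the question is Node.js, where the identifier is available as context.awsRequestId. For illustration, the same pattern in Python reads context.aws_request_id and writes to standard output, which Lambda captures in CloudWatch Logs:

```python
import json

def lambda_handler(event, context):
    request_id = context.aws_request_id  # unique per invocation

    # print() writes to standard output, which Lambda forwards to CloudWatch Logs.
    print(json.dumps({"requestId": request_id, "event": "processing started"}))

    # ... key events during the invocation, each tagged with the request ID ...

    print(json.dumps({"requestId": request_id, "event": "processing finished"}))
    return {"statusCode": 200}
```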

Reference:

[What Is AWS Lambda? - AWS Lambda]

[AWS Lambda Function Handler in Node.js - AWS Lambda]

[Using Amazon CloudWatch - AWS Lambda]

A developer is working on a serverless application that needs to process any changes to an Amazon DynamoDB table with an AWS Lambda function.

How should the developer configure the Lambda function to detect changes to the DynamoDB table?

A. Create an Amazon Kinesis data stream, and attach it to the DynamoDB table. Create a trigger to connect the data stream to the Lambda function.

B. Create an Amazon EventBridge rule to invoke the Lambda function on a regular schedule. Connect to the DynamoDB table from the Lambda function to detect changes.

C. Enable DynamoDB Streams on the table. Create a trigger to connect the DynamoDB stream to the Lambda function.

D. Create an Amazon Kinesis Data Firehose delivery stream, and attach it to the DynamoDB table. Configure the delivery stream destination as the Lambda function.
Suggested answer: C

Explanation:

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and consistent performance with seamless scalability. DynamoDB Streams is a feature that captures data modification events in DynamoDB tables. The developer can enable DynamoDB Streams on the table and create a trigger to connect the DynamoDB stream to the Lambda function. This solution will enable the Lambda function to detect changes to the DynamoDB table in near real time.
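
A minimal Python sketch of a handler attached to the stream (the printed fields are illustrative):

```python
def lambda_handler(event, context):
    # Each record describes one data modification captured by DynamoDB Streams.
    for record in event["Records"]:
        event_name = record["eventName"]  # INSERT, MODIFY, or REMOVE
        keys = record["dynamodb"]["Keys"]

        if event_name in ("INSERT", "MODIFY"):
            # NewImage is present when the stream view type includes new images.
            new_image = record["dynamodb"].get("NewImage", {})
            print(f"{event_name} {keys}: {new_image}")
        else:
            print(f"{event_name} {keys}")

    return {"processed": len(event["Records"])}
```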

Reference:

[Amazon DynamoDB]

[DynamoDB Streams - Amazon DynamoDB]

[Using AWS Lambda with Amazon DynamoDB - AWS Lambda]

An application uses an Amazon EC2 Auto Scaling group. A developer notices that EC2 instances are taking a long time to become available during scale-out events. The UserData script is taking a long time to run.

The developer must implement a solution to decrease the time that elapses before an EC2 instance becomes available. The solution must make the most recent version of the application available at all times and must apply all available security updates. The solution also must minimize the number of images that are created. The images must be validated.

Which combination of steps should the developer take to meet these requirements? (Choose two.)

A. Use EC2 Image Builder to create an Amazon Machine Image (AMI). Install all the patches and agents that are needed to manage and run the application. Update the Auto Scaling group launch configuration to use the AMI.

B. Use EC2 Image Builder to create an Amazon Machine Image (AMI). Install the latest version of the application and all the patches and agents that are needed to manage and run the application. Update the Auto Scaling group launch configuration to use the AMI.

C. Set up AWS CodeDeploy to deploy the most recent version of the application at runtime.

D. Set up AWS CodePipeline to deploy the most recent version of the application at runtime.

E. Remove any commands that perform operating system patching from the UserData script.
Suggested answer: B, E

Explanation:

EC2 Image Builder is a service that automates the creation, validation, and distribution of Amazon Machine Images (AMIs). The developer can take the following steps to make instances available faster:

Use EC2 Image Builder to create an AMI that includes the latest version of the application along with all the patches and agents needed to manage and run it. Baking this work into a single validated image keeps the most recent application version available, applies all security updates, and minimizes the number of images that must be created, while removing that work from instance launch.

Remove any commands that perform operating system patching from the UserData script. Because patching is already applied in the image, this shortens the time the UserData script takes to run and speeds up the launch of new instances.

Reference:

[What Is EC2 Image Builder? - EC2 Image Builder]

[Running Commands on Your Linux Instance at Launch - Amazon Elastic Compute Cloud]

A developer is creating an AWS Lambda function that needs credentials to connect to an Amazon RDS for MySQL database. An Amazon S3 bucket currently stores the credentials. The developer needs to improve the existing solution by implementing credential rotation and secure storage. The developer also needs to provide integration with the Lambda function.

Which solution should the developer use to store and retrieve the credentials with the LEAST management overhead?

A. Store the credentials in AWS Systems Manager Parameter Store. Select the database that the parameter will access. Use the default AWS Key Management Service (AWS KMS) key to encrypt the parameter. Enable automatic rotation for the parameter. Use the parameter from Parameter Store on the Lambda function to connect to the database.

B. Encrypt the credentials with the default AWS Key Management Service (AWS KMS) key. Store the credentials as environment variables for the Lambda function. Create a second Lambda function to generate new credentials and to rotate the credentials by updating the environment variables of the first Lambda function. Invoke the second Lambda function by using an Amazon EventBridge rule that runs on a schedule. Update the database to use the new credentials. On the first Lambda function, retrieve the credentials from the environment variables. Decrypt the credentials by using AWS KMS. Connect to the database.

C. Store the credentials in AWS Secrets Manager. Set the secret type to Credentials for Amazon RDS database. Select the database that the secret will access. Use the default AWS Key Management Service (AWS KMS) key to encrypt the secret. Enable automatic rotation for the secret. Use the secret from Secrets Manager on the Lambda function to connect to the database.

D. Encrypt the credentials by using AWS Key Management Service (AWS KMS). Store the credentials in an Amazon DynamoDB table. Create a second Lambda function to rotate the credentials. Invoke the second Lambda function by using an Amazon EventBridge rule that runs on a schedule. Update the DynamoDB table. Update the database to use the generated credentials. Retrieve the credentials from DynamoDB with the first Lambda function. Connect to the database.
Suggested answer: C

Explanation:

AWS Secrets Manager is a service that helps you protect secrets needed to access your applications, services, and IT resources. Secrets Manager enables you to store, retrieve, and rotate secrets such as database credentials, API keys, and passwords. Secrets Manager supports a secret type for RDS databases, which allows you to select an existing RDS database instance and generate credentials for it. Secrets Manager encrypts the secret using AWS Key Management Service (AWS KMS) keys and enables automatic rotation of the secret at a specified interval. A Lambda function can use the AWS SDK or CLI to retrieve the secret from Secrets Manager and use it to connect to the database.
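
A minimal sketch of the retrieval step inside the Lambda function (the secret name is illustrative, and pymysql is one MySQL client that would have to be packaged with the function):

```python
import json
import boto3
import pymysql  # illustrative MySQL client; must be bundled with the function

secrets_client = boto3.client("secretsmanager")

def lambda_handler(event, context):
    # Fetch the current version of the secret; rotation keeps it up to date.
    response = secrets_client.get_secret_value(SecretId="prod/mysql-credentials")
    secret = json.loads(response["SecretString"])

    # RDS-type secrets store the connection details as JSON fields.
    connection = pymysql.connect(
        host=secret["host"],
        user=secret["username"],
        password=secret["password"],
        database=secret.get("dbname", "appdb"),  # illustrative default
        port=int(secret.get("port", 3306)),
    )
    with connection.cursor() as cursor:
        cursor.execute("SELECT 1")  # placeholder query
    connection.close()
    return {"statusCode": 200}
```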

Reference:

Rotating your AWS Secrets Manager secrets

A developer has written the following IAM policy to provide access to an Amazon S3 bucket:

Which access does the policy allow regarding the s3:GetObject and s3:PutObject actions?

A. Access on all buckets except the "DOC-EXAMPLE-BUCKET" bucket

B. Access on all buckets that start with "DOC-EXAMPLE-BUCKET" except the "DOC-EXAMPLE-BUCKET/secrets" bucket

C. Access on all objects in the "DOC-EXAMPLE-BUCKET" bucket along with access to all S3 actions for objects in the "DOC-EXAMPLE-BUCKET" bucket that start with "secrets"

D. Access on all objects in the "DOC-EXAMPLE-BUCKET" bucket except on objects that start with "secrets"
Suggested answer: D

Explanation:

The IAM policy shown in the image is an identity-based policy with two statements. The first statement allows the s3:GetObject and s3:PutObject actions on all objects in the "DOC-EXAMPLE-BUCKET" bucket. The second statement explicitly denies those same actions on objects under the "secrets" prefix. Because an explicit deny always overrides an allow, the policy allows access on all objects in the "DOC-EXAMPLE-BUCKET" bucket except on objects that start with "secrets".
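
The policy itself appears only as an image in the question. A policy consistent with answer D might look like the following sketch (the structure is assumed; only the bucket name comes from the question):

```python
import json

# Illustrative reconstruction; not the exact policy from the question.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
        },
        {
            # The explicit deny overrides the allow for the secrets prefix.
            "Effect": "Deny",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/secrets*",
        },
    ],
}

print(json.dumps(policy, indent=2))
```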

Reference:

Using IAM policies for Amazon S3

A developer is creating a mobile app that calls a backend service by using an Amazon API Gateway REST API. For integration testing during the development phase, the developer wants to simulate different backend responses without invoking the backend service.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create an AWS Lambda function. Use API Gateway proxy integration to return constant HTTP responses.

B. Create an Amazon EC2 instance that serves the backend REST API by using an AWS CloudFormation template.

C. Customize the API Gateway stage to select a response type based on the request.

D. Use a request mapping template to select the mock integration response.
Suggested answer: D

Explanation:

Amazon API Gateway supports mock integration responses, which are predefined responses that API Gateway returns without sending the request to a backend service. Mock integrations can be used for testing or prototyping, or for simulating different backend responses based on certain conditions. A request mapping template can select a mock integration response based on an expression that evaluates aspects of the request, such as headers, query strings, or body content. This solution requires no additional resources or code changes and has the least operational overhead.

Reference:

Set up mock integrations for an API Gateway REST API

https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-mock-integration.html
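
A minimal boto3 sketch of a mock integration whose request mapping template selects the response from a query parameter (the API and resource IDs and the parameter name are illustrative; the matching method responses must also be defined):

```python
import boto3

apigw = boto3.client("apigateway")

# VTL request mapping template: choose statusCode 500 when the "scope"
# query parameter is "internal", otherwise 200. No backend is invoked.
request_template = (
    '#if($input.params("scope") == "internal")'
    '{"statusCode": 500}'
    "#else"
    '{"statusCode": 200}'
    "#end"
)

apigw.put_integration(
    restApiId="a1b2c3d4",   # illustrative API ID
    resourceId="res123",    # illustrative resource ID
    httpMethod="GET",
    type="MOCK",
    requestTemplates={"application/json": request_template},
)

# Default integration response for 200, plus one selected for 5xx codes.
apigw.put_integration_response(
    restApiId="a1b2c3d4", resourceId="res123", httpMethod="GET",
    statusCode="200", responseTemplates={"application/json": ""},
)
apigw.put_integration_response(
    restApiId="a1b2c3d4", resourceId="res123", httpMethod="GET",
    statusCode="500", selectionPattern="5\\d{2}",
    responseTemplates={"application/json": ""},
)
```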

A developer has a legacy application that is hosted on-premises. Other applications hosted on AWS depend on the on-premises application for proper functioning. In case of any application errors, the developer wants to be able to use Amazon CloudWatch to monitor and troubleshoot all applications from one place.

How can the developer accomplish this?

A. Install an AWS SDK on the on-premises server to automatically send logs to CloudWatch.

B. Download the CloudWatch agent to the on-premises server. Configure the agent to use IAM user credentials with permissions for CloudWatch.

C. Upload log files from the on-premises server to Amazon S3 and have CloudWatch read the files.

D. Upload log files from the on-premises server to an Amazon EC2 instance and have the instance forward the logs to CloudWatch.
Suggested answer: B

Explanation:

Amazon CloudWatch is a service that monitors AWS resources and applications. The developer can use CloudWatch to monitor and troubleshoot all applications from one place. To do so, the developer needs to download the CloudWatch agent to the on-premises server and configure the agent to use IAM user credentials with permissions for CloudWatch. The agent will collect logs and metrics from the on-premises server and send them to CloudWatch.

Reference:

[What Is Amazon CloudWatch? - Amazon CloudWatch]

[Installing and Configuring the CloudWatch Agent - Amazon CloudWatch]

An Amazon Kinesis Data Firehose delivery stream is receiving customer data that contains personally identifiable information. A developer needs to remove pattern-based customer identifiers from the data and store the modified data in an Amazon S3 bucket.

What should the developer do to meet these requirements?

A. Implement Kinesis Data Firehose data transformation as an AWS Lambda function. Configure the function to remove the customer identifiers. Set an Amazon S3 bucket as the destination of the delivery stream.

B. Launch an Amazon EC2 instance. Set the EC2 instance as the destination of the delivery stream. Run an application on the EC2 instance to remove the customer identifiers. Store the transformed data in an Amazon S3 bucket.

C. Create an Amazon OpenSearch Service instance. Set the OpenSearch Service instance as the destination of the delivery stream. Use search and replace to remove the customer identifiers. Export the data to an Amazon S3 bucket.

D. Create an AWS Step Functions workflow to remove the customer identifiers. As the last step in the workflow, store the transformed data in an Amazon S3 bucket. Set the workflow as the destination of the delivery stream.
Suggested answer: A

Explanation:

Amazon Kinesis Data Firehose is a service that delivers real-time streaming data to destinations such as Amazon S3, Amazon Redshift, Amazon OpenSearch Service, and Amazon Kinesis Data Analytics.

The developer can implement Kinesis Data Firehose data transformation as an AWS Lambda function. The function can remove pattern-based customer identifiers from the data and return the modified data to Kinesis Data Firehose. The developer can set an Amazon S3 bucket as the destination of the delivery stream.
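
A minimal sketch of such a transformation function (the identifier pattern is illustrative):

```python
import base64
import re

# Illustrative pattern for a customer identifier such as "CUST-12345678".
CUSTOMER_ID_PATTERN = re.compile(rb"CUST-\d{8}")

def lambda_handler(event, context):
    output = []
    for record in event["records"]:
        payload = base64.b64decode(record["data"])

        # Remove pattern-based customer identifiers from the payload.
        redacted = CUSTOMER_ID_PATTERN.sub(b"[REDACTED]", payload)

        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # marks the record as successfully transformed
            "data": base64.b64encode(redacted).decode("utf-8"),
        })

    # Firehose delivers the transformed records to the S3 destination.
    return {"records": output}
```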

Reference:

[What Is Amazon Kinesis Data Firehose? - Amazon Kinesis Data Firehose]

[Data Transformation - Amazon Kinesis Data Firehose]
