
Amazon DVA-C02 Practice Test - Questions Answers, Page 8


A developer is incorporating AWS X-Ray into an application that handles personally identifiable information (PII). The application is hosted on Amazon EC2 instances. The application trace messages include encrypted PII and go to Amazon CloudWatch. The developer needs to ensure that no PII goes outside of the EC2 instances.

Which solution will meet these requirements?

A.
Manually instrument the X-Ray SDK in the application code.
B.
Use the X-Ray auto-instrumentation agent.
C.
Use Amazon Macie to detect and hide PII. Call the X-Ray API from AWS Lambda.
D.
Use AWS Distro for OpenTelemetry.
Suggested answer: A

Explanation:

This solution will meet the requirements by allowing the developer to control what data is sent to X-Ray and CloudWatch from the application code. The developer can filter out any PII from the trace messages before sending them to X-Ray and CloudWatch, ensuring that no PII goes outside of the EC2 instances. Option B is not optimal because it will automatically instrument all incoming and outgoing requests from the application, which may include PII in the trace messages. Option C is not optimal because it will require additional services and costs to use Amazon Macie and AWS Lambda, which may not be able to detect and hide all PII from the trace messages. Option D is not optimal because it will use AWS Distro for OpenTelemetry instead of manual instrumentation, which does not by itself give the developer control over what leaves the instances.
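
The filtering that option A enables can be sketched as follows. This is a hedged illustration, not code from the exam: the field names and the scrub_pii helper are hypothetical, and in a real application the scrubbed copy would be what gets attached to the trace via the X-Ray SDK.

```python
# Sketch: with manual instrumentation, the application decides exactly what
# reaches X-Ray. Scrub PII fields before attaching anything to a trace.
# The field names below are hypothetical examples.
PII_FIELDS = {"email", "ssn", "phone"}

def scrub_pii(payload: dict) -> dict:
    """Return a copy of the payload with PII values masked."""
    return {key: ("***" if key in PII_FIELDS else value)
            for key, value in payload.items()}

# With the AWS X-Ray SDK for Python, the scrubbed copy -- never the raw
# payload -- would be attached to the segment, for example:
#   xray_recorder.put_metadata("request", scrub_pii(payload))
```

Because the scrubbing runs inside the application code on the instance, no PII is ever handed to the X-Ray daemon or CloudWatch.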

Reference: [AWS X-Ray SDKs]

A developer is migrating some features from a legacy monolithic application to use AWS Lambda functions instead. The application currently stores data in an Amazon Aurora DB cluster that runs in private subnets in a VPC. The AWS account has one VPC deployed. The Lambda functions and the DB cluster are deployed in the same AWS Region in the same AWS account.

The developer needs to ensure that the Lambda functions can securely access the DB cluster without crossing the public internet.

Which solution will meet these requirements?

A.
Configure the DB cluster's public access setting to Yes.
B.
Configure an Amazon RDS database proxy for the Lambda functions.
C.
Configure a NAT gateway and a security group for the Lambda functions.
D.
Configure the VPC, subnets, and a security group for the Lambda functions.
Suggested answer: D

Explanation:

This solution will meet the requirements by allowing the Lambda functions to access the DB cluster securely within the same VPC without crossing the public internet. The developer can attach the Lambda functions to the VPC and place them in the private subnets, then configure a security group for the Lambda functions that allows outbound traffic to the DB cluster on port 3306 (MySQL), with the DB cluster's security group allowing inbound traffic from the Lambda functions' security group. Option A is not optimal because it will expose the DB cluster to public access, which may compromise its security and data integrity. Option B is not optimal because an RDS database proxy alone does not place the Lambda functions inside the VPC, and it adds latency and complexity without addressing the network path. Option C is not optimal because it will require additional costs and configuration to use a NAT gateway, which provides outbound internet access from private subnets rather than private access to the DB cluster.
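
As a sketch of option D, the following shows the shape of the VpcConfig argument that boto3's update_function_configuration call accepts; the subnet and security group IDs are placeholders.

```python
def build_vpc_config(subnet_ids, security_group_ids):
    """Build the VpcConfig mapping for
    lambda_client.update_function_configuration(FunctionName=..., VpcConfig=...).
    Attaching the function to private subnets keeps traffic to the Aurora
    endpoint inside the VPC."""
    return {
        "SubnetIds": list(subnet_ids),              # private subnets in the VPC
        "SecurityGroupIds": list(security_group_ids),
    }

# Placeholder IDs for illustration only:
vpc_config = build_vpc_config(["subnet-0abc1234"], ["sg-0def5678"])
```

Once this configuration is applied, the function's elastic network interfaces are created in the given private subnets, so the connection to the DB cluster never leaves the VPC.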

Reference: [Configuring a Lambda Function to Access Resources in a VPC]

A developer is building a new application on AWS. The application uses an AWS Lambda function that retrieves information from an Amazon DynamoDB table. The developer hard coded the DynamoDB table name into the Lambda function code. The table name might change over time. The developer does not want to modify the Lambda code if the table name changes.

Which solution will meet these requirements MOST efficiently?

A.
Create a Lambda environment variable to store the table name. Use the standard method for the programming language to retrieve the variable.
B.
Store the table name in a file. Store the file in the /tmp folder. Use the SDK for the programming language to retrieve the table name.
C.
Create a file to store the table name. Zip the file and upload the file to the Lambda layer. Use the SDK for the programming language to retrieve the table name.
D.
Create a global variable that is outside the handler in the Lambda function to store the table name.
Suggested answer: A

Explanation:

The solution that will meet the requirements most efficiently is to create a Lambda environment variable to store the table name. Use the standard method for the programming language to retrieve the variable. This way, the developer can avoid hard-coding the table name in the Lambda function code and easily change the table name by updating the environment variable. The other options either involve storing the table name in a file, which is less efficient and secure than using an environment variable, or creating a global variable, which is not recommended as it can cause concurrency issues.
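
In Python, for example, option A comes down to a one-line lookup. The variable name TABLE_NAME is an assumption here; any key set in the function's configuration works.

```python
import os

def get_table_name() -> str:
    """Read the DynamoDB table name from the Lambda environment.
    TABLE_NAME is defined in the function's configuration; the fallback
    is only for local testing."""
    return os.environ.get("TABLE_NAME", "local-test-table")
```

If the table name changes, only the environment variable is updated; the function code and its deployment package stay untouched.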

Reference: Using AWS Lambda environment variables

A company has installed smart meters in all its customer locations. The smart meters measure power usage at 1-minute intervals and send the usage readings to a remote endpoint for collection. The company needs to create an endpoint that will receive the smart meter readings and store the readings in a database. The company wants to store the location ID and timestamp information. The company wants to give its customers low-latency access to their current usage and historical usage on demand. The company expects demand to increase significantly. The solution must not impact performance or include downtime while scaling.

Which solution will meet these requirements MOST cost-effectively?

A.
Store the smart meter readings in an Amazon RDS database. Create an index on the location ID and timestamp columns. Use the columns to filter on the customers' data.
B.
Store the smart meter readings in an Amazon DynamoDB table. Create a composite key by using the location ID and timestamp columns. Use the columns to filter on the customers' data.
C.
Store the smart meter readings in Amazon ElastiCache for Redis. Create a sorted set key by using the location ID and timestamp columns. Use the columns to filter on the customers' data.
D.
Store the smart meter readings in Amazon S3. Partition the data by using the location ID and timestamp columns. Use Amazon Athena to filter on the customers' data.
Suggested answer: B

Explanation:

The solution that will meet the requirements most cost-effectively is to store the smart meter readings in an Amazon DynamoDB table. Create a composite key by using the location ID and timestamp columns. Use the columns to filter on the customers' data. This way, the company can leverage the scalability, performance, and low latency of DynamoDB to store and retrieve the smart meter readings. The company can also use the composite key to query the data by location ID and timestamp efficiently. The other options either involve more expensive or less scalable services, or do not provide low-latency access to the current usage.
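
A local sketch of the query pattern the composite key enables: in DynamoDB this is a Query with Key('location_id').eq(...) & Key('ts').between(...), mimicked here in plain Python so the idea is self-contained.

```python
def query_usage(readings, location_id, start_ts, end_ts):
    """Mimic a DynamoDB Query on a table whose partition key is the
    location ID and whose sort key is an ISO-8601 timestamp: equality on
    the partition key plus a range condition on the sort key."""
    return [r for r in readings
            if r["location_id"] == location_id
            and start_ts <= r["ts"] <= end_ts]
```

Because ISO-8601 timestamps sort lexicographically, the sort-key range condition returns a customer's readings for any window in a single efficient query, which is what gives the low-latency access to both current and historical usage.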

Reference: Working with Queries in DynamoDB

A developer has created an AWS Lambda function that makes queries to an Amazon Aurora MySQL DB instance. When the developer performs a test, the DB instance shows an error for too many connections.

Which solution will meet these requirements with the LEAST operational effort?

A.
Create a read replica for the DB instance. Query the replica DB instance instead of the primary DB instance.
B.
Migrate the data to an Amazon DynamoDB database.
C.
Configure the Amazon Aurora MySQL DB instance for Multi-AZ deployment.
D.
Create a proxy in Amazon RDS Proxy. Query the proxy instead of the DB instance.
Suggested answer: D

Explanation:

This solution will meet the requirements by using Amazon RDS Proxy, which is a fully managed, highly available database proxy for Amazon RDS that makes applications more scalable, more resilient to database failures, and more secure. The developer can create a proxy in Amazon RDS Proxy, which sits between the application and the DB instance and handles connection management, pooling, and routing. The developer can query the proxy instead of the DB instance, which reduces the number of open connections to the DB instance and avoids errors for too many connections.

Option A is not optimal because it will create a read replica for the DB instance, which may not solve the problem of too many connections as read replicas also have connection limits and may incur additional costs. Option B is not optimal because it will migrate the data to an Amazon DynamoDB database, which may introduce additional complexity and overhead for migrating and accessing data from a different database service. Option C is not optimal because it will configure the Amazon Aurora MySQL DB instance for Multi-AZ deployment, which may improve availability and durability of the DB instance but not reduce the number of connections.
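
From the function's point of view, adopting RDS Proxy is mostly a connection-string change plus reusing the connection across warm invocations. A hedged sketch follows; the endpoint is a placeholder, and the connect callable stands in for a MySQL driver such as PyMySQL.

```python
# Cache the connection outside the handler so warm invocations reuse it;
# the proxy pools and multiplexes these connections against the DB instance,
# keeping the instance's connection count low.
_connection = None

PROXY_ENDPOINT = "my-proxy.proxy-abcdefgh.us-east-1.rds.amazonaws.com"  # placeholder

def get_connection(connect, endpoint=PROXY_ENDPOINT):
    global _connection
    if _connection is None:
        _connection = connect(host=endpoint)  # point at the proxy, not the DB
    return _connection
```

No query logic changes: the function simply resolves the proxy endpoint instead of the DB instance endpoint, which is why this is the least-operational-effort fix.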

Reference: [Amazon RDS Proxy], [Working with Amazon RDS Proxy]


A company is building an application for stock trading. The application needs sub-millisecond latency for processing trade requests. The company uses Amazon DynamoDB to store all the trading data that is used to process each trading request. A development team performs load testing on the application and finds that the data retrieval time is higher than expected. The development team needs a solution that reduces the data retrieval time with the least possible effort.

Which solution meets these requirements?

A.
Add local secondary indexes (LSIs) for the trading data.
B.
Store the trading data in Amazon S3 and use S3 Transfer Acceleration.
C.
Add retries with exponential backoff for DynamoDB queries.
D.
Use DynamoDB Accelerator (DAX) to cache the trading data.
Suggested answer: D

Explanation:

This solution will meet the requirements by using DynamoDB Accelerator (DAX), which is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10 times performance improvement - from milliseconds to microseconds - even at millions of requests per second. The developer can use DAX to cache the trading data that is used to process each trading request, which will reduce the data retrieval time with the least possible effort. Option A is not optimal because it will add local secondary indexes (LSIs) for the trading data, which may not improve the performance or reduce the latency of data retrieval, as LSIs are stored on the same partition as the base table and share the same provisioned throughput. Option B is not optimal because it will store the trading data in Amazon S3 and use S3 Transfer Acceleration, which is a feature that enables fast, easy, and secure transfers of files over long distances between S3 buckets and clients, not between DynamoDB and clients. Option C is not optimal because it will add retries with exponential backoff for DynamoDB queries, which is a strategy to handle transient errors by retrying failed requests with increasing delays, not by reducing data retrieval time.
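
The read-through behavior that DAX provides transparently looks roughly like this. This is a plain-Python sketch of the caching pattern, not the DAX client, which is a drop-in replacement for the DynamoDB client and needs no such code in the application.

```python
def cached_get(cache: dict, table_get, key):
    """Read-through cache: serve from memory on a hit, fall through to the
    table on a miss and remember the result. DAX applies this pattern
    transparently, serving hits in microseconds."""
    if key not in cache:
        cache[key] = table_get(key)
    return cache[key]
```

Repeated reads of the same hot trading items never reach the table, which is where the latency reduction comes from.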

Reference: [DynamoDB Accelerator (DAX)], [Local Secondary Indexes]

A developer is working on a Python application that runs on Amazon EC2 instances. The developer wants to enable tracing of application requests to debug performance issues in the code.

Which combination of actions should the developer take to achieve this goal? (Select TWO)

A.
Install the Amazon CloudWatch agent on the EC2 instances.
B.
Install the AWS X-Ray daemon on the EC2 instances.
C.
Configure the application to write JSON-formatted logs to /var/log/cloudwatch.
D.
Configure the application to write trace data to /var/log/xray.
E.
Install and configure the AWS X-Ray SDK for Python in the application.
Suggested answer: B, E

Explanation:

This solution will meet the requirements by using AWS X-Ray to enable tracing of application requests to debug performance issues in the code. AWS X-Ray is a service that collects data about requests that the applications serve, and provides tools to view, filter, and gain insights into that data. The developer can install the AWS X-Ray daemon on the EC2 instances, which is a software that listens for traffic on UDP port 2000, gathers raw segment data, and relays it to the X-Ray API. The developer can also install and configure the AWS X-Ray SDK for Python in the application, which is a library that enables instrumenting Python code to generate and send trace data to the X-Ray daemon. Option A is not optimal because it will install the Amazon CloudWatch agent on the EC2 instances, which is a software that collects metrics and logs from EC2 instances and on-premises servers, not application performance data. Option C is not optimal because it will configure the application to write JSON-formatted logs to /var/log/cloudwatch, which is not a valid path or destination for CloudWatch logs. Option D is not optimal because it will configure the application to write trace data to /var/log/xray, which is also not a valid path or destination for X-Ray trace data.
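
The daemon's wire format is simple: a UDP datagram on port 2000 containing a JSON header line followed by the segment document. Normally the X-Ray SDK builds and emits this for you; the sketch below constructs one by hand with placeholder field values, just to show what travels between the SDK and the daemon.

```python
import json

def build_segment_payload(name, trace_id, segment_id, start_time, end_time):
    """Build the datagram the X-Ray daemon expects on UDP port 2000:
    a {"format": "json", "version": 1} header line, a newline, then the
    segment document itself."""
    header = json.dumps({"format": "json", "version": 1})
    segment = {"name": name, "trace_id": trace_id, "id": segment_id,
               "start_time": start_time, "end_time": end_time}
    return (header + "\n" + json.dumps(segment)).encode("utf-8")
```

A send would then be a plain UDP sendto to 127.0.0.1:2000, where the installed daemon buffers segments and relays them to the X-Ray API.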

Reference: [AWS X-Ray], [Running the X-Ray Daemon on Amazon EC2]

A company has an application that runs as a series of AWS Lambda functions. Each Lambda function receives data from an Amazon Simple Notification Service (Amazon SNS) topic and writes the data to an Amazon Aurora DB instance.

To comply with an information security policy, the company must ensure that the Lambda functions all use a single securely encrypted database connection string to access Aurora.

Which solution will meet these requirements?

A.
Use IAM database authentication for Aurora to enable secure database connections for all the Lambda functions.
B.
Store the credentials and read the credentials from an encrypted Amazon RDS DB instance.
C.
Store the credentials in AWS Systems Manager Parameter Store as a secure string parameter.
D.
Use Lambda environment variables with a shared AWS Key Management Service (AWS KMS) key for encryption.
Suggested answer: A

Explanation:

This solution will meet the requirements by using IAM database authentication for Aurora, which enables using IAM roles or users to authenticate with Aurora databases instead of using passwords or other secrets. The developer can use IAM database authentication for Aurora to enable secure database connections for all the Lambda functions that access Aurora DB instance. The developer can create an IAM role with permission to connect to Aurora DB instance and attach it to each Lambda function. The developer can also configure Aurora DB instance to use IAM database authentication and enable encryption in transit using SSL certificates. This way, the Lambda functions can use a single securely encrypted database connection string to access Aurora without needing any secrets or passwords. Option B is not optimal because it will store the credentials and read them from an encrypted Amazon RDS DB instance, which may introduce additional costs and complexity for managing and accessing another RDS DB instance. Option C is not optimal because it will store the credentials in AWS Systems Manager Parameter Store as a secure string parameter, which may require additional steps or permissions to retrieve and decrypt the credentials from Parameter Store.

Option D is not optimal because it will use Lambda environment variables with a shared AWS Key Management Service (AWS KMS) key for encryption, which may not be secure or scalable as environment variables are stored as plain text unless encrypted with AWS KMS.
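
A hedged sketch of the IAM-auth connection flow: with boto3 the short-lived token comes from rds_client.generate_db_auth_token(...), and the driver would be something like PyMySQL. Both are passed in as callables here so the flow itself is self-contained and testable.

```python
def connect_with_iam(generate_token, connect, host, port, user):
    """Authenticate to Aurora with a short-lived IAM auth token instead of a
    stored password. The token is used as the password, and SSL/TLS is
    required on the connection when IAM authentication is in use."""
    token = generate_token(DBHostname=host, Port=port, DBUsername=user)
    return connect(host=host, port=port, user=user, password=token, ssl=True)
```

Every Lambda function sharing the same execution role obtains its own fresh token this way, so no long-lived secret exists anywhere in the deployment.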

Reference: [IAM Database Authentication for MySQL and PostgreSQL], [Using SSL/TLS to Encrypt a Connection to a DB Instance]

A developer is troubleshooting an Amazon API Gateway API. Clients are receiving HTTP 400 response errors when the clients try to access an endpoint of the API.

How can the developer determine the cause of these errors?

A.
Create an Amazon Kinesis Data Firehose delivery stream to receive API call logs from API Gateway. Configure Amazon CloudWatch Logs as the delivery stream's destination.
B.
Turn on AWS CloudTrail Insights and create a trail. Specify the Amazon Resource Name (ARN) of the trail for the stage of the API.
C.
Turn on AWS X-Ray for the API stage. Create an Amazon CloudWatch Logs log group. Specify the Amazon Resource Name (ARN) of the log group for the API stage.
D.
Turn on execution logging and access logging in Amazon CloudWatch Logs for the API stage. Create a CloudWatch Logs log group. Specify the Amazon Resource Name (ARN) of the log group for the API stage.
Suggested answer: D

Explanation:

This solution will meet the requirements by using Amazon CloudWatch Logs to capture and analyze the logs from API Gateway. Amazon CloudWatch Logs is a service that monitors, stores, and accesses log files from AWS resources. The developer can turn on execution logging and access logging in Amazon CloudWatch Logs for the API stage, which enables logging information about API execution and client access to the API. The developer can create a CloudWatch Logs log group, which is a collection of log streams that share the same retention, monitoring, and access control settings. The developer can specify the Amazon Resource Name (ARN) of the log group for the API stage, which instructs API Gateway to send the logs to the specified log group. The developer can then examine the logs to determine the cause of the HTTP 400 response errors. Option A is not optimal because it will create an Amazon Kinesis Data Firehose delivery stream to receive API call logs from API Gateway, which may introduce additional costs and complexity for delivering and processing streaming data. Option B is not optimal because it will turn on AWS CloudTrail Insights and create a trail, which is a feature that helps identify and troubleshoot unusual API activity or operational issues, not HTTP response errors. Option C is not optimal because it will turn on AWS X-Ray for the API stage, which is a service that helps analyze and debug distributed applications, not HTTP response errors.
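
Execution logging is turned on per stage through patch operations applied with boto3's apigateway update_stage call. The sketch below only builds the patchOperations payload; the /*/*  paths target every method in the stage.

```python
def execution_logging_patch_ops(log_level="INFO", data_trace=True):
    """Patch operations for
    apigateway.update_stage(restApiId=..., stageName=..., patchOperations=...)
    that enable execution logging for every method ('/*/*') in the stage.
    dataTrace logs full request/response bodies -- useful while chasing the
    400 errors, but it should be turned off again afterward."""
    return [
        {"op": "replace", "path": "/*/*/logging/loglevel", "value": log_level},
        {"op": "replace", "path": "/*/*/logging/dataTrace",
         "value": "true" if data_trace else "false"},
    ]
```

The stage also needs a CloudWatch Logs role ARN set at the account level before API Gateway will deliver the logs.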

Reference: [Setting Up CloudWatch Logging for a REST API], [CloudWatch Logs Concepts]

A company developed an API application on AWS by using Amazon CloudFront, Amazon API Gateway, and AWS Lambda. The API has a minimum of four requests every second. A developer notices that many API users run the same query by using the POST method. The developer wants to cache the POST requests to optimize the API resources.

Which solution will meet these requirements?

A.
Configure the CloudFront cache. Update the application to return cached content based upon the default request headers.
B.
Override the cache method in the selected stage of API Gateway. Select the POST method.
C.
Save the latest request response in the Lambda /tmp directory. Update the Lambda function to check the /tmp directory.
D.
Save the latest request in AWS Systems Manager Parameter Store. Modify the Lambda function to take the latest request response from Parameter Store.
Suggested answer: A

Explanation:

This solution will meet the requirements by using Amazon CloudFront, which is a content delivery network (CDN) service that speeds up the delivery of web content and APIs to end users. The developer can configure the CloudFront cache, which is a set of edge locations that store copies of popular or recently accessed content close to the viewers. The developer can also update the application to return cached content based upon the default request headers, which are a set of HTTP headers that CloudFront automatically forwards to the origin server and uses to determine whether an object in an edge location is still valid. By caching the POST requests, the developer can optimize the API resources and reduce the latency for repeated queries. Option B is not optimal because it will override the cache method in the selected stage of API Gateway, which is not possible or effective as API Gateway does not support caching for POST methods by default. Option C is not optimal because it will save the latest request response in Lambda /tmp directory, which is a local storage space that is available for each Lambda function invocation, not a cache that can be shared across multiple invocations or requests. Option D is not optimal because it will save the latest request in AWS Systems Manager Parameter Store, which is a service that provides secure and scalable storage for configuration data and secrets, not a cache for API responses.
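
The header-based caching idea in option A can be sketched as follows: a plain-Python illustration of how a cache derives one entry per distinct combination of request path and selected headers. The header list chosen here is a hypothetical example, not a CloudFront default.

```python
import hashlib

def cache_key(path, headers, vary_on=("accept", "accept-language")):
    """Derive a deterministic cache key from the request path plus the
    request headers the cache is configured to vary on. Two requests that
    agree on these fields share one cached response."""
    parts = [path] + [headers.get(name, "") for name in vary_on]
    return hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()
```

Requests that repeat the same query against the same path hash to the same key and are served from the edge cache instead of invoking API Gateway and Lambda again.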

Reference: [Amazon CloudFront], [Caching Content Based on Request Headers]

Total 292 questions