
Amazon DVA-C02 Practice Test - Questions Answers, Page 17


A developer is designing a serverless application for a game in which users register and log in through a web browser. The application makes requests on behalf of users to a set of AWS Lambda functions that run behind an Amazon API Gateway HTTP API.

The developer needs to implement a solution to register and log in users on the application's sign-in page. The solution must minimize operational overhead and must minimize ongoing management of user identities.

Which solution will meet these requirements?

A.
Create Amazon Cognito user pools for external social identity providers. Configure IAM roles for the identity pools.
B.
Program the sign-in page to create users' IAM groups with the IAM roles attached to the groups.
C.
Create an Amazon RDS for SQL Server DB instance to store the users and manage the permissions to the backend resources in AWS.
D.
Configure the sign-in page to register and store the users and their passwords in an Amazon DynamoDB table with an attached IAM policy.
Suggested answer: A

Explanation:

Amazon Cognito User Pools: A managed user directory service, simplifying user registration and login.

Social Identity Providers: Cognito supports integration with external providers (e.g., Google, Facebook), reducing development effort.

IAM Roles for Authorization: Cognito-managed IAM roles grant fine-grained access to AWS resources (like Lambda functions).

Operational Overhead: Cognito minimizes the need to manage user identities and credentials independently.

Amazon Cognito Documentation: https://docs.aws.amazon.com/cognito/

Cognito User Pools for Web Applications: https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-app-integration.html
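
For context, a minimal sketch of how the sign-in page's backend might call a Cognito user pool with boto3. The app client ID, user pool Region, and the use of the USER_PASSWORD_AUTH flow are illustrative assumptions (an app client without a client secret), not values from the question.

import boto3

cognito = boto3.client("cognito-idp", region_name="us-east-1")

APP_CLIENT_ID = "example-app-client-id"  # hypothetical user pool app client ID (no client secret)

def register_user(email, password):
    # Creates the user in the Cognito user pool; Cognito stores and manages the credentials.
    return cognito.sign_up(
        ClientId=APP_CLIENT_ID,
        Username=email,
        Password=password,
    )

def log_in_user(email, password):
    # Exchanges the credentials for JWT tokens that the browser can send to the API Gateway HTTP API.
    response = cognito.initiate_auth(
        ClientId=APP_CLIENT_ID,
        AuthFlow="USER_PASSWORD_AUTH",
        AuthParameters={"USERNAME": email, "PASSWORD": password},
    )
    return response["AuthenticationResult"]["IdToken"]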

A developer supports an application that accesses data in an Amazon DynamoDB table. One of the item attributes is expirationDate, in the timestamp format. The application uses this attribute to find items, archive them, and remove them from the table based on the timestamp value.

The application will be decommissioned soon, and the developer must find another way to implement this functionality. The developer needs a solution that will require the least amount of code to write.

Which solution will meet these requirements?

A.
Enable TTL on the expirationDate attribute in the table. Create a DynamoDB stream. Create an AWS Lambda function to process the deleted items. Create a DynamoDB trigger for the Lambda function.
B.
Create two AWS Lambda functions: one to delete the items and one to process the items. Create a DynamoDB stream. Use the DeleteItem API operation to delete the items based on the expirationDate attribute. Use the GetRecords API operation to get the items from the DynamoDB stream and process them.
C.
Create two AWS Lambda functions: one to delete the items and one to process the items. Create an Amazon EventBridge scheduled rule to invoke the Lambda functions. Use the DeleteItem API operation to delete the items based on the expirationDate attribute. Use the GetRecords API operation to get the items from the DynamoDB table and process them.
D.
Enable TTL on the expirationDate attribute in the table. Specify an Amazon Simple Queue Service (Amazon SQS) dead-letter queue as the target to delete the items. Create an AWS Lambda function to process the items.
Suggested answer: A

Explanation:

TTL for Automatic Deletion: DynamoDB's Time to Live (TTL) automatically deletes expired items without manual intervention.

DynamoDB Stream: Captures changes to the table, including deletions of expired items, triggering downstream actions.

Lambda for Processing: A Lambda function connected to the stream provides custom logic for handling the deleted items.

Code Efficiency: This solution leverages native DynamoDB features and stream-based processing, minimizing the need for custom code.

DynamoDB TTL Documentation: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html

DynamoDB Streams Documentation: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html
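
As an illustration, a minimal Lambda handler that a DynamoDB stream trigger could invoke. It assumes the stream is configured with OLD_IMAGE (or NEW_AND_OLD_IMAGES) so the expired item's attributes are available, and archive_item is a placeholder for whatever archiving logic is needed.

import json

def archive_item(item):
    # Placeholder for the archiving logic (e.g., write the item to Amazon S3).
    print("Archiving item:", json.dumps(item))

def lambda_handler(event, context):
    for record in event["Records"]:
        # TTL deletions arrive as REMOVE events performed by the DynamoDB service principal.
        is_remove = record["eventName"] == "REMOVE"
        is_ttl_delete = (
            record.get("userIdentity", {}).get("principalId") == "dynamodb.amazonaws.com"
        )
        if is_remove and is_ttl_delete:
            expired_item = record["dynamodb"].get("OldImage", {})
            archive_item(expired_item)
    return {"processed": len(event["Records"])}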



A developer uses AWS CloudFormation to deploy an Amazon API Gateway API and an AWS Step Functions state machine. The state machine must reference the API Gateway API after the CloudFormation template is deployed. The developer needs a solution that uses the state machine to reference the API Gateway endpoint.

Which solution will meet these requirements MOST cost-effectively?

A.
Configure the CloudFormation template to reference the API endpoint in the DefinitionSubstitutions property for the AWS::StepFunctions::StateMachine resource.
B.
Configure the CloudFormation template to store the API endpoint in an environment variable for the AWS::StepFunctions::StateMachine resource. Configure the state machine to reference the environment variable.
C.
Configure the CloudFormation template to store the API endpoint in a standard AWS::SecretsManager::Secret resource. Configure the state machine to reference the resource.
D.
Configure the CloudFormation template to store the API endpoint in a standard AWS::AppConfig::ConfigurationProfile resource. Configure the state machine to reference the resource.
Suggested answer: A

Explanation:

CloudFormation and Dynamic Reference: The DefinitionSubstitutions property of the AWS::StepFunctions::StateMachine resource allows you to pass values, such as the API Gateway endpoint, into the state machine definition when CloudFormation creates or updates the stack.

Cost-Effectiveness: This solution is cost-effective because it leverages CloudFormation's built-in capabilities, avoiding the need for additional services like Secrets Manager or AppConfig.

AWS Step Functions State Machine: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-stepfunctions-statemachine.html

CloudFormation DefinitionSubstitutions: https://github.com/aws-cloudformation/aws-cloudformation-resource-providers-stepfunctions/issues/14

A developer created an AWS Lambda function that performs a series of operations that involve multiple AWS services. The function's duration is higher than normal. To determine the cause of the issue, the developer must investigate traffic between the services without changing the function code.

Which solution will meet these requirements?

A.
Enable AWS X-Ray active tracing in the Lambda function. Review the logs in X-Ray.
B.
Configure AWS CloudTrail. View the trail logs that are associated with the Lambda function.
C.
Review the AWS Config logs in Amazon CloudWatch.
D.
Review the Amazon CloudWatch logs that are associated with the Lambda function.
Suggested answer: A

Explanation:

Tracing Distributed Systems: AWS X-Ray is designed to trace requests across services, helping identify bottlenecks in distributed applications like this one.

No Code Changes: Active tracing can be enabled in the Lambda function's configuration, so no change to the function code is required.

Identifying Bottlenecks: Analyzing X-Ray traces and logs reveals the latency in communications between the AWS services that is causing the high duration.

AWS X-Ray: https://aws.amazon.com/xray/

X-Ray and Lambda: https://docs.aws.amazon.com/xray/latest/devguide/xray-services-lambda.html
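
For illustration, a short boto3 sketch that turns on active tracing through the function's configuration rather than its code; the function name is a placeholder. The same setting is available in the Lambda console and in infrastructure-as-code templates.

import boto3

lambda_client = boto3.client("lambda")

# Enable X-Ray active tracing on the existing function without touching its code.
lambda_client.update_function_configuration(
    FunctionName="example-function",  # hypothetical function name
    TracingConfig={"Mode": "Active"},
)
# After the next invocations, review the service map and traces in the X-Ray console.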

A developer designed an application on an Amazon EC2 instance. The application makes API requests to objects in an Amazon S3 bucket.

Which combination of steps will ensure that the application makes the API requests in the MOST secure manner? (Select TWO.)

A.
Create an IAM user that has permissions to the S3 bucket. Add the user to an IAM group.
B.
Create an IAM role that has permissions to the S3 bucket.
C.
Add the IAM role to an instance profile. Attach the instance profile to the EC2 instance.
D.
Create an IAM role that has permissions to the S3 bucket. Assign the role to an IAM group.
E.
Store the credentials of the IAM user in the environment variables on the EC2 instance.
Suggested answer: B, C

Explanation:

IAM Roles for EC2: IAM roles are the recommended way to provide AWS credentials to applications running on EC2 instances. Here's how this works:

You create an IAM role with the necessary permissions to access the target S3 bucket.

You create an instance profile and associate the IAM role with this profile.

When launching the EC2 instance, you attach this instance profile.

Temporary Security Credentials: When the application on the EC2 instance needs to access S3, it doesn't directly use access keys. Instead, the AWS SDK running on the instance retrieves temporary security credentials associated with the role. These are rotated automatically by AWS.

IAM Roles for Amazon EC2: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html

Temporary Security Credentials: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
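
To make the credential flow concrete, a minimal sketch of application code running on the instance; the bucket name and object key are placeholders. Because the attached instance profile supplies temporary credentials through the instance metadata service, boto3 needs no access keys in code, config files, or environment variables.

import boto3

# boto3 automatically picks up the temporary credentials provided by the
# instance profile (and its IAM role); they are rotated by AWS.
s3 = boto3.client("s3")

response = s3.get_object(
    Bucket="example-application-bucket",  # hypothetical bucket name
    Key="data/report.csv",                # hypothetical object key
)
print(response["Body"].read()[:100])

# Optional check: confirm the application is running under the role, not an IAM user.
print(boto3.client("sts").get_caller_identity()["Arn"])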

A developer is working on an ecommerce website. The developer wants to review server logs without logging in to each of the application servers individually. The website runs on multiple Amazon EC2 instances, is written in Python, and needs to be highly available.

How can the developer update the application to meet these requirements with MINIMUM changes?

A.
Rewrite the application to be cloud native and to run on AWS Lambda, where the logs can be reviewed in Amazon CloudWatch.
B.
Set up centralized logging by using Amazon OpenSearch Service, Logstash, and OpenSearch Dashboards.
C.
Scale down the application to one larger EC2 instance where only one instance is recording logs.
D.
Install the unified Amazon CloudWatch agent on the EC2 instances. Configure the agent to push the application logs to CloudWatch.
Suggested answer: D

Explanation:

Centralized Logging Benefits: Centralized logging is essential for operational visibility in scalable systems, especially those using multiple EC2 instances like our e-commerce website. CloudWatch provides this capability, along with other monitoring features.

CloudWatch Agent: This is the best way to send custom application logs from EC2 instances to CloudWatch. Here's the process:

Install the CloudWatch agent on each EC2 instance.

Configure the agent with a configuration file, specifying:

Which log files to collect.

The format in which to send logs to CloudWatch (e.g., JSON).

The specific CloudWatch Logs log group and log stream for these logs.

Viewing and Analyzing Logs: Once the agent is pushing logs, use the CloudWatch Logs console or API:

View and search the logs across all instances.

Set up alarms based on log events.

Use CloudWatch Logs Insights for sophisticated queries and analysis.
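
As a sketch of the API route, the following uses boto3 to run a CloudWatch Logs Insights query across the centralized log group; the log group name and the query string are illustrative placeholders, assuming the agent is already shipping the application logs there.

import time
import boto3

logs = boto3.client("logs")

# Start a Logs Insights query over the last hour of the centralized application log group.
query = logs.start_query(
    logGroupName="/ecommerce/app",  # hypothetical log group populated by the CloudWatch agent
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString="fields @timestamp, @message | filter @message like /ERROR/ | sort @timestamp desc | limit 20",
)

# Poll until the query finishes, then print the matching log events from all instances.
while True:
    results = logs.get_query_results(queryId=query["queryId"])
    if results["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in results["results"]:
    print({field["field"]: field["value"] for field in row})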

Amazon CloudWatch Logs: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html

Unified CloudWatch Agent: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html

CloudWatch Logs Insights: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html

A company has an existing application that has hardcoded database credentials. A developer needs to modify the existing application. The application is deployed in two AWS Regions with an active-passive failover configuration to meet the company's disaster recovery strategy.

The developer needs a solution to store the credentials outside the code. The solution must comply with the company's disaster recovery strategy.

Which solution will meet these requirements in the MOST secure way?

A.
Store the credentials in AWS Secrets Manager in the primary Region. Enable secret replication to the secondary Region. Update the application to use the Amazon Resource Name (ARN) based on the Region.
B.
Store the credentials in AWS Systems Manager Parameter Store in the primary Region. Enable parameter replication to the secondary Region. Update the application to use the Amazon Resource Name (ARN) based on the Region.
C.
Store the credentials in a config file. Upload the config file to an S3 bucket in the primary Region. Enable Cross-Region Replication (CRR) to an S3 bucket in the secondary Region. Update the application to access the config file from the S3 bucket based on the Region.
D.
Store the credentials in a config file. Upload the config file to an Amazon Elastic File System (Amazon EFS) file system. Update the application to use the Amazon EFS file system Regional endpoints to access the config file in the primary and secondary Regions.
Suggested answer: A

Explanation:

AWS Secrets Manager is a service that allows you to store and manage secrets, such as database credentials, API keys, and passwords, in a secure and centralized way. It also provides features such as automatic secret rotation, auditing, and monitoring. By using AWS Secrets Manager, you avoid hardcoding credentials in your code, which is a bad security practice and makes the credentials difficult to update. You can also replicate your secrets to another Region, which is useful for disaster recovery purposes. To access your secrets from your application, you can use the ARN of the secret, which is a unique identifier that includes the Region name. This way, your application can use the appropriate secret based on the Region where it is deployed.

AWS Secrets Manager

Replicating and sharing secrets

Using your own encryption keys
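
A minimal sketch of how the application might resolve the secret in whichever Region it is running. A replicated secret keeps the same name in each Region, so the application can look it up by name (or construct the Region-specific ARN); the secret name and the use of the AWS_REGION environment variable are illustrative assumptions.

import json
import os
import boto3

# A Region-aware client resolves the local replica of the replicated secret.
region = os.environ.get("AWS_REGION", "us-east-1")
secrets = boto3.client("secretsmanager", region_name=region)

response = secrets.get_secret_value(SecretId="prod/database-credentials")  # hypothetical secret name
credentials = json.loads(response["SecretString"])

db_user = credentials["username"]
db_password = credentials["password"]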

A developer is creating an AWS Lambda function that searches for items from an Amazon DynamoDB table that contains customer contact information. The DynamoDB table items have the customer's email_address as the partition key and additional properties such as customer_type, name, and job_title.

The Lambda function runs whenever a user types a new character into the customer_type text input. The developer wants the search to return partial matches on the email_address property for a particular customer_type. The developer does not want to recreate the DynamoDB table.

What should the developer do to meet these requirements?

A.
Add a global secondary index (GSI) to the DynamoDB table with customer_type as the partition key and email_address as the sort key. Perform a query operation on the GSI by using the begins_with key condition expression with the email_address property.
B.
Add a global secondary index (GSI) to the DynamoDB table with email_address as the partition key and customer_type as the sort key. Perform a query operation on the GSI by using the begins_with key condition expression with the email_address property.
C.
Add a local secondary index (LSI) to the DynamoDB table with customer_type as the partition key and email_address as the sort key. Perform a query operation on the LSI by using the begins_with key condition expression with the email_address property.
D.
Add a local secondary index (LSI) to the DynamoDB table with job_title as the partition key and email_address as the sort key. Perform a query operation on the LSI by using the begins_with key condition expression with the email_address property.
Suggested answer: A

Explanation:

Understand the Problem: The existing DynamoDB table has email_address as the partition key. Searching by customer_type requires a different data access pattern. We need an efficient way to query for partial matches on email_address based on customer_type.

Why Global Secondary Index (GSI):

GSIs allow you to define a different partition key and sort key from the main table, enabling new query patterns.

In this case, having customer_type as the GSI's partition key groups all emails with the same customer type together.

Using email_address as the sort key allows ordering within each customer type, which enables the partial matching.

Querying the GSI:

You perform a query operation on the GSI, not the original table.

Use the begins_with key condition expression on the GSI's sort key (email_address) to find partial matches as the user types in the customer_type field.

DynamoDB Global Secondary Indexes: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html

DynamoDB Query Operation
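
A minimal boto3 sketch of the GSI query; the table name and the index name (customer_type-email_address-index) are placeholders for whatever names the developer chooses when adding the index.

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("CustomerContacts")  # hypothetical table name

def search_emails(customer_type, email_prefix):
    # Query the GSI: exact match on the partition key (customer_type),
    # begins_with on the sort key (email_address) for partial matches.
    response = table.query(
        IndexName="customer_type-email_address-index",  # hypothetical GSI name
        KeyConditionExpression=(
            Key("customer_type").eq(customer_type)
            & Key("email_address").begins_with(email_prefix)
        ),
    )
    return [item["email_address"] for item in response["Items"]]

print(search_emails("premium", "jo"))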

A developer is deploying a company's application to Amazon EC2 instances. The application generates gigabytes of data files each day. The files are rarely accessed, but the files must be available to the application's users within minutes of a request during the first year of storage. The company must retain the files for 7 years.

How can the developer implement the application to meet these requirements MOST cost-effectively?

A.
Store the files in an Amazon S3 bucket. Use the S3 Glacier Instant Retrieval storage class. Create an S3 Lifecycle policy to transition the files to the S3 Glacier Deep Archive storage class after 1 year.
B.
Store the files in an Amazon S3 bucket. Use the S3 Standard storage class. Create an S3 Lifecycle policy to transition the files to the S3 Glacier Flexible Retrieval storage class after 1 year.
C.
Store the files on an Amazon Elastic Block Store (Amazon EBS) volume. Use Amazon Data Lifecycle Manager (Amazon DLM) to create snapshots of the EBS volumes and to store those snapshots in Amazon S3.
D.
Store the files on an Amazon Elastic File System (Amazon EFS) mount. Configure EFS lifecycle management to transition the files to the EFS Standard-Infrequent Access (Standard-IA) storage class after 1 year.
Suggested answer: A

Explanation:

Amazon S3 Glacier Instant Retrieval is an archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds. With S3 Glacier Instant Retrieval, you can save up to 68% on storage costs compared to using the S3 Standard-Infrequent Access (S3 Standard-IA) storage class, when your data is accessed once per quarter. https://aws.amazon.com/s3/storage-classes/glacier/instant-retrieval/

Understanding Storage Requirements:

Files are large and infrequently accessed, but need to be available within minutes when requested in the first year.

Long-term (7-year) retention is required.

Cost-effectiveness is a top priority.

Why S3 Glacier Instant Retrieval:

Matches the retrieval requirements (access within minutes).

More cost-effective than S3 Standard for infrequently accessed data.

Simpler to use than traditional Glacier where retrievals take hours.

Why S3 Glacier Deep Archive:

Most cost-effective S3 storage class for long term archival.

Meets the 7-year retention requirement.

S3 Lifecycle Policy:

Automate the transition from Glacier Instant Retrieval to Glacier Deep Archive after one year.

Optimize costs by matching storage classes to access patterns.

Amazon S3 Storage Classes: https://aws.amazon.com/s3/storage-classes/

S3 Glacier Instant Retrieval

S3 Glacier Deep Archive

S3 Lifecycle Policies: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html
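
A brief boto3 sketch of both halves of the solution: uploading with the Glacier Instant Retrieval storage class and a lifecycle rule that moves objects to Deep Archive after one year. The bucket name, prefix, and the optional 7-year expiration are illustrative assumptions.

import boto3

s3 = boto3.client("s3")
BUCKET = "example-data-files-bucket"  # hypothetical bucket name

# Upload new data files directly into the S3 Glacier Instant Retrieval storage class.
s3.put_object(
    Bucket=BUCKET,
    Key="daily/2024-06-01/data.bin",
    Body=b"...",
    StorageClass="GLACIER_IR",
)

# Lifecycle rule: after 1 year, transition the files to S3 Glacier Deep Archive;
# optionally expire them once the 7-year retention period (about 2,555 days) ends.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "daily/"},
                "Transitions": [{"Days": 365, "StorageClass": "DEEP_ARCHIVE"}],
                "Expiration": {"Days": 2555},
            }
        ]
    },
)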

A developer is creating a serverless application that uses an AWS Lambda function. The developer will use AWS CloudFormation to deploy the application. The application will write logs to Amazon CloudWatch Logs. The developer has created a log group in a CloudFormation template for the application to use. The developer needs to modify the CloudFormation template to make the name of the log group available to the application at runtime.

Which solution will meet this requirement?

A.
Use the AWS::Include transform in CloudFormation to provide the log group's name to the application.
B.
Pass the log group's name to the application in the user data section of the CloudFormation template.
C.
Use the CloudFormation template's Mappings section to specify the log group's name for the application.
D.
Pass the log group's Amazon Resource Name (ARN) as an environment variable to the Lambda function.
Suggested answer: D

Explanation:

CloudFormation and Lambda Environment Variables:

CloudFormation is an excellent tool to manage infrastructure as code, including the log group resource.

Lambda functions can access environment variables at runtime, making them a suitable way to pass configuration information like the log group ARN.

CloudFormation Template Modification:

In your CloudFormation template, define the log group resource.

In the Lambda function resource, add an Environment section (YAML):

Environment:
  Variables:
    LOG_GROUP_ARN: !GetAtt LogGroupResourceName.Arn

The Fn::GetAtt intrinsic function retrieves the log group's ARN, which CloudFormation generates during stack creation. (A !Ref to an AWS::Logs::LogGroup resource returns the log group's name instead.)

Using the ARN in Your Lambda Function:

Within your Lambda code, access the LOG_GROUP_ARN environment variable.

Configure your logging library (e.g., Python's logging module) to send logs to the specified log group.
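
A minimal sketch of the Lambda side, assuming the environment variable name LOG_GROUP_ARN from the template snippet above; the way the log group name is derived from the ARN is illustrative.

import logging
import os

# Read the value that CloudFormation injected into the function's environment.
LOG_GROUP_ARN = os.environ["LOG_GROUP_ARN"]
# A log group ARN such as arn:aws:logs:us-east-1:123456789012:log-group:/app/example:*
# ends with the log group name, which can be extracted if the application needs it.
LOG_GROUP_NAME = LOG_GROUP_ARN.split(":log-group:")[-1].rstrip(":*")

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Lambda forwards logging output to CloudWatch Logs automatically; the ARN is
    # available here for any explicit CloudWatch Logs API calls the application makes.
    logger.info("Using log group %s (%s)", LOG_GROUP_NAME, LOG_GROUP_ARN)
    return {"logGroup": LOG_GROUP_NAME}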

AWS Lambda Environment Variables: https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html

CloudFormation !Ref Intrinsic Function: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-ref.html
