Amazon DVA-C01 Practice Test - Questions Answers, Page 7
What are the steps to using the AWS CLI to launch a templatized serverless application?

A. Use AWS CloudFormation get-template then CloudFormation execute-change-set.
B. Use AWS CloudFormation validate-template then CloudFormation create-change-set.
C. Use AWS CloudFormation package then CloudFormation deploy.
D. Use AWS CloudFormation create-stack then CloudFormation update-stack.
Suggested answer: C

Explanation:

https://docs.aws.amazon.com/cli/latest/reference/cloudformation/package.html
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-getting-started-hello-world.html
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-package.html
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-deploy.html
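The package command uploads local artifacts to Amazon S3 and writes out a transformed template; the deploy command then creates and executes a change set for the stack. A rough boto3 sketch of what the deploy step does under the hood (stack, change-set, and file names are placeholders):

```python
# Hypothetical sketch of the deploy step, after `aws cloudformation package`
# has produced packaged-template.yaml with S3-hosted artifacts.
import boto3

cfn = boto3.client("cloudformation")

with open("packaged-template.yaml") as f:  # placeholder file name
    template_body = f.read()

# deploy first creates a change set for the stack...
cfn.create_change_set(
    StackName="my-serverless-app",          # placeholder stack name
    ChangeSetName="deploy-change-set",
    ChangeSetType="CREATE",                 # or "UPDATE" for an existing stack
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_IAM"],
)
cfn.get_waiter("change_set_create_complete").wait(
    StackName="my-serverless-app", ChangeSetName="deploy-change-set"
)

# ...and then executes it.
cfn.execute_change_set(
    StackName="my-serverless-app", ChangeSetName="deploy-change-set"
)
```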

A Developer is creating a web application that requires authentication, but also needs to support guest access to provide users limited access without having to authenticate. What service can provide support for the application to allow guest access?

A. IAM temporary credentials using AWS STS.
B. Amazon Directory Service
C. Amazon Cognito with unauthenticated access enabled
D. IAM with SAML integration
Suggested answer: C

Explanation:

Amazon Cognito identity pools support unauthenticated (guest) identities, which provide temporary, limited-privilege AWS credentials to users who have not signed in, while authenticated users receive credentials scoped to a separate role.
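A rough boto3 sketch of the guest flow against an identity pool that has unauthenticated identities enabled (the pool ID is a placeholder):

```python
import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")

# With no Logins map, Cognito returns an unauthenticated (guest) identity,
# provided the identity pool allows unauthenticated identities.
identity = cognito.get_id(
    IdentityPoolId="us-east-1:00000000-0000-0000-0000-000000000000"  # placeholder
)

# Exchange the identity for temporary, limited-privilege AWS credentials
# scoped by the identity pool's unauthenticated IAM role.
creds = cognito.get_credentials_for_identity(IdentityId=identity["IdentityId"])
print(creds["Credentials"]["AccessKeyId"], creds["Credentials"]["Expiration"])
```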

An application takes 40 seconds to process instructions received in an Amazon SQS message.

Assuming the SQS queue is configured with the default VisibilityTimeout value, what is the BEST way, upon receiving a message, to ensure that no other instances can retrieve a message that has already been processed or is currently being processed?

A. Use the ChangeMessageVisibility API to increase the VisibilityTimeout, then use the DeleteMessage API to delete the message.
B. Use the DeleteMessage API call to delete the message from the queue, then call DeleteQueue API to remove the queue.
C. Use the ChangeMessageVisibility API to decrease the timeout value, then use the DeleteMessage API to delete the message.
D. Use the DeleteMessageVisibility API to cancel the VisibilityTimeout, then use the DeleteMessage API to delete the message.
Suggested answer: A

Explanation:

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html

The default visibility timeout is 30 seconds, which is shorter than the 40 seconds of processing time, so the consumer should first extend it with ChangeMessageVisibility. Messages are not removed from an SQS queue automatically; it is the consumer's responsibility to delete each message once it has been consumed and processed.
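A minimal boto3 sketch of this pattern (the queue URL and timeout value are illustrative):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for message in resp.get("Messages", []):
    # Extend the visibility timeout beyond the ~40-second processing time so
    # no other consumer receives this message while it is being worked on.
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=message["ReceiptHandle"],
        VisibilityTimeout=60,
    )

    # ... process the message (takes about 40 seconds) ...

    # Delete the message once processing succeeds so it is never redelivered.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```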

A Developer has implemented a Lambda function that adds new customers to an RDS database and is expected to run hundreds of times per hour. The Lambda function is configured to use 512 MB of RAM and is based on the following pseudo code:

After testing the Lambda function, the Developer notices that the Lambda execution time is much longer than expected. What should the Developer do to improve performance?

A. Increase the amount of RAM allocated to the Lambda function, which will increase the number of threads the Lambda can use.
B. Increase the size of the RDS database to allow for an increased number of database connections each hour.
C. Move the database connection and close statement out of the handler. Place the connection in the global space.
D. Replace RDS with Amazon DynamoDB to implement control over the number of writes per second.
Suggested answer: C

Explanation:

Refer to the AWS documentation on Lambda best practices:

Take advantage of Execution Context reuse to improve the performance of your function. Make sure any externalized configuration or dependencies that your code retrieves are stored and referenced locally after initial execution. Limit the re-initialization of variables/objects on every invocation.

Instead use static initialization/constructor, global/static variables and singletons. Keep alive and reuse connections (HTTP, database, etc.) that were established during a previous invocation.
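A hedged sketch of this pattern, assuming a MySQL-based RDS instance and the PyMySQL driver (connection settings are placeholder environment variables). The connection is created once per execution environment and reused across invocations instead of being opened and closed inside the handler:

```python
import os
import pymysql  # assumed driver; any RDS-compatible client works the same way

# Created once per execution environment (cold start) and reused by every
# subsequent invocation that lands on the same container.
connection = pymysql.connect(
    host=os.environ["DB_HOST"],          # placeholder environment variables
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database=os.environ["DB_NAME"],
)

def handler(event, context):
    # The handler only uses the existing connection; it does not open or
    # close one on every call.
    with connection.cursor() as cursor:
        cursor.execute(
            "INSERT INTO customers (name, email) VALUES (%s, %s)",
            (event["name"], event["email"]),
        )
    connection.commit()
    return {"status": "ok"}
```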

A current architecture uses many Lambda functions invoking one another as a large state machine.

The coordination of this state machine is legacy custom code that breaks easily.

Which AWS Service can help refactor and manage the state machine?

A. AWS Data Pipeline
B. AWS SNS with AWS SQS
C. Amazon Elastic MapReduce
D. AWS Step Functions
Suggested answer: D

Explanation:

https://aws.amazon.com/step-functions/
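For illustration, a minimal boto3 sketch (with placeholder Lambda and IAM role ARNs) that registers a two-step Amazon States Language workflow, replacing hand-rolled coordination code:

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Two Lambda tasks chained by Step Functions instead of custom coordination code.
definition = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ValidateOrder",
            "Next": "ChargeCustomer",
        },
        "ChargeCustomer": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ChargeCustomer",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="order-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",  # placeholder
)
```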

A Developer is asked to implement a caching layer in front of Amazon RDS. Cached content is expensive to regenerate in case of service failure. Which implementation below would work while maintaining maximum uptime?

A. Implement Amazon ElastiCache Redis in Cluster Mode
B. Install Redis on an Amazon EC2 instance.
C. Implement Amazon ElastiCache Memcached.
D. Migrate the database to Amazon Redshift.
Suggested answer: A

Explanation:

Redis in cluster mode provides replication and automatic failover, so the expensive-to-regenerate cache survives a node failure. Memcached does not replicate data, and a self-managed Redis instance on a single EC2 instance is a single point of failure.

https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/SelectEngine.html
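For example, a cache-aside sketch assuming the redis-py client and a placeholder ElastiCache endpoint (a cluster-mode-enabled cluster would use the cluster-aware client from the same library); load_product_from_rds is a hypothetical database query:

```python
import json
import redis  # redis-py, assumed client library

cache = redis.Redis(
    host="my-cluster.xxxxxx.clustercfg.use1.cache.amazonaws.com",  # placeholder endpoint
    port=6379,
)

def get_product(product_id: str) -> dict:
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    record = load_product_from_rds(product_id)  # hypothetical query against RDS
    # Cache the expensive-to-regenerate record with a TTL.
    cache.setex(key, 3600, json.dumps(record))
    return record
```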

A large e-commerce site is being designed to deliver static objects from Amazon S3. The Amazon S3 bucket will serve more than 300 GET requests per second. What should be done to optimize performance? (Select TWO.)

A. Integrate Amazon CloudFront with Amazon S3.
B. Enable Amazon S3 cross-region replication.
C. Delete expired Amazon S3 server log files.
D. Configure Amazon S3 lifecycle rules.
E. Randomize Amazon S3 key name prefixes.
Suggested answer: A, E

Explanation:

CloudFront definitely helps: it serves repeated GET requests from edge caches instead of the bucket. Randomizing key-name prefixes is still a valid way to improve performance because it enables parallel reads; the question does not mention prefix hashing, but, for instance, prefixes 1/, 2/, 3/, 4/, 5/ provide five parallel read streams instead of every object sharing a single prefix such as dev/.

https://docs.aws.amazon.com/AmazonS3/latest/dev/optimizing-performance.html
"There are no limits to the number of prefixes in a bucket. You can increase your read or write performance by parallelizing reads. For example, if you create 10 prefixes in an Amazon S3 bucket to parallelize reads, you could scale your read performance to 55,000 read requests per second." The assumption that prefixes don't matter is therefore incorrect; note, however, that Amazon S3 performance guidelines previously recommended randomizing prefix naming with hashed characters, and you no longer have to randomize prefix naming for performance and can use sequential date-based naming for your prefixes.
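If prefix randomization is applied, it typically means deriving a short hashed prefix from each key name; a small illustrative sketch (the function name and prefix width are arbitrary):

```python
import hashlib

def randomized_key(object_name: str, prefix_width: int = 2) -> str:
    # A short hash-derived prefix spreads keys across multiple prefixes,
    # allowing S3 reads and writes to be parallelized across them.
    digest = hashlib.md5(object_name.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_width]}/{object_name}"

# Prefix derived from the hash of the key name, e.g. "<xx>/images/product-123.jpg"
print(randomized_key("images/product-123.jpg"))
```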

A company is building a stock trading application that requires sub-millisecond latency in processing trading requests. Amazon DynamoDB is used to store all the trading data that is used to process each request. After load testing the application, the development team found that due to data retrieval times, the latency requirement is not satisfied. Because of sudden high spikes in the number of requests, DynamoDB read capacity has to be significantly over-provisioned to avoid throttling.

What steps should be taken to meet latency requirements and reduce the cost of running the application?

A. Add Global Secondary Indexes for trading data.
B. Store trading data in Amazon S3 and use Transfer Acceleration.
C. Add retries with exponential back-off for DynamoDB queries
D. Use DynamoDB Accelerator to cache trading data.
Suggested answer: D

Explanation:

Refer to the AWS documentation on Amazon DynamoDB Accelerator (DAX):

Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement (from milliseconds to microseconds) even at millions of requests per second. DAX does all the heavy lifting required to add in-memory acceleration to your DynamoDB tables, without requiring developers to manage cache invalidation, data population, or cluster management. Now you can focus on building great applications for your customers without worrying about performance at scale.
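A rough sketch using the amazondax Python client, which exposes the same low-level API as DynamoDB; the cluster endpoint, table, and key are placeholders, and constructor details may vary by SDK version:

```python
import botocore.session
from amazondax import AmazonDaxClient  # assumed DAX client package

session = botocore.session.get_session()
# DAX exposes a DynamoDB-compatible API, so reads are served from the
# in-memory cache whenever possible.
dax = AmazonDaxClient(
    session,
    region_name="us-east-1",
    endpoints=["my-dax-cluster.xxxxxx.dax-clusters.us-east-1.amazonaws.com:8111"],  # placeholder
)

response = dax.get_item(
    TableName="TradingData",                  # placeholder table
    Key={"TradeId": {"S": "trade-0001"}},     # placeholder key
)
print(response.get("Item"))
```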

A Developer needs temporary access to resources in a second account.

What is the MOST secure way to achieve this?

A. Use the Amazon Cognito user pools to get short-lived credentials for the second account.
B. Create a dedicated IAM access key for the second account, and send it by mail.
C. Create a cross-account access role, and use sts:AssumeRole API to get short-lived credentials.
D. Establish trust, and add an SSH key for the second account to the IAM user.
Suggested answer: C
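Explanation:

A role in the second account that trusts the Developer's account can be assumed with sts:AssumeRole to obtain short-lived credentials. A minimal boto3 sketch (the role ARN is a placeholder):

```python
import boto3

sts = boto3.client("sts")

# Assume the cross-account role defined in the second account.
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::210987654321:role/CrossAccountAccessRole",  # placeholder
    RoleSessionName="developer-temporary-access",
    DurationSeconds=3600,
)

creds = assumed["Credentials"]  # short-lived AccessKeyId / SecretAccessKey / SessionToken
s3_in_second_account = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3_in_second_account.list_buckets()["Buckets"])
```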

An application reads data from an Amazon DynamoDB table. Several times a day, for a period of 15 seconds, the application receives multiple ProvisionedThroughputExceeded errors. How should this exception be handled?

A. Create a new global secondary index for the table to help with the additional requests.
B. Retry the failed read requests with exponential backoff.
C. Immediately retry the failed read requests.
D. Use the DynamoDB “UpdateItem” API to increase the provisioned throughput capacity of the table.
Suggested answer: B

Explanation:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html
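A simple retry loop with exponential backoff and jitter (the table name, key, and retry limits are illustrative):

```python
import random
import time

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

def read_with_backoff(table: str, key: dict, max_attempts: int = 5) -> dict:
    for attempt in range(max_attempts):
        try:
            return dynamodb.get_item(TableName=table, Key=key)
        except ClientError as error:
            if error.response["Error"]["Code"] != "ProvisionedThroughputExceededException":
                raise
            # Wait 2^attempt * 100 ms plus jitter before retrying, so a short
            # 15-second spike can drain without failing requests.
            time.sleep((2 ** attempt) * 0.1 + random.uniform(0, 0.1))
    raise RuntimeError("Read failed after retries")

item = read_with_backoff("example-table", {"Id": {"S": "item-0001"}})
```

The AWS SDKs already retry throttled requests with backoff by default; an explicit loop like this simply makes the behavior visible and tunable.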
