Amazon DVA-C02 Practice Test - Questions Answers, Page 14

A developer is creating a new REST API by using Amazon API Gateway and AWS Lambda. The development team tests the API and validates responses for the known use cases before deploying the API to the production environment.

The developer wants to make the REST API available for testing by using API Gateway locally.

Which AWS Serverless Application Model Command Line Interface (AWS SAM CLI) subcommand will meet these requirements?

A. sam local invoke
B. sam local generate-event
C. sam local start-lambda
D. sam local start-api
Suggested answer: D

Explanation:

The sam local start-api subcommand allows you to run your serverless application locally for quick development and testing. It creates a local HTTP server that acts as a proxy for API Gateway and invokes your Lambda functions based on the AWS SAM template. You can use the sam local start-api subcommand to test your REST API locally by sending HTTP requests to the local endpoint.
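Assuming a standard SAM project with a template.yaml in the current directory, the local testing workflow might look like this (the port and the /hello route are illustrative):

```shell
# Build the application, then start a local HTTP server that emulates
# API Gateway and routes requests to the Lambda functions in template.yaml.
sam build
sam local start-api --port 3000

# From another terminal, send a test request to a route defined in the template:
curl http://127.0.0.1:3000/hello
```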

A developer is writing an application that will retrieve sensitive data from a third-party system. The application will format the data into a PDF file. The PDF file could be more than 1 MB. The application will encrypt the data to disk by using AWS Key Management Service (AWS KMS). The application will decrypt the file when a user requests to download it. The retrieval and formatting portions of the application are complete.

The developer needs to use the GenerateDataKey API to encrypt the PDF file so that the PDF file can be decrypted later. The developer needs to use an AWS KMS symmetric customer managed key for encryption.

Which solutions will meet these requirements?

A. Write the encrypted key from the GenerateDataKey API to disk for later use. Use the plaintext key from the GenerateDataKey API and a symmetric encryption algorithm to encrypt the file.
B. Write the plaintext key from the GenerateDataKey API to disk for later use. Use the encrypted key from the GenerateDataKey API and a symmetric encryption algorithm to encrypt the file.
C. Write the encrypted key from the GenerateDataKey API to disk for later use. Use the plaintext key from the GenerateDataKey API to encrypt the file by using the KMS Encrypt API.
D. Write the plaintext key from the GenerateDataKey API to disk for later use. Use the encrypted key from the GenerateDataKey API to encrypt the file by using the KMS Encrypt API.
Suggested answer: A

Explanation:

The GenerateDataKey API returns a data key that is encrypted under a symmetric encryption KMS key that you specify, and a plaintext copy of the same data key. The data key is a random byte string that can be used with any standard encryption algorithm, such as AES or SM4. The plaintext data key can be used to encrypt or decrypt data outside of AWS KMS, while the encrypted data key can be stored with the encrypted data and later decrypted by AWS KMS.

In this scenario, the developer needs to use the GenerateDataKey API to encrypt the PDF file so that it can be decrypted later. The developer also needs to use an AWS KMS symmetric customer managed key for encryption. To achieve this, the developer can follow these steps:

Call the GenerateDataKey API with the symmetric customer managed key ID and the desired length or specification of the data key. The API will return an encrypted data key and a plaintext data key.

Write the encrypted data key to disk for later use. This will allow the developer to decrypt the data key and the PDF file later by using AWS KMS.

Use the plaintext data key and a symmetric encryption algorithm to encrypt the PDF file. The developer can use any standard encryption library or tool to perform this operation, such as OpenSSL or AWS Encryption SDK.

Discard the plaintext data key from memory as soon as possible after using it. This will prevent unauthorized access or leakage of the data key.
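The envelope-encryption flow above can be sketched offline. In the sketch below, a locally generated key stands in for the KMS key, and a toy XOR keystream stands in for a real symmetric cipher such as AES; in a real application the two keys would come from a boto3 `kms.generate_data_key(...)` call, and encryption would use a vetted library such as the AWS Encryption SDK.

```python
import hashlib
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream (toy stand-in for AES-CTR; not for production)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    nonce = os.urandom(16)
    stream = _keystream(key, nonce, len(data))
    return nonce + bytes(a ^ b for a, b in zip(data, stream))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    stream = _keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, stream))

# Simulate GenerateDataKey: it returns a plaintext data key ("Plaintext")
# and the same key encrypted under the customer managed key ("CiphertextBlob").
kms_master_key = os.urandom(32)      # in real life this never leaves KMS
plaintext_data_key = os.urandom(32)
encrypted_data_key = encrypt(kms_master_key, plaintext_data_key)

# Step 1 (answer A): encrypt the PDF locally with the plaintext data key.
pdf_bytes = b"%PDF-1.7 ... large report ..."
encrypted_pdf = encrypt(plaintext_data_key, pdf_bytes)

# Step 2: persist the *encrypted* data key next to the ciphertext,
# then discard the plaintext key from memory.
stored = {"encrypted_key": encrypted_data_key, "ciphertext": encrypted_pdf}
del plaintext_data_key

# Later: ask KMS to decrypt the stored data key (kms.decrypt in real code),
# then decrypt the file locally.
recovered_key = decrypt(kms_master_key, stored["encrypted_key"])
assert decrypt(recovered_key, stored["ciphertext"]) == pdf_bytes
```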

A developer is optimizing an AWS Lambda function and wants to test the changes in production on a small percentage of all traffic. The Lambda function serves requests to a REST API in Amazon API Gateway. The developer needs to deploy their changes and perform a test in production without changing the API Gateway URL.

Which solution will meet these requirements?

A. Define a function version for the currently deployed production Lambda function. Update the API Gateway endpoint to reference the new Lambda function version. Upload and publish the optimized Lambda function code. On the production API Gateway stage, define a canary release and set the percentage of traffic to direct to the canary release. Update the API Gateway endpoint to use the $LATEST version of the Lambda function. Publish the API to the canary stage.
B. Define a function version for the currently deployed production Lambda function. Update the API Gateway endpoint to reference the new Lambda function version. Upload and publish the optimized Lambda function code. Update the API Gateway endpoint to use the $LATEST version of the Lambda function. Deploy a new API Gateway stage.
C. Define an alias on the $LATEST version of the Lambda function. Update the API Gateway endpoint to reference the new Lambda function alias. Upload and publish the optimized Lambda function code. On the production API Gateway stage, define a canary release and set the percentage of traffic to direct to the canary release. Update the API Gateway endpoint to use the $LATEST version of the Lambda function. Publish to the canary stage.
D. Define a function version for the currently deployed production Lambda function. Update the API Gateway endpoint to reference the new Lambda function version. Deploy the API to the production API Gateway stage.
Suggested answer: C

Explanation:

A Lambda alias is a pointer to a specific Lambda function version or another alias. A Lambda alias allows you to invoke different versions of a function using the same name. You can also split traffic between two aliases by assigning weights to them.

In this scenario, the developer needs to test their changes in production on a small percentage of all traffic without changing the API Gateway URL. To achieve this, the developer can follow these steps:

Define an alias on the $LATEST version of the Lambda function. This will create a new alias that points to the latest code of the function.

Update the API Gateway endpoint to reference the new Lambda function alias. This will make the API Gateway invoke the alias instead of a specific version of the function.

Upload and publish the optimized Lambda function code. This will update the $LATEST version of the function with the new code.

On the production API Gateway stage, define a canary release and set the percentage of traffic to direct to the canary release. This will enable API Gateway to perform a canary deployment on a new API. A canary deployment is a software development strategy in which a new version of an API is deployed for testing purposes, and the base version remains deployed as a production release for normal operations on the same stage. The canary release receives a small percentage of API traffic, and the production release takes up the rest.

Update the API Gateway endpoint to use the $LATEST version of the Lambda function. This will make the canary release invoke the latest code of the function, which contains the optimized changes.

Publish to the canary stage. This will deploy the changes to a subset of users for testing.

By using this solution, the developer can test their changes in production on a small percentage of all traffic without changing the API Gateway URL. The developer can also monitor and compare metrics between the canary and production releases, and promote or disable the canary as needed.
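The effect of the canary percentage can be illustrated with a small routing simulation (purely illustrative; API Gateway performs this weighted routing internally):

```python
import random

def route_request(canary_percent: float, rng: random.Random) -> str:
    """Pick which release serves a request, mimicking API Gateway's
    weighted canary routing."""
    return "canary" if rng.uniform(0, 100) < canary_percent else "production"

rng = random.Random(42)  # fixed seed so the sketch is reproducible
hits = {"canary": 0, "production": 0}
for _ in range(10_000):
    hits[route_request(10.0, rng)] += 1

# Roughly 10% of traffic reaches the canary release; the rest goes to production.
print(hits)
```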

A company has an application that stores data in Amazon RDS instances. The application periodically experiences surges of high traffic that cause performance problems.

During periods of peak traffic, a developer notices a reduction in query speed in all database queries.

The team's technical lead determines that a multi-threaded and scalable caching solution should be used to offload the heavy read traffic. The solution needs to improve performance.

Which solution will meet these requirements with the LEAST complexity?

A. Use Amazon ElastiCache for Memcached to offload read requests from the main database.
B. Replicate the data to Amazon DynamoDB. Set up a DynamoDB Accelerator (DAX) cluster.
C. Configure the Amazon RDS instances to use Multi-AZ deployment with one standby instance. Offload read requests from the main database to the standby instance.
D. Use Amazon ElastiCache for Redis to offload read requests from the main database.
Suggested answer: A

Explanation:

Amazon ElastiCache for Memcached is a fully managed, multithreaded, and scalable in-memory key-value store that can be used to cache frequently accessed data and improve application performance. By using Amazon ElastiCache for Memcached, the developer can reduce the load on the main database and handle high traffic surges more efficiently.

To use Amazon ElastiCache for Memcached, the developer needs to create a cache cluster with one or more nodes, and configure the application to store and retrieve data from the cache cluster. The developer can use any of the supported Memcached clients to interact with the cache cluster, and can use Auto Discovery to dynamically discover and connect to all cache nodes in a cluster.

Amazon ElastiCache for Memcached is compatible with the Memcached protocol, which means that the developer can use existing tools and libraries that work with Memcached. Amazon ElastiCache for Memcached also supports data partitioning, which allows the developer to distribute data among multiple nodes and scale out the cache cluster as needed.

Using Amazon ElastiCache for Memcached is a simple and effective solution that meets the requirements with the least complexity. The developer does not need to change the database schema, migrate data to a different service, or use a different caching model. The developer can leverage the existing Memcached ecosystem and easily integrate it with the application.

A developer at a company needs to create a small application that makes the same API call once each day at a designated time. The company does not have infrastructure in the AWS Cloud yet, but the company wants to implement this functionality on AWS.

Which solution meets these requirements in the MOST operationally efficient manner?

A. Use a Kubernetes cron job that runs on Amazon Elastic Kubernetes Service (Amazon EKS).
B. Use an Amazon Linux crontab scheduled job that runs on Amazon EC2.
C. Use an AWS Lambda function that is invoked by an Amazon EventBridge scheduled event.
D. Use an AWS Batch job that is submitted to an AWS Batch job queue.
Suggested answer: C

Explanation:

The correct answer is C. Use an AWS Lambda function that is invoked by an Amazon EventBridge scheduled event.

C) Use an AWS Lambda function that is invoked by an Amazon EventBridge scheduled event. This is correct. AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, and logging [1]. Amazon EventBridge is a serverless event bus service that enables you to connect your applications with data from a variety of sources [2]. EventBridge can create rules that run on a schedule, either at regular intervals or at specific times and dates, and invoke targets such as Lambda functions [3]. This solution meets the requirements of creating a small application that makes the same API call once each day at a designated time, without requiring any infrastructure in the AWS Cloud or any operational overhead.

A) Use a Kubernetes cron job that runs on Amazon Elastic Kubernetes Service (Amazon EKS). This is incorrect. Amazon EKS is a fully managed Kubernetes service that allows you to run containerized applications on AWS [4]. Kubernetes cron jobs are tasks that run periodically on a given schedule [5]. This solution could meet the functional requirements of creating a small application that makes the same API call once each day at a designated time, but it would not be the most operationally efficient manner. The company would need to provision and manage an EKS cluster, which would incur additional costs and complexity.

B) Use an Amazon Linux crontab scheduled job that runs on Amazon EC2. This is incorrect. Amazon EC2 is a web service that provides secure, resizable compute capacity in the cloud [6]. Crontab is a Linux utility that allows you to schedule commands or scripts to run automatically at a specified time or date [7]. This solution could meet the functional requirements of creating a small application that makes the same API call once each day at a designated time, but it would not be the most operationally efficient manner. The company would need to provision and manage an EC2 instance, which would incur additional costs and complexity.

D) Use an AWS Batch job that is submitted to an AWS Batch job queue. This is incorrect. AWS Batch enables you to run batch computing workloads on the AWS Cloud [8]. Batch jobs are units of work that can be submitted to job queues, where they are executed in parallel or sequentially on compute environments [9]. This solution could meet the functional requirements of creating a small application that makes the same API call once each day at a designated time, but it would not be the most operationally efficient manner. The company would need to configure and manage an AWS Batch environment, which would incur additional costs and complexity.

[1] What is AWS Lambda? - AWS Lambda
[2] What is Amazon EventBridge? - Amazon EventBridge
[3] Creating an Amazon EventBridge rule that runs on a schedule - Amazon EventBridge
[4] What is Amazon EKS? - Amazon EKS
[5] CronJob - Kubernetes
[6] What is Amazon EC2? - Amazon EC2
[7] Crontab in Linux with 20 Useful Examples to Schedule Jobs - Tecmint
[8] What is AWS Batch? - AWS Batch
[9] Jobs - AWS Batch

A developer is building a serverless application by using the AWS Serverless Application Model (AWS SAM). The developer is currently testing the application in a development environment. When the application is nearly finished, the developer will need to set up additional testing and staging environments for a quality assurance team.

The developer wants to use a feature of the AWS SAM to set up deployments to multiple environments.

Which solution will meet these requirements with the LEAST development effort?

A. Add a configuration file in TOML format to group configuration entries for every environment. Add a table for each testing and staging environment. Deploy updates to the environments by using the sam deploy command and the --config-env flag that corresponds to each environment.
B. Create additional AWS SAM templates for each testing and staging environment. Write a custom shell script that uses the sam deploy command and the --template-file flag to deploy updates to the environments.
C. Create one AWS SAM configuration file that has default parameters. Perform updates to the testing and staging environments by using the --parameter-overrides flag in the AWS SAM CLI and the parameters that the updates will override.
D. Use the existing AWS SAM template. Add additional parameters to configure specific attributes for the serverless function and database table resources that are in each environment. Deploy updates to the testing and staging environments by using the sam deploy command.
Suggested answer: A

Explanation:

The correct answer is A.

A) Add a configuration file in TOML format to group configuration entries for every environment. Add a table for each testing and staging environment. Deploy updates to the environments by using the sam deploy command and the --config-env flag that corresponds to each environment. This is correct. This solution will meet the requirements with the least development effort, because it uses a feature of the AWS SAM CLI that supports a project-level configuration file for configuring AWS SAM CLI command parameter values [1]. The configuration file can have multiple environments, each with its own set of parameter values, such as stack name, region, capabilities, and more [2]. The developer can use the --config-env option to specify which environment to use when deploying the application [3]. This way, the developer can avoid creating multiple templates or scripts, or manually overriding parameters for each environment.

B) Create additional AWS SAM templates for each testing and staging environment. Write a custom shell script that uses the sam deploy command and the --template-file flag to deploy updates to the environments. This is incorrect. This solution will not meet the requirements with the least development effort, because it requires creating and maintaining multiple templates and scripts for each environment. This can introduce duplication, inconsistency, and complexity in the deployment process.

C) Create one AWS SAM configuration file that has default parameters. Perform updates to the testing and staging environments by using the --parameter-overrides flag in the AWS SAM CLI and the parameters that the updates will override. This is incorrect. This solution will not meet the requirements with the least development effort, because it requires manually specifying and overriding parameters for each environment every time the developer deploys the application. This can be error-prone, tedious, and inefficient.

D) Use the existing AWS SAM template. Add additional parameters to configure specific attributes for the serverless function and database table resources that are in each environment. Deploy updates to the testing and staging environments by using the sam deploy command. This is incorrect. This solution will not meet the requirements with the least development effort, because it requires modifying the existing template and adding complexity to the resource definitions for each environment. This can also make it difficult to manage and track changes across different environments.

[1] AWS SAM CLI configuration file - AWS Serverless Application Model
[2] Configuration file basics - AWS Serverless Application Model
[3] Specify a configuration file - AWS Serverless Application Model
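A samconfig.toml along these lines gives each environment its own table (stack names and region below are illustrative); `sam deploy --config-env staging` then picks up the `staging` entries:

```toml
version = 0.1

# Default environment (used when --config-env is not supplied)
[default.deploy.parameters]
stack_name = "my-app-dev"
region = "us-east-1"
capabilities = "CAPABILITY_IAM"

# Additional environments for the QA team
[testing.deploy.parameters]
stack_name = "my-app-testing"
region = "us-east-1"
capabilities = "CAPABILITY_IAM"

[staging.deploy.parameters]
stack_name = "my-app-staging"
region = "us-east-1"
capabilities = "CAPABILITY_IAM"
```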

A company notices that credentials that the company uses to connect to an external software as a service (SaaS) vendor are stored in a configuration file as plaintext.

The developer needs to secure the API credentials and enforce automatic credentials rotation on a quarterly basis.

Which solution will meet these requirements MOST securely?

A. Use AWS Key Management Service (AWS KMS) to encrypt the configuration file. Decrypt the configuration file when users make API calls to the SaaS vendor. Enable rotation.
B. Retrieve temporary credentials from AWS Security Token Service (AWS STS) every 15 minutes. Use the temporary credentials when users make API calls to the SaaS vendor.
C. Store the credentials in AWS Secrets Manager and enable rotation. Configure the API to have Secrets Manager access.
D. Store the credentials in AWS Systems Manager Parameter Store and enable rotation. Retrieve the credentials when users make API calls to the SaaS vendor.
Suggested answer: C

Explanation:

Store the credentials in AWS Secrets Manager and enable rotation. Configure the API to have Secrets Manager access. This is correct. This solution will meet the requirements most securely, because it uses a service that is designed to store and manage secrets such as API credentials. AWS Secrets Manager helps you protect access to your applications, services, and IT resources by enabling you to rotate, manage, and retrieve secrets throughout their lifecycle. You can store secrets such as passwords, database strings, API keys, and license codes as encrypted values. You can also configure automatic rotation of your secrets on a schedule that you specify. You can use the AWS SDK or CLI to retrieve secrets from Secrets Manager when you need them. This way, you can avoid storing credentials in plaintext files or hardcoding them in your code.


An application that runs on AWS receives messages from an Amazon Simple Queue Service (Amazon SQS) queue and processes the messages in batches. The application sends the data to another SQS queue to be consumed by another legacy application. The legacy system can take up to 5 minutes to process some transaction data.

A developer wants to ensure that there are no out-of-order updates in the legacy system. The developer cannot alter the behavior of the legacy system.

Which solution will meet these requirements?

A. Use an SQS FIFO queue. Configure the visibility timeout value.
B. Use an SQS standard queue with a SendMessageBatchRequestEntry data type. Configure the DelaySeconds values.
C. Use an SQS standard queue with a SendMessageBatchRequestEntry data type. Configure the visibility timeout value.
D. Use an SQS FIFO queue. Configure the DelaySeconds value.
Suggested answer: A

Explanation:

An SQS FIFO queue is a type of queue that preserves the order of messages and ensures that each message is delivered and processed only once. This is suitable for the scenario where the developer wants to ensure that there are no out-of-order updates in the legacy system.

The visibility timeout value is the amount of time that a message is invisible in the queue after a consumer receives it. This prevents other consumers from processing the same message simultaneously. If the consumer does not delete the message before the visibility timeout expires, the message becomes visible again and another consumer can receive it.

In this scenario, the developer needs to configure the visibility timeout value to be longer than the maximum processing time of the legacy system, which is 5 minutes. This will ensure that the message remains invisible in the queue until the legacy system finishes processing it and deletes it. This will prevent duplicate or out-of-order processing of messages by the legacy system.
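Why FIFO ordering plus a sufficient visibility timeout solves this can be sketched with a tiny in-memory model of a FIFO queue: messages that share a message group ID are delivered strictly in order, and while one message is in flight (invisible), the rest of its group is withheld. The model is illustrative, not the SQS API:

```python
from collections import defaultdict, deque

class FifoGroupQueue:
    """Toy model of an SQS FIFO queue: per-group strict ordering, and a
    group is blocked while one of its messages is in flight (invisible)."""
    def __init__(self, visibility_timeout_seconds: int = 360):
        self.visibility_timeout = visibility_timeout_seconds  # > 5 min job
        self._groups = defaultdict(deque)
        self._in_flight = set()

    def send(self, group_id: str, body: str):
        self._groups[group_id].append(body)

    def receive(self, group_id: str):
        """Return the next message for the group, or None if one is
        already in flight (SQS withholds the rest of the group)."""
        if group_id in self._in_flight or not self._groups[group_id]:
            return None
        self._in_flight.add(group_id)
        return self._groups[group_id][0]

    def delete(self, group_id: str):
        """Consumer finished within the visibility timeout."""
        self._groups[group_id].popleft()
        self._in_flight.discard(group_id)

q = FifoGroupQueue(visibility_timeout_seconds=360)  # 6 min > 5 min processing
q.send("txn-1001", "update-1")
q.send("txn-1001", "update-2")

print(q.receive("txn-1001"))   # update-1 is delivered first
print(q.receive("txn-1001"))   # None: update-2 is held back until delete
q.delete("txn-1001")
print(q.receive("txn-1001"))   # now update-2 can be processed
```

Setting the visibility timeout above the legacy system's 5-minute worst case means `delete` happens before the timeout lapses, so no second consumer ever sees the same update out of order.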

A developer is troubleshooting an application in an integration environment. In the application, an Amazon Simple Queue Service (Amazon SQS) queue consumes messages and then an AWS Lambda function processes the messages. The Lambda function transforms the messages and makes an API call to a third-party service.

There has been an increase in application usage. The third-party API frequently returns an HTTP 429 Too Many Requests error message. The error message prevents a significant number of messages from being processed successfully.

How can the developer resolve this issue?

A. Increase the SQS event source's batch size setting.
B. Configure provisioned concurrency for the Lambda function based on the third-party API's documented rate limits.
C. Increase the retry attempts and maximum event age in the Lambda function's asynchronous configuration.
D. Configure maximum concurrency on the SQS event source based on the third-party service's documented rate limits.
Suggested answer: D

Explanation:

Maximum concurrency for SQS as an event source allows customers to control the maximum number of concurrent invocations driven by the SQS event source. When multiple SQS event sources are configured for a function, customers can control the maximum concurrent invocations of each individual SQS event source.

In this scenario, the developer needs to resolve the issue of the third-party API frequently returning an HTTP 429 Too Many Requests error message, which prevents a significant number of messages from being processed successfully. To achieve this, the developer can follow these steps:

Find out the documented rate limits of the third-party API, which specify how many requests can be made in a given time period.

Configure maximum concurrency on the SQS event source based on the rate limits of the third-party API. This will limit the number of concurrent invokes by the SQS event source and prevent exceeding the rate limits of the third-party API.

Test and monitor the application performance and adjust the maximum concurrency value as needed.

By using this solution, the developer can reduce the frequency of HTTP 429 errors and improve the message processing success rate. The developer can also avoid throttling or blocking by the third-party API.
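Assuming the event source mapping already exists, capping its concurrency might look like this with the AWS CLI (the UUID and limit are illustrative; MaximumConcurrency must be at least 2):

```shell
# Cap concurrent Lambda invocations driven by this SQS event source so the
# downstream third-party API's rate limit is not exceeded.
aws lambda update-event-source-mapping \
    --uuid a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 \
    --scaling-config '{"MaximumConcurrency": 5}'
```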

An online sales company is developing a serverless application that runs on AWS. The application uses an AWS Lambda function that calculates order success rates and stores the data in an Amazon DynamoDB table. A developer wants an efficient way to invoke the Lambda function every 15 minutes.

Which solution will meet this requirement with the LEAST development effort?

A. Create an Amazon EventBridge rule that has a rate expression that will run the rule every 15 minutes. Add the Lambda function as the target of the EventBridge rule.
B. Create an AWS Systems Manager document that has a script that will invoke the Lambda function on Amazon EC2. Use a Systems Manager Run Command task to run the shell script every 15 minutes.
C. Create an AWS Step Functions state machine. Configure the state machine to invoke the Lambda function at a specified interval by using a Wait state. Set the interval to 15 minutes.
D. Provision a small Amazon EC2 instance. Set up a cron job that invokes the Lambda function every 15 minutes.
Suggested answer: A

Explanation:

The best solution for this requirement is option A. Creating an Amazon EventBridge rule that has a rate expression that will run the rule every 15 minutes, and adding the Lambda function as the target of the EventBridge rule, is the most efficient way to invoke the Lambda function periodically. This solution does not require any additional resources or development effort, and it leverages the built-in scheduling capabilities of EventBridge.
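EventBridge schedule rules take a rate expression such as rate(15 minutes); a documented quirk is that the unit is singular when the value is 1 and plural otherwise. A small helper makes that rule explicit, and the comments sketch where the expression would be used with boto3 (the rule name and ARN variable are illustrative):

```python
def rate_expression(value: int, unit: str) -> str:
    """Build an EventBridge rate expression, e.g. rate(15 minutes).
    EventBridge requires the singular unit form when value == 1."""
    if value < 1:
        raise ValueError("rate value must be a positive integer")
    if unit not in ("minute", "hour", "day"):
        raise ValueError("unit must be 'minute', 'hour', or 'day'")
    suffix = "" if value == 1 else "s"
    return f"rate({value} {unit}{suffix})"

# In real code (names illustrative):
#   events = boto3.client("events")
#   events.put_rule(Name="every-15-min",
#                   ScheduleExpression=rate_expression(15, "minute"))
#   events.put_targets(Rule="every-15-min",
#                      Targets=[{"Id": "1", "Arn": lambda_function_arn}])
print(rate_expression(15, "minute"))  # rate(15 minutes)
print(rate_expression(1, "hour"))     # rate(1 hour)
```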
