Question 243 - DVA-C02 discussion
A developer is building a microservice that uses AWS Lambda to process messages from an Amazon Simple Queue Service (Amazon SQS) standard queue. The Lambda function calls external APIs to enrich the SQS message data before loading the data into an Amazon Redshift data warehouse. The SQS queue must handle a maximum of 1,000 messages per second.
During initial testing, the Lambda function repeatedly inserted duplicate data into the Amazon Redshift table. The duplicates caused problems in downstream data analysis. All duplicate messages were submitted to the queue within 1 minute of each other.
How should the developer resolve this issue?
Create an SQS FIFO queue. Enable message deduplication on the SQS FIFO queue.
Reduce the maximum Lambda concurrency that the SQS queue can invoke.
Use Lambda's temporary storage to keep track of processed message identifiers.
Configure a message group ID for every sent message. Enable message deduplication on the SQS standard queue.
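SQS FIFO queues drop any message whose deduplication ID matches one already accepted within the 5-minute deduplication interval, which covers the duplicates here that arrived within 1 minute of each other. The following Python sketch simulates that behavior to illustrate the mechanism; the `FifoDedupSimulator` class is illustrative only and is not part of the boto3 API.

```python
import hashlib

# SQS FIFO queues deduplicate over a fixed 5-minute interval.
DEDUP_INTERVAL_SECONDS = 5 * 60


class FifoDedupSimulator:
    """Illustrative model of SQS FIFO content-based deduplication."""

    def __init__(self):
        # Maps deduplication ID -> timestamp of the first accepted copy.
        self._seen = {}

    def accept(self, body: str, sent_at: float) -> bool:
        # Content-based deduplication derives the ID from a SHA-256
        # hash of the message body.
        dedup_id = hashlib.sha256(body.encode()).hexdigest()
        first = self._seen.get(dedup_id)
        if first is not None and sent_at - first < DEDUP_INTERVAL_SECONDS:
            return False  # duplicate inside the interval: silently dropped
        self._seen[dedup_id] = sent_at
        return True


queue = FifoDedupSimulator()
print(queue.accept('{"order": 1}', sent_at=0.0))    # first copy accepted
print(queue.accept('{"order": 1}', sent_at=60.0))   # duplicate 1 min later, dropped
print(queue.accept('{"order": 1}', sent_at=400.0))  # past the interval, accepted again
```

In practice this behavior is enabled by creating a queue with the `FifoQueue` and `ContentBasedDeduplication` attributes set to `true`, or by supplying an explicit `MessageDeduplicationId` on each send.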