
Amazon Exam DVA-C02 Topic 1 Question 47 Discussion

Actual exam question for Amazon's DVA-C02 exam
Question #: 47
Topic #: 1
[All DVA-C02 Questions]

A company created an application to consume and process data. The application uses Amazon SQS and AWS Lambda functions. The application is currently working as expected, but it occasionally receives several messages that it cannot process properly. The company needs to clear these messages to prevent the queue from becoming blocked. A developer must implement a solution that makes queue processing always operational. The solution must give the company the ability to defer the messages with errors and save these messages for further analysis.

What is the MOST operationally efficient solution that meets these requirements?

Suggested Answer: B

Using a dead-letter queue (DLQ) with Amazon SQS is the most operationally efficient solution for handling unprocessable messages.

Amazon SQS Dead-Letter Queue:

A DLQ is used to capture messages that fail processing after a specified number of attempts.

Allows the application to continue processing other messages without being blocked.

Messages in the DLQ can be analyzed later for debugging and resolution.
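As a sketch of how this works on the consumer side (the handler and payload names here are hypothetical, not taken from the question): when a Lambda handler raises on a message, SQS returns the message to the queue and retries it; once its receive count exceeds the source queue's maxReceiveCount, SQS moves it to the DLQ automatically, so the handler itself needs no special error-routing logic.

```python
import json

def handler(event, context):
    """Minimal sketch of a Lambda function triggered by SQS.

    If this handler raises, the message returns to the queue and is
    retried; a message that keeps failing eventually exceeds the
    source queue's maxReceiveCount, and SQS moves it to the DLQ.
    """
    for record in event["Records"]:
        payload = json.loads(record["body"])  # raises on malformed JSON
        process(payload)

def process(payload):
    # Placeholder for the application's business logic.
    pass
```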

Why DLQ is the Best Option:

Operational Efficiency: Automatically defers messages with errors, ensuring the queue is not blocked.

Analysis Ready: Messages in the DLQ can be inspected to identify recurring issues.

Scalable: Works seamlessly with Lambda and SQS at scale.

Why Not Other Options:

Option A: Logs the messages but does not resolve the queue blockage issue.

Option C: FIFO queues with a 0-second retention period discard messages immediately, providing neither error handling nor the ability to save messages for analysis.

Option D: Alerts administrators but does not handle or store the unprocessable messages.

Steps to Implement:

Create a new SQS queue to serve as the DLQ.

Attach the DLQ to the primary queue and configure the Maximum Receives setting.
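The two steps above can be sketched with boto3; the queue names, account ID, and maxReceiveCount value below are illustrative assumptions, not part of the question:

```python
import json

# Illustrative values; the real queue names and threshold are up to you.
DLQ_ARN = "arn:aws:sqs:us-east-1:123456789012:orders-dlq"
MAX_RECEIVES = 5  # SQS's "Maximum Receives" before a message is moved

def build_redrive_policy(dlq_arn: str, max_receives: int) -> str:
    """Build the RedrivePolicy attribute value that attaches a DLQ to a
    source queue; the attribute is a JSON string."""
    return json.dumps({
        "deadLetterTargetArn": dlq_arn,
        "maxReceiveCount": str(max_receives),
    })

policy = build_redrive_policy(DLQ_ARN, MAX_RECEIVES)

# Applying it requires boto3 and AWS credentials (not run here):
# import boto3
# sqs = boto3.client("sqs")
# sqs.set_queue_attributes(
#     QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders",
#     Attributes={"RedrivePolicy": policy},
# )
```

With this in place, any message received more than MAX_RECEIVES times moves to the DLQ, where it can be inspected or redriven back to the source queue after the underlying issue is fixed.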


References:

Using Amazon SQS Dead-Letter Queues

Best Practices for Using Amazon SQS with AWS Lambda

Contribute your Thoughts:

Cassie
1 month ago
Hold up, did someone say FIFO queue? I'm not touching that with a 10-foot pole! That's just asking for trouble.
upvoted 0 times
Karrie
7 days ago
Yeah, using CloudWatch Logs to save error messages separately sounds like a good solution too.
upvoted 0 times
Lorriane
13 days ago
I think setting up a dead-letter queue with Maximum Receives configured might be a better option.
upvoted 0 times
Peggie
14 days ago
I agree, FIFO queues can be tricky to work with.
upvoted 0 times
Kenny
1 month ago
I see your point, Melinda. But I think option A is also a good choice as it saves error messages to a separate log stream for easy access.
upvoted 0 times
Shaunna
1 month ago
Option D could work, but setting up the CloudWatch alarm and SNS notifications might be overkill for this particular scenario. B is the simplest and most effective solution.
upvoted 0 times
Helga
22 days ago
Let's go with option B then, it's the simplest and most effective solution.
upvoted 0 times
Abraham
23 days ago
I agree, setting up a dead-letter queue seems like the most efficient solution.
upvoted 0 times
Terrilyn
25 days ago
I think option B is the best choice here.
upvoted 0 times
Ty
2 months ago
Hmm, I'm not sure about option C. Reducing the message retention period to 0 seconds doesn't sound like a good idea. We need to hold onto those failed messages for further analysis.
upvoted 0 times
Ettie
2 months ago
I agree with Annelle. Sending the failed messages to a dead-letter queue gives us the ability to review and retry them later. Seems like the most efficient solution.
upvoted 0 times
Ashley
23 days ago
It's definitely the most operationally efficient solution for our application.
upvoted 0 times
Thora
24 days ago
We can then review and retry the messages later without affecting the main processing flow.
upvoted 0 times
Barney
26 days ago
I agree, sending the failed messages to a dead-letter queue is a good way to prevent the main queue from getting blocked.
upvoted 0 times
Cassandra
29 days ago
I think option B is the best solution. It allows us to save the error messages for further analysis.
upvoted 0 times
Melinda
2 months ago
I disagree, I believe option D is more efficient as it notifies administrator users immediately.
upvoted 0 times
Marvel
2 months ago
I think option B is the best solution because it allows us to save the error messages for further analysis.
upvoted 0 times
Annelle
2 months ago
Option B seems like the way to go. Creating a dead-letter queue is a solid approach to handling those problematic messages.
upvoted 0 times
Tamra
1 month ago
Definitely, it will help prevent the main queue from getting blocked.
upvoted 0 times
Andra
1 month ago
I agree, setting up a dead-letter queue is a good way to handle those errors.
upvoted 0 times
