This repository was archived by the owner on Jan 19, 2022. It is now read-only.
Decouple concurrency and message batch size in SQS listener #379
Description
Enhancement
The `SimpleMessageListenerContainer` has a basic concurrency model that works as follows (see the sketch after this list). While the queue is running:
- Request `maxNumberOfMessages` messages from SQS.
- Submit all messages in the batch to a thread pool to be handled. With the default thread pool, all messages in the batch are handled in parallel with no queuing.
- Block until all messages in the batch have completed handling.
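A minimal sketch of that loop, assuming the behaviour described above; the class, `SqsStub`, and `handle()` are illustrative stand-ins, not the actual container internals:

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executor;

public class PollingLoopSketch {

    interface SqsStub {
        List<String> receiveMessages(int maxNumberOfMessages);
    }

    private volatile boolean running = true;

    void run(SqsStub sqs, Executor taskExecutor, int maxNumberOfMessages) throws InterruptedException {
        while (running) {
            // 1. Request up to maxNumberOfMessages from SQS.
            List<String> batch = sqs.receiveMessages(maxNumberOfMessages);

            // 2. Hand every message in the batch to the executor; with the default
            //    executor the whole batch is processed in parallel.
            CountDownLatch batchDone = new CountDownLatch(batch.size());
            for (String message : batch) {
                taskExecutor.execute(() -> {
                    try {
                        handle(message);
                    } finally {
                        batchDone.countDown();
                    }
                });
            }

            // 3. Block until the whole batch is finished before polling again --
            //    this is what couples batch size to processing concurrency.
            batchDone.await();
        }
    }

    void handle(String message) {
        // Placeholder for the @SqsListener method invocation.
    }
}
```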
This approach is simple but has the disadvantage of coupling message processing concurrency and message batch size. There is no easy way to request multiple messages at once but only process one at a time. (This can potentially be achieved by configuring a custom task executor, but that doesn't work well with more than one queue because all queues share the same executor.) Likewise, there is no way to handle more than 10 messages at a time (the maximum SQS batch size).
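For reference, the partial workaround looks roughly like the following, assuming Spring Cloud AWS 2.x's `SimpleMessageListenerContainerFactory` and a Spring `ThreadPoolTaskExecutor`. Bounding the pool caps concurrency, but the cap applies to every queue the container serves:

```java
import org.springframework.cloud.aws.messaging.config.SimpleMessageListenerContainerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

import com.amazonaws.services.sqs.AmazonSQSAsync;

@Configuration
public class SqsListenerConfig {

    @Bean
    public SimpleMessageListenerContainerFactory simpleMessageListenerContainerFactory(AmazonSQSAsync amazonSqs) {
        // Bounding the pool size limits how many messages are processed at once...
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(4);
        executor.setQueueCapacity(20);
        executor.initialize();

        SimpleMessageListenerContainerFactory factory = new SimpleMessageListenerContainerFactory();
        factory.setAmazonSqs(amazonSqs);
        factory.setMaxNumberOfMessages(10);
        // ...but this one executor is shared by all queues handled by the container,
        // so the limit cannot be tuned per queue.
        factory.setTaskExecutor(executor);
        return factory;
    }
}
```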
As an enhancement, I would like to request support for the following use cases on a per-queue basis:
- `maxConcurrency < maxNumberOfMessages`: Request n messages at a time but limit concurrent processing to m < n.
- `maxConcurrency >= maxNumberOfMessages`: Request n messages at a time and allow concurrent processing of up to m >= n total messages.
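To make the request concrete, here is one possible shape of the desired behaviour. This is illustrative only: `maxConcurrency` is not an existing option, and the names are hypothetical. A per-queue `Semaphore` bounds in-flight handlers independently of the batch size:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class DecoupledQueueWorkerSketch {

    interface SqsStub {
        List<String> receiveMessages(int maxNumberOfMessages);
    }

    private final ExecutorService workers = Executors.newCachedThreadPool();
    private volatile boolean running = true;

    void run(SqsStub sqs, int maxNumberOfMessages, int maxConcurrency) throws InterruptedException {
        // One semaphore per queue, so the limit can differ between queues.
        Semaphore permits = new Semaphore(maxConcurrency);
        while (running) {
            // Keep requesting full batches; dispatching does not wait for the
            // previous batch to finish, only for a free permit per message.
            List<String> batch = sqs.receiveMessages(maxNumberOfMessages);
            for (String message : batch) {
                // At most maxConcurrency messages are processed at once, so m can
                // be smaller or larger than the batch size n.
                permits.acquire();
                workers.submit(() -> {
                    try {
                        handle(message);
                    } finally {
                        permits.release();
                    }
                });
            }
        }
    }

    void handle(String message) {
        // Placeholder for the listener method.
    }
}
```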