Commit 27f2937

docs: update accessing processed messages section
1 parent be8ab03 commit 27f2937

3 files changed (+61 -23 lines changed)

aws_lambda_powertools/utilities/batch/__init__.py

Lines changed: 10 additions & 1 deletion
@@ -4,14 +4,23 @@
 Batch processing utility
 """
 
-from aws_lambda_powertools.utilities.batch.base import BasePartialProcessor, BatchProcessor, EventType, batch_processor
+from aws_lambda_powertools.utilities.batch.base import (
+    BasePartialProcessor,
+    BatchProcessor,
+    EventType,
+    FailureResponse,
+    SuccessResponse,
+    batch_processor,
+)
 from aws_lambda_powertools.utilities.batch.sqs import PartialSQSProcessor, sqs_batch_processor
 
 __all__ = (
     "BatchProcessor",
     "BasePartialProcessor",
     "EventType",
+    "FailureResponse",
     "PartialSQSProcessor",
+    "SuccessResponse",
     "batch_processor",
     "sqs_batch_processor",
 )

aws_lambda_powertools/utilities/batch/base.py

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ class EventType(Enum):
 
 # When using processor with default arguments, records will carry EventSourceDataClassTypes
 # and depending on what EventType it's passed it'll correctly map to the right record
-# When using Pydantic Models, it'll accept any
+# When using Pydantic Models, it'll accept any subclass from SQS, DynamoDB and Kinesis
 EventSourceDataClassTypes = Union[SQSRecord, KinesisStreamRecord, DynamoDBRecord]
 BatchEventTypes = Union[EventSourceDataClassTypes, "BatchTypeModels"]
 SuccessResponse = Tuple[str, Any, BatchEventTypes]
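
For illustration only (not part of this commit): a minimal sketch that imports the two newly exported names from the package root, as the `__init__.py` change above now allows, and unpacks the three-item tuples described in the documentation change below: a status string, the handler result (or the exception as a string on failure), and the original batch record.

```python
# Sketch only; assumes `processed_messages` comes from BatchProcessor.process(),
# as shown in the docs/utilities/batch.md change below.
from typing import List, Union

from aws_lambda_powertools.utilities.batch import FailureResponse, SuccessResponse


def summarize(processed_messages: List[Union[SuccessResponse, FailureResponse]]) -> None:
    for status, result, _record in processed_messages:
        if status == "success":
            # `result` is whatever record_handler returned for this record
            print(f"success: {result}")
        else:
            # on failure, `result` carries the exception represented as a string
            print(f"fail: {result}")
```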

docs/utilities/batch.md

Lines changed: 50 additions & 21 deletions
@@ -707,39 +707,68 @@ You need to create a function to handle each record from the batch - We call it
 
 ## Advanced
 
-<!-- ### Choosing between decorator and context manager
+### Accessing processed messages
 
-They have nearly the same behaviour when it comes to processing messages from the batch:
+Use the context manager to access a list of all returned values from your `record_handler` function.
 
-* **Entire batch has been successfully processed**, where your Lambda handler returned successfully, we will let SQS delete the batch to optimize your cost
-* **Entire Batch has been partially processed successfully**, where exceptions were raised within your `record handler`, we will:
-    * **1)** Delete successfully processed messages from the queue by directly calling `sqs:DeleteMessageBatch`
-    * **2)** Raise `SQSBatchProcessingError` to ensure failed messages return to your SQS queue
+> Signature: List[Tuple[Union[SuccessResponse, FailureResponse]]]
 
-The only difference is that **PartialSQSProcessor** will give you access to processed messages if you need. -->
+* **When successful**. We will include a tuple with `success`, the result of `record_handler`, and the batch record
+* **When failed**. We will include a tuple with `fail`, exception as a string, and the batch record
 
-<!-- ### Accessing processed messages
-
-Use `PartialSQSProcessor` context manager to access a list of all return values from your `record_handler` function.
 
 === "app.py"
 
-    ```python
-    from aws_lambda_powertools.utilities.batch import PartialSQSProcessor
+    ```python hl_lines="31-38"
+    import json
 
-    def record_handler(record):
-        return do_something_with(record["body"])
+    from typing import Any, List, Literal, Union
 
-    def lambda_handler(event, context):
-        records = event["Records"]
+    from aws_lambda_powertools import Logger, Tracer
+    from aws_lambda_powertools.utilities.batch import (BatchProcessor,
+                                                       EventType,
+                                                       FailureResponse,
+                                                       SuccessResponse,
+                                                       batch_processor)
+    from aws_lambda_powertools.utilities.data_classes.sqs_event import SQSRecord
+    from aws_lambda_powertools.utilities.typing import LambdaContext
 
-        processor = PartialSQSProcessor()
 
-        with processor(records, record_handler) as proc:
-            result = proc.process() # Returns a list of all results from record_handler
+    processor = BatchProcessor(event_type=EventType.SQS)
+    tracer = Tracer()
+    logger = Logger()
+
+
+    @tracer.capture_method
+    def record_handler(record: SQSRecord):
+        payload: str = record.body
+        if payload:
+            item: dict = json.loads(payload)
+        ...
+
+    @logger.inject_lambda_context
+    @tracer.capture_lambda_handler
+    def lambda_handler(event, context: LambdaContext):
+        batch = event["Records"]
+        with processor(records=batch, processor=processor):
+            processed_messages: List[Union[SuccessResponse, FailureResponse]] = processor.process()
+
+        for messages in processed_messages:
+            for message in messages:
+                status: Union[Literal["success"], Literal["fail"]] = message[0]
+                result: Any = message[1]
+                record: SQSRecord = message[2]
+
+
+        return processor.response()
+    ```
+
+## FAQ
+
+### Choosing between decorator and context manager
+
+Use context manager when you want access to the processed messages or handle `BatchProcessingError` exception when all records within the batch fail to be processed.
 
-        return result
-    ``` -->
 
 <!-- ### Customizing boto configuration
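
For illustration only (not part of this commit): a minimal sketch of the new FAQ guidance, catching `BatchProcessingError` when every record in the batch fails. The exception's import path and the `handler=` keyword in the context manager call are assumptions about the library's API rather than something this diff shows.

```python
# Sketch only; import path and call signature are assumptions, not part of this commit.
import json

from aws_lambda_powertools.utilities.batch import BatchProcessor, EventType
from aws_lambda_powertools.utilities.batch.exceptions import BatchProcessingError

processor = BatchProcessor(event_type=EventType.SQS)


def record_handler(record):
    # raising here (e.g. on malformed JSON) marks the record as failed
    return json.loads(record.body)


def lambda_handler(event, context):
    try:
        with processor(records=event["Records"], handler=record_handler):
            processor.process()
    except BatchProcessingError:
        # raised when every record in the batch failed; re-raise (or alert first)
        # so the whole batch is returned to the queue and retried
        raise
    return processor.response()
```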

0 commit comments
