
Commit 3b0216c

dpkp and HalfSweet authored
Update some RST documentation syntax (#2463)
Co-authored-by: HalfSweet <60973476+HalfSweet@users.noreply.github.com>
1 parent 5daed04 commit 3b0216c

File tree: 2 files changed (+160, -104 lines)


README.rst

Lines changed: 92 additions & 60 deletions
@@ -32,13 +32,15 @@ check code (perhaps using zookeeper or consul). For older brokers, you can
 achieve something similar by manually assigning different partitions to each
 consumer instance with config management tools like chef, ansible, etc. This
 approach will work fine, though it does not support rebalancing on failures.
-See <https://kafka-python.readthedocs.io/en/master/compatibility.html>
+See https://kafka-python.readthedocs.io/en/master/compatibility.html
 for more details.
 
 Please note that the master branch may contain unreleased features. For release
 documentation, please see readthedocs and/or python's inline help.
 
->>> pip install kafka-python
+.. code-block:: bash
+
+    $ pip install kafka-python
 
 
 KafkaConsumer
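
For older brokers without the Group APIs, the static assignment approach described in the hunk above can be as simple as giving each deployed instance its own partition number. A minimal sketch, assuming a hypothetical ``MY_PARTITION`` value templated per host by chef/ansible, and using the ``assign()``/``TopicPartition`` API that appears later in this README:

.. code-block:: python

    # Hypothetical static-assignment sketch: MY_PARTITION is rendered into
    # each instance's config by chef/ansible rather than negotiated at runtime.
    from kafka import KafkaConsumer, TopicPartition

    MY_PARTITION = 0  # unique per consumer instance
    consumer = KafkaConsumer(bootstrap_servers='localhost:9092')
    consumer.assign([TopicPartition('my_favorite_topic', MY_PARTITION)])
    for msg in consumer:
        print(msg)

As the README notes, nothing reassigns a dead instance's partition, so this trades rebalancing for simplicity.
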
@@ -48,89 +50,119 @@ KafkaConsumer is a high-level message consumer, intended to operate as similarly
 as possible to the official java client. Full support for coordinated
 consumer groups requires use of kafka brokers that support the Group APIs: kafka v0.9+.
 
-See <https://kafka-python.readthedocs.io/en/master/apidoc/KafkaConsumer.html>
+See https://kafka-python.readthedocs.io/en/master/apidoc/KafkaConsumer.html
 for API and configuration details.
 
 The consumer iterator returns ConsumerRecords, which are simple namedtuples
 that expose basic message attributes: topic, partition, offset, key, and value:
 
->>> from kafka import KafkaConsumer
->>> consumer = KafkaConsumer('my_favorite_topic')
->>> for msg in consumer:
-...     print (msg)
+.. code-block:: python
+
+    from kafka import KafkaConsumer
+    consumer = KafkaConsumer('my_favorite_topic')
+    for msg in consumer:
+        print (msg)
+
+.. code-block:: python
+
+    # join a consumer group for dynamic partition assignment and offset commits
+    from kafka import KafkaConsumer
+    consumer = KafkaConsumer('my_favorite_topic', group_id='my_favorite_group')
+    for msg in consumer:
+        print (msg)
 
->>> # join a consumer group for dynamic partition assignment and offset commits
->>> from kafka import KafkaConsumer
->>> consumer = KafkaConsumer('my_favorite_topic', group_id='my_favorite_group')
->>> for msg in consumer:
-...     print (msg)
+.. code-block:: python
 
->>> # manually assign the partition list for the consumer
->>> from kafka import TopicPartition
->>> consumer = KafkaConsumer(bootstrap_servers='localhost:1234')
->>> consumer.assign([TopicPartition('foobar', 2)])
->>> msg = next(consumer)
+    # manually assign the partition list for the consumer
+    from kafka import TopicPartition
+    consumer = KafkaConsumer(bootstrap_servers='localhost:1234')
+    consumer.assign([TopicPartition('foobar', 2)])
+    msg = next(consumer)
 
->>> # Deserialize msgpack-encoded values
->>> consumer = KafkaConsumer(value_deserializer=msgpack.loads)
->>> consumer.subscribe(['msgpackfoo'])
->>> for msg in consumer:
-...     assert isinstance(msg.value, dict)
+.. code-block:: python
 
->>> # Access record headers. The returned value is a list of tuples
->>> # with str, bytes for key and value
->>> for msg in consumer:
-...     print (msg.headers)
+    # Deserialize msgpack-encoded values
+    consumer = KafkaConsumer(value_deserializer=msgpack.loads)
+    consumer.subscribe(['msgpackfoo'])
+    for msg in consumer:
+        assert isinstance(msg.value, dict)
 
->>> # Get consumer metrics
->>> metrics = consumer.metrics()
+.. code-block:: python
+
+    # Access record headers. The returned value is a list of tuples
+    # with str, bytes for key and value
+    for msg in consumer:
+        print (msg.headers)
+
+.. code-block:: python
+
+    # Get consumer metrics
+    metrics = consumer.metrics()
 
 
 KafkaProducer
 *************
 
 KafkaProducer is a high-level, asynchronous message producer. The class is
 intended to operate as similarly as possible to the official java client.
-See <https://kafka-python.readthedocs.io/en/master/apidoc/KafkaProducer.html>
+See https://kafka-python.readthedocs.io/en/master/apidoc/KafkaProducer.html
 for more details.
 
->>> from kafka import KafkaProducer
->>> producer = KafkaProducer(bootstrap_servers='localhost:1234')
->>> for _ in range(100):
-...     producer.send('foobar', b'some_message_bytes')
+.. code-block:: python
+
+    from kafka import KafkaProducer
+    producer = KafkaProducer(bootstrap_servers='localhost:1234')
+    for _ in range(100):
+        producer.send('foobar', b'some_message_bytes')
+
+.. code-block:: python
+
+    # Block until a single message is sent (or timeout)
+    future = producer.send('foobar', b'another_message')
+    result = future.get(timeout=60)
+
+.. code-block:: python
+
+    # Block until all pending messages are at least put on the network
+    # NOTE: This does not guarantee delivery or success! It is really
+    # only useful if you configure internal batching using linger_ms
+    producer.flush()
+
+.. code-block:: python
+
+    # Use a key for hashed-partitioning
+    producer.send('foobar', key=b'foo', value=b'bar')
+
+.. code-block:: python
+
+    # Serialize json messages
+    import json
+    producer = KafkaProducer(value_serializer=lambda v: json.dumps(v).encode('utf-8'))
+    producer.send('fizzbuzz', {'foo': 'bar'})
 
->>> # Block until a single message is sent (or timeout)
->>> future = producer.send('foobar', b'another_message')
->>> result = future.get(timeout=60)
+.. code-block:: python
 
->>> # Block until all pending messages are at least put on the network
->>> # NOTE: This does not guarantee delivery or success! It is really
->>> # only useful if you configure internal batching using linger_ms
->>> producer.flush()
+    # Serialize string keys
+    producer = KafkaProducer(key_serializer=str.encode)
+    producer.send('flipflap', key='ping', value=b'1234')
 
->>> # Use a key for hashed-partitioning
->>> producer.send('foobar', key=b'foo', value=b'bar')
+.. code-block:: python
 
->>> # Serialize json messages
->>> import json
->>> producer = KafkaProducer(value_serializer=lambda v: json.dumps(v).encode('utf-8'))
->>> producer.send('fizzbuzz', {'foo': 'bar'})
+    # Compress messages
+    producer = KafkaProducer(compression_type='gzip')
+    for i in range(1000):
+        producer.send('foobar', b'msg %d' % i)
 
->>> # Serialize string keys
->>> producer = KafkaProducer(key_serializer=str.encode)
->>> producer.send('flipflap', key='ping', value=b'1234')
+.. code-block:: python
 
->>> # Compress messages
->>> producer = KafkaProducer(compression_type='gzip')
->>> for i in range(1000):
-...     producer.send('foobar', b'msg %d' % i)
+    # Include record headers. The format is list of tuples with string key
+    # and bytes value.
+    producer.send('foobar', value=b'c29tZSB2YWx1ZQ==', headers=[('content-encoding', b'base64')])
 
->>> # Include record headers. The format is list of tuples with string key
->>> # and bytes value.
->>> producer.send('foobar', value=b'c29tZSB2YWx1ZQ==', headers=[('content-encoding', b'base64')])
+.. code-block:: python
 
->>> # Get producer performance metrics
->>> metrics = producer.metrics()
+    # Get producer performance metrics
+    metrics = producer.metrics()
 
 
 Thread safety
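
The ``future.get(timeout=60)`` example above returns record metadata on success and raises on failure. A sketch of handling both outcomes, using ``kafka.errors.KafkaError`` and the topic/partition/offset fields of the returned metadata (common usage, not part of this diff):

.. code-block:: python

    from kafka import KafkaProducer
    from kafka.errors import KafkaError

    producer = KafkaProducer(bootstrap_servers='localhost:1234')
    future = producer.send('foobar', b'another_message')
    try:
        # Blocks until the broker acknowledges the send, or raises on
        # failure/timeout after retries are exhausted.
        record_metadata = future.get(timeout=10)
        print(record_metadata.topic, record_metadata.partition, record_metadata.offset)
    except KafkaError:
        # Handle the failed send here (log, retry, dead-letter, etc.)
        pass
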
@@ -154,7 +186,7 @@ kafka-python supports the following compression formats:
 - Zstandard (zstd)
 
 gzip is supported natively, the others require installing additional libraries.
-See <https://kafka-python.readthedocs.io/en/master/install.html> for more information.
+See https://kafka-python.readthedocs.io/en/master/install.html for more information.
 
 
 Optimized CRC32 Validation
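
The compression hunk above only changes link syntax, but the gzip example generalizes: ``compression_type`` selects the codec per producer, and every codec except gzip assumes its optional third-party library is installed first (the exact package names are in the install docs linked above). A sketch:

.. code-block:: python

    from kafka import KafkaProducer

    # 'gzip' works out of the box; 'snappy', 'lz4', and 'zstd' assume the
    # corresponding optional library from the install docs is present.
    producer = KafkaProducer(compression_type='lz4')
    producer.send('foobar', b'compressed message')
    producer.flush()
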
@@ -163,7 +195,7 @@ Optimized CRC32 Validation
 Kafka uses CRC32 checksums to validate messages. kafka-python includes a pure
 python implementation for compatibility. To improve performance for high-throughput
 applications, kafka-python will use `crc32c` for optimized native code if installed.
-See <https://kafka-python.readthedocs.io/en/master/install.html> for installation instructions.
+See https://kafka-python.readthedocs.io/en/master/install.html for installation instructions.
 See https://pypi.org/project/crc32c/ for details on the underlying crc32c lib.
 
 
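Since kafka-python falls back to its pure-python CRC32 implementation when ``crc32c`` is absent, a quick way to check which path a deployment will take is to try the import (a sketch; kafka-python performs this detection internally):

.. code-block:: python

    # The optional crc32c package (https://pypi.org/project/crc32c/) is used
    # automatically when importable; otherwise pure-python CRC32 is used.
    try:
        import crc32c  # noqa: F401
        print('native crc32c acceleration available')
    except ImportError:
        print('falling back to pure-python CRC32')
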

docs/index.rst

Lines changed: 68 additions & 44 deletions
@@ -31,7 +31,9 @@ failures. See `Compatibility <compatibility.html>`_ for more details.
 Please note that the master branch may contain unreleased features. For release
 documentation, please see readthedocs and/or python's inline help.
 
->>> pip install kafka-python
+.. code:: bash
+
+    pip install kafka-python
 
 
 KafkaConsumer
@@ -47,28 +49,36 @@ See `KafkaConsumer <apidoc/KafkaConsumer.html>`_ for API and configuration details.
 The consumer iterator returns ConsumerRecords, which are simple namedtuples
 that expose basic message attributes: topic, partition, offset, key, and value:
 
->>> from kafka import KafkaConsumer
->>> consumer = KafkaConsumer('my_favorite_topic')
->>> for msg in consumer:
-...     print (msg)
+.. code:: python
+
+    from kafka import KafkaConsumer
+    consumer = KafkaConsumer('my_favorite_topic')
+    for msg in consumer:
+        print (msg)
+
+.. code:: python
 
->>> # join a consumer group for dynamic partition assignment and offset commits
->>> from kafka import KafkaConsumer
->>> consumer = KafkaConsumer('my_favorite_topic', group_id='my_favorite_group')
->>> for msg in consumer:
-...     print (msg)
+    # join a consumer group for dynamic partition assignment and offset commits
+    from kafka import KafkaConsumer
+    consumer = KafkaConsumer('my_favorite_topic', group_id='my_favorite_group')
+    for msg in consumer:
+        print (msg)
 
->>> # manually assign the partition list for the consumer
->>> from kafka import TopicPartition
->>> consumer = KafkaConsumer(bootstrap_servers='localhost:1234')
->>> consumer.assign([TopicPartition('foobar', 2)])
->>> msg = next(consumer)
+.. code:: python
 
->>> # Deserialize msgpack-encoded values
->>> consumer = KafkaConsumer(value_deserializer=msgpack.loads)
->>> consumer.subscribe(['msgpackfoo'])
->>> for msg in consumer:
-...     assert isinstance(msg.value, dict)
+    # manually assign the partition list for the consumer
+    from kafka import TopicPartition
+    consumer = KafkaConsumer(bootstrap_servers='localhost:1234')
+    consumer.assign([TopicPartition('foobar', 2)])
+    msg = next(consumer)
+
+.. code:: python
+
+    # Deserialize msgpack-encoded values
+    consumer = KafkaConsumer(value_deserializer=msgpack.loads)
+    consumer.subscribe(['msgpackfoo'])
+    for msg in consumer:
+        assert isinstance(msg.value, dict)
 
 
 KafkaProducer
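
One caveat in the msgpack example (in both files): it uses ``msgpack.loads`` without importing it. A self-contained version, assuming the ``msgpack`` package from PyPI:

.. code:: python

    import msgpack  # pip install msgpack
    from kafka import KafkaConsumer

    # value_deserializer runs on each raw payload before it reaches msg.value
    consumer = KafkaConsumer(value_deserializer=msgpack.loads)
    consumer.subscribe(['msgpackfoo'])
    for msg in consumer:
        assert isinstance(msg.value, dict)
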
@@ -78,36 +88,50 @@ KafkaProducer
 The class is intended to operate as similarly as possible to the official java
 client. See `KafkaProducer <apidoc/KafkaProducer.html>`_ for more details.
 
->>> from kafka import KafkaProducer
->>> producer = KafkaProducer(bootstrap_servers='localhost:1234')
->>> for _ in range(100):
-...     producer.send('foobar', b'some_message_bytes')
+.. code:: python
+
+    from kafka import KafkaProducer
+    producer = KafkaProducer(bootstrap_servers='localhost:1234')
+    for _ in range(100):
+        producer.send('foobar', b'some_message_bytes')
+
+.. code:: python
+
+    # Block until a single message is sent (or timeout)
+    future = producer.send('foobar', b'another_message')
+    result = future.get(timeout=60)
+
+.. code:: python
+
+    # Block until all pending messages are at least put on the network
+    # NOTE: This does not guarantee delivery or success! It is really
+    # only useful if you configure internal batching using linger_ms
+    producer.flush()
+
+.. code:: python
+
+    # Use a key for hashed-partitioning
+    producer.send('foobar', key=b'foo', value=b'bar')
 
->>> # Block until a single message is sent (or timeout)
->>> future = producer.send('foobar', b'another_message')
->>> result = future.get(timeout=60)
+.. code:: python
 
->>> # Block until all pending messages are at least put on the network
->>> # NOTE: This does not guarantee delivery or success! It is really
->>> # only useful if you configure internal batching using linger_ms
->>> producer.flush()
+    # Serialize json messages
+    import json
+    producer = KafkaProducer(value_serializer=lambda v: json.dumps(v).encode('utf-8'))
+    producer.send('fizzbuzz', {'foo': 'bar'})
 
->>> # Use a key for hashed-partitioning
->>> producer.send('foobar', key=b'foo', value=b'bar')
+.. code:: python
 
->>> # Serialize json messages
->>> import json
->>> producer = KafkaProducer(value_serializer=lambda v: json.dumps(v).encode('utf-8'))
->>> producer.send('fizzbuzz', {'foo': 'bar'})
+    # Serialize string keys
+    producer = KafkaProducer(key_serializer=str.encode)
+    producer.send('flipflap', key='ping', value=b'1234')
 
->>> # Serialize string keys
->>> producer = KafkaProducer(key_serializer=str.encode)
->>> producer.send('flipflap', key='ping', value=b'1234')
+.. code:: python
 
->>> # Compress messages
->>> producer = KafkaProducer(compression_type='gzip')
->>> for i in range(1000):
-...     producer.send('foobar', b'msg %d' % i)
+    # Compress messages
+    producer = KafkaProducer(compression_type='gzip')
+    for i in range(1000):
+        producer.send('foobar', b'msg %d' % i)
 
 
 Thread safety
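
Both files' diffs end at the Thread safety section, which in the unchanged text notes that KafkaProducer can be shared across threads while KafkaConsumer cannot. A sketch of that pattern, reusing the bootstrap address from the examples above:

.. code:: python

    # One shared KafkaProducer across threads; consumers would instead get
    # one instance per thread/process, since KafkaConsumer is not thread-safe.
    import threading
    from kafka import KafkaProducer

    producer = KafkaProducer(bootstrap_servers='localhost:1234')

    def worker(n):
        producer.send('foobar', b'message from thread %d' % n)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    producer.flush()
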
