Commit 6efab69

wbarnha and HalfSweet authored and committed
Update some RST documentation syntax (dpkp#130)
* docs: Update syntax in README.rst
* docs: Update code block syntax in docs/index.rst

---------

Co-authored-by: HalfSweet <[email protected]>
1 parent 2e284fb commit 6efab69
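The substance of the change is a single pattern applied throughout both files: doctest-style examples (`>>>` prompts) are rewritten as explicit RST code-block directives with indented bodies. A minimal before/after sketch of the pattern, taken from the install instructions in the diff below:

    Before:

        >>> pip install kafka-python-ng

    After:

        .. code-block:: bash

            $ pip install kafka-python-ng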

File tree

2 files changed: +174 additions, -105 deletions


README.rst

Lines changed: 104 additions & 61 deletions
@@ -32,13 +32,19 @@ check code (perhaps using zookeeper or consul). For older brokers, you can
 achieve something similar by manually assigning different partitions to each
 consumer instance with config management tools like chef, ansible, etc. This
 approach will work fine, though it does not support rebalancing on failures.
-See <https://kafka-python-ng.readthedocs.io/en/master/compatibility.html>
+
+See https://kafka-python.readthedocs.io/en/master/compatibility.html
+
 for more details.
 
 Please note that the master branch may contain unreleased features. For release
 documentation, please see readthedocs and/or python's inline help.
 
->>> pip install kafka-python-ng
+
+.. code-block:: bash
+
+    $ pip install kafka-python-ng
+
 
 
 KafkaConsumer
@@ -48,89 +54,123 @@ KafkaConsumer is a high-level message consumer, intended to operate as similarly
 as possible to the official java client. Full support for coordinated
 consumer groups requires use of kafka brokers that support the Group APIs: kafka v0.9+.
 
-See <https://kafka-python-ng.readthedocs.io/en/master/apidoc/KafkaConsumer.html>
+
+See https://kafka-python.readthedocs.io/en/master/apidoc/KafkaConsumer.html
+
 for API and configuration details.
 
 The consumer iterator returns ConsumerRecords, which are simple namedtuples
 that expose basic message attributes: topic, partition, offset, key, and value:
 
->>> from kafka import KafkaConsumer
->>> consumer = KafkaConsumer('my_favorite_topic')
->>> for msg in consumer:
-...     print (msg)
+.. code-block:: python
 
->>> # join a consumer group for dynamic partition assignment and offset commits
->>> from kafka import KafkaConsumer
->>> consumer = KafkaConsumer('my_favorite_topic', group_id='my_favorite_group')
->>> for msg in consumer:
-...     print (msg)
+    from kafka import KafkaConsumer
+    consumer = KafkaConsumer('my_favorite_topic')
+    for msg in consumer:
+        print (msg)
 
->>> # manually assign the partition list for the consumer
->>> from kafka import TopicPartition
->>> consumer = KafkaConsumer(bootstrap_servers='localhost:1234')
->>> consumer.assign([TopicPartition('foobar', 2)])
->>> msg = next(consumer)
+.. code-block:: python
 
->>> # Deserialize msgpack-encoded values
->>> consumer = KafkaConsumer(value_deserializer=msgpack.loads)
->>> consumer.subscribe(['msgpackfoo'])
->>> for msg in consumer:
-...     assert isinstance(msg.value, dict)
+    # join a consumer group for dynamic partition assignment and offset commits
+    from kafka import KafkaConsumer
+    consumer = KafkaConsumer('my_favorite_topic', group_id='my_favorite_group')
+    for msg in consumer:
+        print (msg)
 
->>> # Access record headers. The returned value is a list of tuples
->>> # with str, bytes for key and value
->>> for msg in consumer:
-...     print (msg.headers)
+.. code-block:: python
 
->>> # Get consumer metrics
->>> metrics = consumer.metrics()
+    # manually assign the partition list for the consumer
+    from kafka import TopicPartition
+    consumer = KafkaConsumer(bootstrap_servers='localhost:1234')
+    consumer.assign([TopicPartition('foobar', 2)])
+    msg = next(consumer)
+
+.. code-block:: python
+
+    # Deserialize msgpack-encoded values
+    consumer = KafkaConsumer(value_deserializer=msgpack.loads)
+    consumer.subscribe(['msgpackfoo'])
+    for msg in consumer:
+        assert isinstance(msg.value, dict)
+
+.. code-block:: python
+
+    # Access record headers. The returned value is a list of tuples
+    # with str, bytes for key and value
+    for msg in consumer:
+        print (msg.headers)
+
+.. code-block:: python
+
+    # Get consumer metrics
+    metrics = consumer.metrics()
 
 
 KafkaProducer
 *************
 
 KafkaProducer is a high-level, asynchronous message producer. The class is
 intended to operate as similarly as possible to the official java client.
-See <https://kafka-python-ng.readthedocs.io/en/master/apidoc/KafkaProducer.html>
+
+See https://kafka-python.readthedocs.io/en/master/apidoc/KafkaProducer.html
+
 for more details.
 
->>> from kafka import KafkaProducer
->>> producer = KafkaProducer(bootstrap_servers='localhost:1234')
->>> for _ in range(100):
-...     producer.send('foobar', b'some_message_bytes')
+.. code-block:: python
+
+    from kafka import KafkaProducer
+    producer = KafkaProducer(bootstrap_servers='localhost:1234')
+    for _ in range(100):
+        producer.send('foobar', b'some_message_bytes')
+
+.. code-block:: python
+
+    # Block until a single message is sent (or timeout)
+    future = producer.send('foobar', b'another_message')
+    result = future.get(timeout=60)
+
+.. code-block:: python
+
+    # Block until all pending messages are at least put on the network
+    # NOTE: This does not guarantee delivery or success! It is really
+    # only useful if you configure internal batching using linger_ms
+    producer.flush()
+
+.. code-block:: python
 
->>> # Block until a single message is sent (or timeout)
->>> future = producer.send('foobar', b'another_message')
->>> result = future.get(timeout=60)
+    # Use a key for hashed-partitioning
+    producer.send('foobar', key=b'foo', value=b'bar')
 
->>> # Block until all pending messages are at least put on the network
->>> # NOTE: This does not guarantee delivery or success! It is really
->>> # only useful if you configure internal batching using linger_ms
->>> producer.flush()
+.. code-block:: python
 
->>> # Use a key for hashed-partitioning
->>> producer.send('foobar', key=b'foo', value=b'bar')
+    # Serialize json messages
+    import json
+    producer = KafkaProducer(value_serializer=lambda v: json.dumps(v).encode('utf-8'))
+    producer.send('fizzbuzz', {'foo': 'bar'})
 
->>> # Serialize json messages
->>> import json
->>> producer = KafkaProducer(value_serializer=lambda v: json.dumps(v).encode('utf-8'))
->>> producer.send('fizzbuzz', {'foo': 'bar'})
+.. code-block:: python
 
->>> # Serialize string keys
->>> producer = KafkaProducer(key_serializer=str.encode)
->>> producer.send('flipflap', key='ping', value=b'1234')
+    # Serialize string keys
+    producer = KafkaProducer(key_serializer=str.encode)
+    producer.send('flipflap', key='ping', value=b'1234')
 
->>> # Compress messages
->>> producer = KafkaProducer(compression_type='gzip')
->>> for i in range(1000):
-...     producer.send('foobar', b'msg %d' % i)
+.. code-block:: python
 
->>> # Include record headers. The format is list of tuples with string key
->>> # and bytes value.
->>> producer.send('foobar', value=b'c29tZSB2YWx1ZQ==', headers=[('content-encoding', b'base64')])
+    # Compress messages
+    producer = KafkaProducer(compression_type='gzip')
+    for i in range(1000):
+        producer.send('foobar', b'msg %d' % i)
 
->>> # Get producer performance metrics
->>> metrics = producer.metrics()
+.. code-block:: python
+
+    # Include record headers. The format is list of tuples with string key
+    # and bytes value.
+    producer.send('foobar', value=b'c29tZSB2YWx1ZQ==', headers=[('content-encoding', b'base64')])
+
+.. code-block:: python
+
+    # Get producer performance metrics
+    metrics = producer.metrics()
 
 
 Thread safety
@@ -154,16 +194,19 @@ kafka-python-ng supports the following compression formats:
 - Zstandard (zstd)
 
 gzip is supported natively, the others require installing additional libraries.
-See <https://kafka-python-ng.readthedocs.io/en/master/install.html> for more information.
+
+See https://kafka-python.readthedocs.io/en/master/install.html for more information.
+
 
 
 Optimized CRC32 Validation
 **************************
 
 Kafka uses CRC32 checksums to validate messages. kafka-python-ng includes a pure
 python implementation for compatibility. To improve performance for high-throughput
-applications, kafka-python-ng will use `crc32c` for optimized native code if installed.
-See <https://kafka-python-ng.readthedocs.io/en/master/install.html> for installation instructions.
+applications, kafka-python will use `crc32c` for optimized native code if installed.
+See https://kafka-python.readthedocs.io/en/master/install.html for installation instructions.
+
 See https://pypi.org/project/crc32c/ for details on the underlying crc32c lib.
 
 
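Note that the two files spell the directive differently: README.rst uses `.. code-block::`, the Sphinx directive, while docs/index.rst (shown next) uses `.. code::`, the standard docutils directive, which Sphinx treats as equivalent. Both take an optional language argument for syntax highlighting, e.g.:

    .. code:: python

        from kafka import KafkaConsumer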

docs/index.rst

Lines changed: 70 additions & 44 deletions
@@ -31,7 +31,11 @@ failures. See `Compatibility <compatibility.html>`_ for more details.
 Please note that the master branch may contain unreleased features. For release
 documentation, please see readthedocs and/or python's inline help.
 
->>> pip install kafka-python-ng
+
+.. code:: bash
+
+    pip install kafka-python-ng
+
 
 
 KafkaConsumer
@@ -47,28 +51,36 @@ See `KafkaConsumer <apidoc/KafkaConsumer.html>`_ for API and configuration details.
 The consumer iterator returns ConsumerRecords, which are simple namedtuples
 that expose basic message attributes: topic, partition, offset, key, and value:
 
->>> from kafka import KafkaConsumer
->>> consumer = KafkaConsumer('my_favorite_topic')
->>> for msg in consumer:
-...     print (msg)
+.. code:: python
+
+    from kafka import KafkaConsumer
+    consumer = KafkaConsumer('my_favorite_topic')
+    for msg in consumer:
+        print (msg)
+
+.. code:: python
+
+    # join a consumer group for dynamic partition assignment and offset commits
+    from kafka import KafkaConsumer
+    consumer = KafkaConsumer('my_favorite_topic', group_id='my_favorite_group')
+    for msg in consumer:
+        print (msg)
+
+.. code:: python
 
->>> # join a consumer group for dynamic partition assignment and offset commits
->>> from kafka import KafkaConsumer
->>> consumer = KafkaConsumer('my_favorite_topic', group_id='my_favorite_group')
->>> for msg in consumer:
-...     print (msg)
+    # manually assign the partition list for the consumer
+    from kafka import TopicPartition
+    consumer = KafkaConsumer(bootstrap_servers='localhost:1234')
+    consumer.assign([TopicPartition('foobar', 2)])
+    msg = next(consumer)
 
->>> # manually assign the partition list for the consumer
->>> from kafka import TopicPartition
->>> consumer = KafkaConsumer(bootstrap_servers='localhost:1234')
->>> consumer.assign([TopicPartition('foobar', 2)])
->>> msg = next(consumer)
+.. code:: python
 
->>> # Deserialize msgpack-encoded values
->>> consumer = KafkaConsumer(value_deserializer=msgpack.loads)
->>> consumer.subscribe(['msgpackfoo'])
->>> for msg in consumer:
-...     assert isinstance(msg.value, dict)
+    # Deserialize msgpack-encoded values
+    consumer = KafkaConsumer(value_deserializer=msgpack.loads)
+    consumer.subscribe(['msgpackfoo'])
+    for msg in consumer:
+        assert isinstance(msg.value, dict)
 
 
 KafkaProducer
@@ -78,36 +90,50 @@ KafkaProducer
 The class is intended to operate as similarly as possible to the official java
 client. See `KafkaProducer <apidoc/KafkaProducer.html>`_ for more details.
 
->>> from kafka import KafkaProducer
->>> producer = KafkaProducer(bootstrap_servers='localhost:1234')
->>> for _ in range(100):
-...     producer.send('foobar', b'some_message_bytes')
+.. code:: python
+
+    from kafka import KafkaProducer
+    producer = KafkaProducer(bootstrap_servers='localhost:1234')
+    for _ in range(100):
+        producer.send('foobar', b'some_message_bytes')
+
+.. code:: python
+
+    # Block until a single message is sent (or timeout)
+    future = producer.send('foobar', b'another_message')
+    result = future.get(timeout=60)
+
+.. code:: python
+
+    # Block until all pending messages are at least put on the network
+    # NOTE: This does not guarantee delivery or success! It is really
+    # only useful if you configure internal batching using linger_ms
+    producer.flush()
+
+.. code:: python
+
+    # Use a key for hashed-partitioning
+    producer.send('foobar', key=b'foo', value=b'bar')
 
->>> # Block until a single message is sent (or timeout)
->>> future = producer.send('foobar', b'another_message')
->>> result = future.get(timeout=60)
+.. code:: python
 
->>> # Block until all pending messages are at least put on the network
->>> # NOTE: This does not guarantee delivery or success! It is really
->>> # only useful if you configure internal batching using linger_ms
->>> producer.flush()
+    # Serialize json messages
+    import json
+    producer = KafkaProducer(value_serializer=lambda v: json.dumps(v).encode('utf-8'))
+    producer.send('fizzbuzz', {'foo': 'bar'})
 
->>> # Use a key for hashed-partitioning
->>> producer.send('foobar', key=b'foo', value=b'bar')
+.. code:: python
 
->>> # Serialize json messages
->>> import json
->>> producer = KafkaProducer(value_serializer=lambda v: json.dumps(v).encode('utf-8'))
->>> producer.send('fizzbuzz', {'foo': 'bar'})
+    # Serialize string keys
+    producer = KafkaProducer(key_serializer=str.encode)
+    producer.send('flipflap', key='ping', value=b'1234')
 
->>> # Serialize string keys
->>> producer = KafkaProducer(key_serializer=str.encode)
->>> producer.send('flipflap', key='ping', value=b'1234')
+.. code:: python
 
->>> # Compress messages
->>> producer = KafkaProducer(compression_type='gzip')
->>> for i in range(1000):
-...     producer.send('foobar', b'msg %d' % i)
+    # Compress messages
+    producer = KafkaProducer(compression_type='gzip')
+    for i in range(1000):
+        producer.send('foobar', b'msg %d' % i)
 
 
 Thread safety
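A quick way to sanity-check this kind of RST change is to render a snippet with docutils. A minimal sketch, assuming docutils and pygments are installed; note that plain docutils only knows the `.. code::` spelling used in docs/index.rst, while the `.. code-block::` spelling in README.rst needs Sphinx to render. The helper below is hypothetical and not part of the commit:

    # check_rst.py - hypothetical helper, not part of this commit
    from docutils.core import publish_string

    rst = """\
    .. code:: python

       from kafka import KafkaConsumer
       consumer = KafkaConsumer('my_favorite_topic')
    """

    # publish_string parses the snippet and returns rendered HTML bytes;
    # a malformed directive would raise or embed a system message instead.
    html = publish_string(source=rst, writer_name='html')
    assert b'literal-block' in html  # the directive produced a code block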
