Apache Kafka client for Python; high-level & low-level consumer/producer, with great performance.
- Added the `broker_version` kwarg to `Broker.__init__` for the purpose of setting `api_version` in `FetchResponse`
- Added the `topic_name` argument to `Broker.join_group` for use in protocol metadata, visible via the Administrative API
- Added `print_managed_consumer_groups` to the CLI
- Added the `timestamp` kwarg to `Producer.produce` to pass on messages when the broker supports newer message formats
- Changed `Producer.produce` to return the produced `Message` instance
- Added `protocol_version` and `timestamp` kwargs to `Message`
- Added the `fetch_error_backoff_ms` kwarg on `SimpleConsumer`
- Added the `unblock_event` kwarg to `SimpleConsumer.consume`, used to notify the consumer that its parent `BalancedConsumer` is in the process of rebalancing
- Added a `cleanup` function to `SimpleConsumer`
- Added an `Event` that notifies the internal `SimpleConsumer` of a `BalancedConsumer` that a rebalance is in progress, fixing a bug causing partitions to be unreleased
- Fixed `BalancedConsumer` behavior when there are no partitions available
- Changed consumer groups to use `GroupMembershipProtocol` objects and become compatible with the Administrative API
- Fixed a bug causing `kafka_tools.reset_offsets` not to work in python 3
- Fixed the `MessageSet` compression scheme in API v1
- Fixed a bug in `rdkafka.SimpleConsumer` causing exceptions not to be raised from worker threads
- Fixed a bug causing `fetch_offsets` not to raise exceptions under certain conditions when it should
- `__main__`
- Stopped building the `rdkafka` extension on platforms that don't support it
- Added support in `Broker` and `Cluster` for Kafka 0.10's Administrative API
- Changed the `MemberAssignment` protocol API to more closely match the schema defined by Kafka
- Used `six.reraise` to raise worker thread exceptions for easier debugging
- Added `Producer` delivery reports
- Instantiate `ConsumerGroupProtocolMetadata` from bytestrings returned from Kafka
- Added the `broker_version` kwarg to `Broker`, `Cluster`, and `Connection`
- Changed `produce()` in the `Producer` to avoid certain deadlock situations
- `range` strategy
- `broker.version.fallback`
- Handled `struct.error` exceptions in `Producer._send_request`
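The `unblock_event` kwarg mentioned above can be illustrated with a stdlib-only sketch of the pattern: a blocking consume loop that bails out early when a parent (such as a rebalancing `BalancedConsumer`) sets an event. This is an illustrative pattern with assumed names (`consume`, `messages`), not pykafka's implementation.

```python
import queue
import threading

def consume(messages, unblock_event, timeout=0.1):
    """Block until a message arrives or unblock_event is set.

    Returns a message, or None if the parent signaled (e.g. a rebalance
    in progress) before one arrived.
    """
    while not unblock_event.is_set():
        try:
            return messages.get(timeout=timeout)
        except queue.Empty:
            continue  # no message yet; re-check the event and keep waiting
    return None

messages = queue.Queue()
unblock = threading.Event()

# A parent sets the event to unblock a consumer stuck waiting for messages.
threading.Timer(0.05, unblock.set).start()
print(consume(messages, unblock))  # prints: None
```

The short `get` timeout is what lets the loop notice the event promptly without busy-waiting; a rebalance signal therefore interrupts the consumer within one timeout interval at most.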
- Added the `broker_version` kwarg to several components. It's currently only used by the librdkafka features. The kwarg is used to facilitate the use of librdkafka via pykafka against multiple Kafka broker versions.
- Added `GroupHashingPartitioner`
- Fixed `consumer_timeout_ms`, which had been broken for `BalancedConsumer` groups
- Fixed a bug causing `Producer.__del__` to crash during finalization
- Fixed a bug causing `Producer.flush()` to wait for `linger_ms` during calls initiated by `_update()`
- Fixed a bug in `Producer._update` and `OwnedBroker.flush` causing infinite retry loops
- Changed `Producer.produce` to block while the internal broker list is being updated. This avoids possible mismatches between old and new cluster metadata used by the `Producer`.
- Changed a value stored in ZooKeeper to `b''` in python3. :warning: Since this change adjusts ZooKeeper storage formats, it should be applied with caution to production systems. Deploying this change without a careful rollout plan could cause consumers to lose track of their offsets. :warning:
- `BrokerConnection`
- `testinstances` library on which the tests depend

You can install this release via pip with `pip install --pre pykafka==2.5.0.dev1`. It will not automatically install because it's a pre-release.
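The change making `Producer.produce` block during broker-list updates can be sketched with a simple gate: callers wait until the metadata update finishes, so no message is ever routed with stale cluster metadata. This is a toy sketch of the idea, not pykafka's code; `MetadataGate` and its methods are hypothetical names.

```python
import threading

class MetadataGate:
    """Toy sketch: produce() blocks while a metadata update is in progress."""

    def __init__(self):
        self._ready = threading.Event()
        self._ready.set()  # no update running initially
        self.broker_list = ["broker-1"]

    def begin_update(self):
        self._ready.clear()  # new produce() calls will now block

    def finish_update(self, new_brokers):
        self.broker_list = new_brokers
        self._ready.set()  # release blocked producers

    def produce(self, message):
        # Wait for any in-flight update so routing never mixes old and
        # new cluster metadata.
        self._ready.wait()
        return (self.broker_list[0], message)
```

Between `begin_update()` and `finish_update()`, every `produce()` call parks on the event; once the update completes, the blocked calls resume against the fresh broker list.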
- Added the `broker_version` kwarg to several components. It's currently only used by the librdkafka features. The kwarg is used to facilitate the use of librdkafka via pykafka against multiple Kafka broker versions.
- Added `GroupHashingPartitioner`
- Fixed `consumer_timeout_ms`, which had been broken for `BalancedConsumer` groups
- Fixed a bug causing `Producer.__del__` to crash during finalization
- Fixed a bug causing `Producer.flush()` to wait for `linger_ms` during calls initiated by `_update()`
- Fixed a bug in `Producer._update` and `OwnedBroker.flush` causing infinite retry loops
- Changed `Producer.produce` to block while the internal broker list is being updated. This avoids possible mismatches between old and new cluster metadata used by the `Producer`.
- `testinstances` library on which the tests depend

You can install this release via pip with `pip install --pre pykafka==2.4.1.dev1`. It will not automatically install because it's a pre-release.
- Changed a value stored in ZooKeeper to `b''`. :warning: Since this change adjusts ZooKeeper storage formats, it should be applied with caution to production systems. Deploying this change without a careful rollout plan could cause consumers to lose track of their offsets. :warning:
- `BrokerConnection`
- Fixed a bug in `Cluster` that treated `hosts` as a ZooKeeper connection string
- Removed the `block_on_queue_full` kwarg from the rdkafka producer
- Added the `max_request_size` kwarg to the rdkafka producer
- `kafka_tools.py`
- Fixed a `NoneType` crash in `Producer` when rejecting larger messages
- Stopped `Producer` integration tests from sharing a `Consumer` instance to make test runs more consistent
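On the over-size-message fix: the safer shape is to reject a too-large payload with an explicit exception up front rather than letting a missing return value crash later with a `NoneType` error. A stdlib-only sketch; `MessageSizeTooLarge`, `check_message_size`, and the default limit are illustrative assumptions, not pykafka's actual API.

```python
class MessageSizeTooLarge(Exception):
    """Raised when a payload exceeds the producer's max_request_size."""

def check_message_size(value, max_request_size=1000012):
    # Fail fast with a descriptive error instead of returning None and
    # crashing with a NoneType error deeper in the produce path.
    if len(value) > max_request_size:
        raise MessageSizeTooLarge(
            "message is %d bytes but max_request_size is %d"
            % (len(value), max_request_size)
        )
    return value
```

Validating at the API boundary keeps the error attributable to the caller's payload rather than surfacing as an internal crash in the send path.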