aiokafka: asyncio client for Kafka

Changelog
New features:
* Added the consumer.last_poll_timestamp(partition) method, which gives the
  millisecond timestamp of the last update of the highwater and LSO (last
  stable offset). (issue #523 and pr #526 by @aure-olli)
Bugfixes:
* Handle RequestTimedOutError in the coordinator._do_commit_offsets() method to
  explicitly mark the coordinator as dead. (issue #584 and pr #585 by
  @FedirAlifirenko)
* Handle asyncio.TimeoutError on metadata requests to the broker and on
  metadata updates. (issue #576 and pr #577 by @MichalMazurek)

Improved Documentation:
Bugfixes:

New features:
* api_version="0.9". (Big thanks to @cyrbil and @jsurloppe for working on this)

Bugfixes:
New features:
* Added support for idempotent producing via the enable_idempotence parameter.
* Added the fetch_max_bytes option to AIOKafkaConsumer. This can help limit the
  amount of data transferred in a single roundtrip to the broker, which is
  essential for consumers with a large number of partitions.
Bugfixes:
* Fixed the consumer with group=None resetting offsets on every metadata
  update. (issue #441)
* Fixed handling of the api_version parameter; before, the parameter was
  ignored.
* Fixed issues #444 and #436, related to a memory leak in asyncio.shield().
The work here was concentrated on fixing bugs after the coordination
refactoring in v0.4.0. Hope it will serve you better now!
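The class of asyncio.shield() leak mentioned above can be illustrated with a stdlib-only sketch. This is a generic illustration of the pattern, not aiokafka's actual internal code: shielding an inner task protects it from an outer timeout, so the caller must cancel or await it explicitly, or such tasks pile up.

```python
import asyncio

async def slow_io():
    await asyncio.sleep(30)  # stands in for a long broker request

async def main():
    inner = asyncio.ensure_future(slow_io())
    try:
        # shield() protects `inner` from the cancellation wait_for triggers
        await asyncio.wait_for(asyncio.shield(inner), timeout=0.01)
    except asyncio.TimeoutError:
        pass
    # The inner task survived the timeout: forgetting such tasks leaks memory,
    # so they must be cancelled (or awaited) explicitly.
    print("inner cancelled:", inner.cancelled())  # prints: inner cancelled: False
    inner.cancel()
    return inner

leaked_task = asyncio.run(main())
```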
Bugfix:
Infrastructure:
Major changes:
Minor changes:
* Added a timestamp field to the produced message's metadata. This is needed to
  find LOG_APPEND_TIME configured timestamps.
* Consumer.seek() and similar APIs now raise a proper ValueError on validation
  failure instead of AssertionError.
Bug fixes:
* Fixed the connections_max_idle_ms option; earlier it was only applied to the
  bootstrap socket. (PR #299)
* Fixed a consumer.stop() side effect of logging a ConsumerStoppedError
  exception. (issue #263)
* Fixed InvalidStateError('Exception is not set.') errors. (PR #249 by
  @aerkert)
* Don't fail GroupCoordinator._on_join_prepare() if commit_offset() throws an
  exception. (PR #230 by @shargan)

Big thanks to:
* Added the AIOKafkaProducer.flush() method. (PR #209 by @vineet-rh)
* float("inf") for timeout. (PR #210 by @dmitry-moroz)