Confluent's Kafka Python Client

v2.3.0 is a feature release with the following features, fixes and enhancements:

- Added describe_cluster() and describe_topics() to the Admin API (@jainruchir, #1635).
- Added list_offsets (#1576).
- Added Rack to the Node type, so AdminAPI calls can expose racks for brokers (currently, all Describe Responses) (#1635, @jainruchir).

confluent-kafka-python is based on librdkafka v2.3.0, see the librdkafka release notes for a complete list of changes, enhancements, fixes and upgrade considerations.
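A sketch of how the new Admin calls fit together (the broker address and topic name are placeholders, and the helper function is mine, not part of the library; assumes confluent-kafka >= 2.3.0 and a reachable cluster when the function is actually called):

```python
try:
    from confluent_kafka import TopicCollection
    from confluent_kafka.admin import AdminClient
except ImportError:  # requires confluent-kafka >= 2.3.0
    AdminClient = TopicCollection = None

def broker_racks(bootstrap="localhost:9092", topics=("orders",)):
    """Describe topics, then report each broker's rack (may be None)."""
    admin = AdminClient({"bootstrap.servers": bootstrap})

    # describe_topics() takes a TopicCollection and returns {name: Future}.
    for name, fut in admin.describe_topics(TopicCollection(list(topics))).items():
        desc = fut.result()
        print(name, [p.id for p in desc.partitions])

    # describe_cluster() returns a single Future; Node objects now carry .rack.
    cluster = admin.describe_cluster().result()
    return {node.id: node.rack for node in cluster.nodes}
```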
v2.2.0 is a feature release:

confluent-kafka-python is based on librdkafka v2.2.0, see the librdkafka release notes for a complete list of changes, enhancements, fixes and upgrade considerations.
v2.1.1 is a maintenance release:

confluent-kafka-python is based on librdkafka v2.1.1, see the librdkafka release notes for a complete list of changes, enhancements, fixes and upgrade considerations.
v2.1.0 is a feature release with the following features, fixes and enhancements:

- Added set_sasl_credentials. This new method (on the Producer, Consumer, and AdminClient) allows modifying the stored SASL PLAIN/SCRAM credentials that will be used for subsequent (new) connections to a broker (#1511).

confluent-kafka-python is based on librdkafka v2.1.0, see the librdkafka release notes for a complete list of changes, enhancements, fixes and upgrade considerations.
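A minimal sketch of credential rotation on a live client (the broker address, usernames, and passwords are placeholders; assumes confluent-kafka >= 2.1.0):

```python
try:
    from confluent_kafka import Producer
except ImportError:  # library not installed; skip the demo
    Producer = None

def rotate_sasl_credentials(client, username, password):
    # set_sasl_credentials() replaces the stored SASL PLAIN/SCRAM pair;
    # existing connections are unaffected, new connections use the new pair.
    client.set_sasl_credentials(username, password)

if Producer is not None:
    p = Producer({
        "bootstrap.servers": "broker:9093",   # placeholder address
        "security.protocol": "SASL_SSL",
        "sasl.mechanism": "SCRAM-SHA-256",
        "sasl.username": "old-user",
        "sasl.password": "old-secret",
    })
    rotate_sasl_credentials(p, "new-user", "new-secret")
```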
v2.0.2 is a feature release with the following features, fixes and enhancements:

- Added the list_consumer_groups Admin operation. Supports listing by state.
- Added the describe_consumer_groups Admin operation. Supports multiple groups.
- Added the delete_consumer_groups Admin operation. Supports multiple groups.
- Added the list_consumer_group_offsets Admin operation. Currently supports only one group with multiple partitions. Supports the require_stable option.
- Added the alter_consumer_group_offsets Admin operation. Currently supports only one group with multiple offsets.
- Added the normalize.schemas configuration property to the Schema Registry client (@rayokota, #1406).
- Added metadata to the TopicPartition type and commit() (#1410).
- Added consumer.memberid() for getting the member id assigned to the consumer in a consumer group (#1154).
- Implemented the nb_bool method for the Producer, so that the default (which uses len) is not used. This avoids situations where a producer with no enqueued items would evaluate to False (@vladz-sternum, #1445).
- Deprecated AvroProducer and AvroConsumer. Use AvroSerializer and AvroDeserializer instead.
- Deprecated list_groups. Use list_consumer_groups and describe_consumer_groups instead.

The OpenSSL 3.0.x upgrade in librdkafka requires a major version bump, as some legacy ciphers need to be explicitly configured to continue working, but it is highly recommended NOT to use them. The rest of the API remains backward compatible.

confluent-kafka-python is based on librdkafka v2.0.2, see the librdkafka v2.0.0 release notes and later ones for a complete list of changes, enhancements, fixes and upgrade considerations.

Note: There were no v2.0.0 and v2.0.1 releases.
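A sketch of the new consumer-group Admin operations (the group name and broker address are placeholders, and the helper function is mine; assumes confluent-kafka >= 2.0.2 and a reachable cluster when called):

```python
try:
    from confluent_kafka import ConsumerGroupState, ConsumerGroupTopicPartitions
    from confluent_kafka.admin import AdminClient
except ImportError:  # requires confluent-kafka >= 2.0.2
    AdminClient = None

def stable_groups_and_offsets(bootstrap="localhost:9092", group="my-group"):
    admin = AdminClient({"bootstrap.servers": bootstrap})

    # list_consumer_groups() can filter by state and returns a Future.
    listing = admin.list_consumer_groups(
        states={ConsumerGroupState.STABLE}).result()
    stable = [g.group_id for g in listing.valid]

    # list_consumer_group_offsets() currently accepts exactly one group;
    # leaving topic_partitions unset fetches offsets for all partitions.
    request = ConsumerGroupTopicPartitions(group)
    offsets = admin.list_consumer_group_offsets([request])[group].result()
    return stable, offsets.topic_partitions
```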
v1.9.2 is a maintenance release:

confluent-kafka-python is based on librdkafka v1.9.2, see the librdkafka release notes for a complete list of changes, enhancements, fixes and upgrade considerations.
v1.9.0 is a feature release:

confluent-kafka-python is based on librdkafka v1.9.0, see the librdkafka release notes for a complete list of changes, enhancements, fixes and upgrade considerations.
v1.8.2 is a maintenance release with the following fixes and enhancements:

- Added use.deprecated.format to ProtobufSerializer and ProtobufDeserializer. See Upgrade considerations below for more information.
- Added use.latest.version and skip.known.types (Protobuf) to the Serializer classes (Robert Yokota, #1133).
- list_topics() and list_groups() added to AdminClient.

confluent-kafka-python is based on librdkafka v1.8.2, see the librdkafka release notes for a complete list of changes, enhancements, fixes and upgrade considerations.

Note: There were no v1.8.0 and v1.8.1 releases.
Upgrade considerations

Prior to this version, the confluent-kafka-python client had a bug where nested protobuf schema indexes were incorrectly serialized, causing incompatibility with other Schema Registry protobuf consumers and producers.

This has now been fixed, but since the old defective serialization and the new correct serialization are mutually incompatible, the user of confluent-kafka-python needs to make an explicit choice of which serialization format to use during a transitory phase while old producers and consumers are upgraded.

The ProtobufSerializer and ProtobufDeserializer constructors now both take a (for the time being) required configuration dictionary in which the use.deprecated.format configuration property must be explicitly set.

Producers should be upgraded first: as long as there are old (<=v1.7.0) Python consumers reading from the topics being produced to, the new (>=v1.8.2) Python producers must be configured with use.deprecated.format set to True.

When all existing messages in the topic have been consumed by the older consumers, the consumers should be upgraded, and both the new producers and the new consumers must set use.deprecated.format to False.

The requirement to explicitly set use.deprecated.format will be removed in a future version, and the setting will then default to False (the new format).
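The transition above can be sketched as follows (the helper function and its argument names are mine, not part of the library; msg_type stands for a protobuf-generated message class and sr_client for a SchemaRegistryClient):

```python
try:
    from confluent_kafka.schema_registry.protobuf import (
        ProtobufDeserializer, ProtobufSerializer)
except ImportError:  # requires confluent-kafka >= 1.8.2
    ProtobufSerializer = ProtobufDeserializer = None

def make_protobuf_serde(msg_type, sr_client, old_consumers_present):
    """Build a serializer/deserializer pair for the transition.

    old_consumers_present should stay True while any <=v1.7.0 consumers
    still read the topic, and become False once everything is upgraded.
    """
    # The configuration dictionary is (for now) required, and
    # use.deprecated.format must be set in it explicitly.
    conf = {"use.deprecated.format": old_consumers_present}
    return (ProtobufSerializer(msg_type, sr_client, conf),
            ProtobufDeserializer(msg_type, conf))
```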
v1.7.0 is a maintenance release with the following fixes and enhancements:
confluent-kafka-python is based on librdkafka v1.7.0, see the librdkafka release notes for a complete list of changes, enhancements, fixes and upgrade considerations.
v1.6.1 is a feature release:

- Added return_record_name=True to AvroDeserializer (@slominskir, #1028).
- Fixed the deprecated schema.Parse call (@casperlehmann, #1006).
- Added **kwargs to the legacy AvroProducer and AvroConsumer constructors to support all Consumer and Producer base class constructor arguments, such as logger (@venthur, #699).
- Fixed: producer.flush() could return a non-zero value without hitting the specified timeout.

confluent-kafka-python is based on librdkafka v1.6.1, see the librdkafka release notes for a complete list of changes, enhancements, fixes and upgrade considerations.
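A sketch of the return_record_name flag (the Schema Registry URL is a placeholder and the helper function is mine; keyword arguments are used because the positional parameter order of AvroDeserializer has differed between releases):

```python
try:
    from confluent_kafka.schema_registry import SchemaRegistryClient
    from confluent_kafka.schema_registry.avro import AvroDeserializer
except ImportError:  # requires confluent-kafka >= 1.6.1
    AvroDeserializer = SchemaRegistryClient = None

def make_union_deserializer(schema_str, sr_url="http://localhost:8081"):
    sr_client = SchemaRegistryClient({"url": sr_url})
    # With return_record_name=True the deserializer returns a
    # (record_name, value) tuple instead of just the value, which lets a
    # consumer tell apart the branches of a union writer schema.
    return AvroDeserializer(schema_registry_client=sr_client,
                            schema_str=schema_str,
                            return_record_name=True)
```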