RabbitMQ Server Versions

Open source RabbitMQ: core server and tier 1 (built-in) plugins

v3.13.2


RabbitMQ 3.13.2

RabbitMQ 3.13.2 is a maintenance release in the 3.13.x release series. Starting June 1st, 2024, community support for this series will only be provided to eligible non-paying users.

Please refer to the upgrade section from the 3.13.0 release notes if upgrading from a version prior to 3.13.0.

This release requires Erlang 26 and supports Erlang versions up to 26.2.x. RabbitMQ and Erlang/OTP Compatibility Matrix has more details on Erlang version requirements for RabbitMQ.

Minimum Supported Erlang Version

As of 3.13.0, RabbitMQ requires Erlang 26. Nodes will fail to start on older Erlang releases.

Users upgrading from 3.12.x (or older releases) on Erlang 25 to 3.13.x on Erlang 26 (both RabbitMQ and Erlang are upgraded at the same time) must consult the v3.12.0 release notes and v3.13.0 release notes first.

Changes Worth Mentioning

Release notes can be found on GitHub at rabbitmq-server/release-notes.

Core Broker

Bug Fixes

  • Several quorum queue WAL and segment file operations are now more resilient to certain filesystem operation failures.

    GitHub issue: #11113

  • Classic queues v2 could run into an exception after a node restart.

    GitHub issue: #11111

  • Peer discovery failed in some IPv6-only environments. This behavior was new in 3.13.x.

    GitHub issue: #10728

  • rabbitmqctl stop_app is now faster, in particular for nodes that are not under significant load.

    GitHub issue: #11075

  • x-death counter was not incremented for messages that expired due to message TTL. This behavior was new in 3.13.x.

    GitHub issue: #10709

  • Quorum queue replica removal is now more resilient in clusters under close to peak load, a condition that can trigger timeouts for certain operations involving multiple nodes.

    GitHub issue: #11065

  • rabbitmq-server (the shell script) now propagates the exit code from the runtime process.

    Contributed by @giner.

    GitHub issue: #10819

Enhancements

  • Definition import now handles a scenario where some virtual hosts do not have the default queue type metadata key set.

    GitHub issue: #10897

  • When a virtual host is deleted, several more internal events are emitted: for example, the events related to removal of user permissions and runtime parameters associated with the virtual host.

    GitHub issue: #11077

CLI Tools

Bug Fixes

  • rabbitmqctl list_unresponsive_queues now supports the (queue) type column.

    Contributed by @aaron-seo.

    GitHub issue: #11081

MQTT Plugin

Bug Fixes

  • MQTT clients that did not configure a will (message) delay interval could run into an exception due to an unnecessary permission check on the will target.

    GitHub issue: #11024

  • Messages published by MQTT clients were missing the timestamp_in_ms (the more precise header). This behavior was new in 3.13.x.

    GitHub issue: #10925

  • Messages published using QoS 0 were unintentionally marked as durable internally.

    GitHub issue: #11012

Management Plugin

Bug Fixes

  • GET /api/queues/{vhost}/{name} could return duplicate keys for quorum queues.

    GitHub issue: #10929

  • Several endpoints responded with a 500 instead of a 404 when the target virtual host did not exist.

    Partially contributed by @LoisSotoLopez.

    GitHub issue: #10901

Kubernetes Peer Discovery Plugin

Enhancements

  • More TLS client settings can now be configured:

    cluster_formation.k8s.tls.cacertfile = /path/to/kubernetes/api/ca/certificate.pem
    cluster_formation.k8s.tls.certfile = /path/to/client/tls/certificate.pem
    cluster_formation.k8s.tls.keyfile = /path/to/client/tls/private_key.pem
    
    cluster_formation.k8s.tls.verify = verify_peer
    cluster_formation.k8s.tls.fail_if_no_peer_cert = true
    
    

    GitHub issue: #10916

JMS Topic Exchange Plugin

Enhancements

  • The plugin now stores its state on multiple nodes.

    GitHub issue: #11091

Shovel Plugin

Bug Fixes

  • Shovel metrics and internal state are now deleted when their shovel is deleted, regardless of which node hosted it and which node the deleting (CLI or HTTP API) operation targeted.

    GitHub issue: #11101

  • The rabbitmqctl list_shovels CLI command now lists shovels running on all cluster nodes, not just the target node.

    GitHub issue: #11119

Dependency Changes

Source Code Archives

To obtain source code of the entire distribution, please download the archive named rabbitmq-server-3.13.2.tar.xz instead of the source tarball produced by GitHub.

v3.13.1


RabbitMQ 3.13.1

RabbitMQ 3.13.1 is a maintenance release in the 3.13.x release series. Starting June 1st, 2024, community support for this series will only be provided to eligible non-paying users.

Please refer to the upgrade section from the 3.13.0 release notes if upgrading from a version prior to 3.13.0.

This release requires Erlang 26 and supports Erlang versions up to 26.2.x. RabbitMQ and Erlang/OTP Compatibility Matrix has more details on Erlang version requirements for RabbitMQ.

Minimum Supported Erlang Version

As of 3.13.0, RabbitMQ requires Erlang 26. Nodes will fail to start on older Erlang releases.

Users upgrading from 3.12.x (or older releases) on Erlang 25 to 3.13.x on Erlang 26 (both RabbitMQ and Erlang are upgraded at the same time) must consult the v3.12.0 release notes and v3.13.0 release notes first.

Changes Worth Mentioning

Release notes can be found on GitHub at rabbitmq-server/release-notes.

Core Broker

Bug Fixes

  • Classic queue v2 message store compaction could fall behind under sufficiently high load, significantly increasing the node's disk space footprint.

    GitHub issues: #10696, #10681

  • Improved quorum queue safety in mixed version clusters.

    GitHub issue: #10664

  • When Khepri was enabled and virtual host recovery failed, subsequent recovery attempts also failed.

    GitHub issue: #10742

  • Messages published without any headers set on them did not have a headers property set on them. This change compared to 3.12.x was not intentional.

    GitHub issues: #10623, #10620

  • The free disk space monitor on Windows ran into an exception if an external call to win32sysinfo.exe timed out.

    GitHub issue: #10597

Enhancements

  • channel_max_per_node is a new per-node limit that caps the number of AMQP 0-9-1 channels that can be concurrently open across all clients connected to a node:

    # rabbitmq.conf
    channel_max_per_node = 5000
    

    This is a guardrail meant to protect nodes from application-level channel leaks.

    Contributed by @illotum (AWS).

    GitHub issue: #10754

Stream Plugin

Bug Fixes

  • Avoids a Windows-specific stream log corruption that affected some deployments.

    GitHub issue: #10822

  • When a super stream cannot be created because of a duplicate partition name, a more informative error message is now used.

    GitHub issue: #10535

CLI Tools

Bug Fixes

  • rabbitmq-plugins list --formatter=json --silent will no longer emit any warnings when some of the plugins in the enabled plugins file are missing.

    Contributed by @Ayanda-D.

    GitHub issue: #10870

OAuth 2 Plugin

Bug Fixes

  • Configuring a JWKS URL without specifying a CA certificate resulted in an exception with Erlang 26's TLS implementation.

    GitHub issue: #8547

Management Plugin

Bug Fixes

  • Set default sort query parameter value for better compatibility with an external Prometheus scraper. Note that the built-in Prometheus plugin is the recommended way of monitoring RabbitMQ using Prometheus-compatible tools.

    GitHub issue: #10610

  • When a tab (Connections, Queues and Streams, etc) is switched, a table configuration pane from the previously selected tab is now hidden.

    Contributed by @ackepenek.

    GitHub issue: #10799

Enhancements

  • GET /api/queues/{vhost}/{name} now supports enable_queue_totals as well as disable_stats. This combination of query parameters can be used to retrieve message counters while greatly reducing the number of metrics returned by the endpoints.

    Contributed by @aaron-seo (AWS).

    GitHub issue: #10839
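
    For example, a request combining both parameters might look like this (a sketch assuming default credentials, the default management port, and a hypothetical queue named orders in the default virtual host, which must be URL-encoded as %2F):

    curl -u guest:guest \
      'http://localhost:15672/api/queues/%2F/orders?enable_queue_totals=true&disable_stats=true'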

Federation Plugin

Enhancements

  • Exchange federation links can now be configured to use a custom queue type for their internal buffers.

    To use a quorum queue, set the queue-type federation policy key to quorum.

    GitHub issues: #4683, #10663
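
    A minimal sketch of what this could look like when defining an upstream (the upstream name and URI are placeholders; the exact parameter shape should be confirmed against the federation documentation):

    rabbitmqctl set_parameter federation-upstream origin \
      '{"uri": "amqp://upstream.hostname", "queue-type": "quorum"}'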

  • rabbitmq_federation_running_link_count is a new metric provided via Prometheus.

    GitHub issue: #10345

Dependency Changes

  • osiris was updated to 1.8.1
  • khepri was upgraded to 0.13.0
  • cowboy was updated to 2.12.0

Source Code Archives

To obtain source code of the entire distribution, please download the archive named rabbitmq-server-3.13.1.tar.xz instead of the source tarball produced by GitHub.

v3.13.0


RabbitMQ 3.13.0

RabbitMQ 3.13.0 is a new feature release.

The 3.13.x release series is covered by community support through March 1, 2025 and by extended commercial support through September 1, 2025.

Highlights

This release includes several new features, optimizations, internal changes in preparation for RabbitMQ 4.x, and a major update to the RabbitMQ website.

The user-facing areas that have seen the biggest improvements in this release are

  • Khepri now can be used as an alternative schema data store in RabbitMQ, replacing Mnesia
  • MQTTv5 support
  • Support for consumer-side stream filtering
  • A new common message container format used internally, based on the AMQP 1.0 message format
  • Improved classic non-mirrored queue performance with message sizes larger than 4 KiB (or a different customized CQ index embedding threshold)
  • Classic queues storage implementation version 2 (CQv2) is now highly recommended for all new deployments. CQv2 meaningfully improves performance of non-mirrored classic queues for most workloads

See Compatibility Notes below to learn about breaking or potentially breaking changes in this release.

Release Artifacts

RabbitMQ releases are distributed via GitHub. Debian and RPM packages are available via Cloudsmith mirrors.

Community Docker image, Chocolatey package, and the Homebrew formula are other installation options. They are updated with a delay.

Erlang/OTP Compatibility Notes

This release requires Erlang 26.x.

Provisioning Latest Erlang Releases explains what package repositories and tools can be used to provision latest patch versions of Erlang 26.x.

Upgrading to 3.13

Documentation guides on upgrades

See the Upgrading guide for documentation on upgrades and GitHub releases for release notes of individual releases.

Note that since 3.12.0 requires all feature flags to be enabled before upgrading, there is no upgrade path from 3.11.24 (or a later patch release) straight to 3.13.0.

Required Feature Flags

This release does not graduate any feature flags.

However, all users are highly encouraged to enable all feature flags before upgrading to this release from 3.12.x.

Mixed version cluster compatibility

RabbitMQ 3.13.0 nodes can run alongside 3.12.x nodes. 3.13.x-specific features can only be made available when all nodes in the cluster upgrade to 3.13.0 or a later patch release in the new series.

While operating in mixed version mode, some aspects of the system may not behave as expected. The list of known behavior changes is covered below. Once all nodes are upgraded to 3.13.0, these irregularities will go away.

Mixed version clusters are a mechanism that allows rolling upgrade and are not meant to be run for extended periods of time (no more than a few hours).

Recommended Post-upgrade Procedures

Switch Classic Queues to CQv2

We recommend switching classic queues to CQv2 after all cluster nodes have been upgraded, at first using policies, and then eventually using a setting in rabbitmq.conf. Upgrading classic queues to CQv2 at boot time using the configuration file setting can be potentially unsafe in environments where deprecated classic mirrored queues still exist.

For new clusters, adopting CQv2 from the start is highly recommended:

# CQv2 should be used by default for all new clusters
classic_queue.default_version = 2
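
The policy-based route mentioned above can be sketched like this (the policy name and pattern are illustrative; queue-version is the relevant policy key):

# applies CQv2 to all classic queues matched by the pattern
rabbitmqctl set_policy cq-v2 ".*" '{"queue-version": 2}' --apply-to queues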

Compatibility Notes

This release includes a few potentially breaking changes.

Minimum Supported Erlang Version

Starting with this release, RabbitMQ requires Erlang 26.x. Nodes will fail to start on older Erlang releases.

Client Library Compatibility

Client libraries that were compatible with RabbitMQ 3.11.x and 3.12.x will be compatible with 3.13.0. RabbitMQ Stream Protocol clients must be upgraded to their latest versions in order to support the stream filtering feature introduced in this release.

Consistency Model and Schema Modification Visibility Guarantees of Khepri and Mnesia

Khepri has an important difference from Mnesia when it comes to schema modifications such as queue or stream declarations, or binding declarations. These changes won't be noticeable with many workloads but can affect some, in particular, certain integration tests.

Consider two scenarios, A and B.

Scenario A

There is only one client. The client performs the following steps:

  1. It declares a queue Q
  2. It binds Q to an exchange X
  3. It publishes a message M to the exchange X
  4. It expects the message to be routed to queue Q
  5. It consumes the message

In this scenario, there should be no observable difference in behavior, and the client's expectations will be met.

Scenario B

There are two clients, One and Two, connected to nodes R1 and R3, and using the same virtual host. Node R2 has no client connections.

Client One performs the following steps:

  1. It declares a queue Q
  2. It binds Q to an exchange X
  3. It gets a queue declaration confirmation back
  4. It notifies Client Two, or Client Two implicitly finds out, that it has finished the steps above (for example, in an integration test)
  5. Client Two publishes a message M to X
  6. Clients One and Two expect the message to be routed to Q

In this scenario, on step three Mnesia would return when all cluster nodes have committed the update. Khepri, however, will return when a majority of nodes, including the node handling Client One's operations, have committed it.

This may include nodes R1 and R2 but not node R3, meaning that message M published by Client Two connected to node R3 in the above example is not guaranteed to be routed.

Once all schema changes propagate to node R3, Client Two's subsequent publishes on node R3 will be guaranteed to be routed.

This is a trade-off of Raft-based systems, which assume that a write accepted by a majority of nodes can be considered a success.

Workaround Strategies

To satisfy Client Two's expectations in scenario B, Khepri could perform consistent queries (involving a majority of replicas) of bindings when routing messages, but that would significantly impact throughput for certain protocols (such as MQTT) and exchange/destination types (anything that resembles a topic exchange in AMQP 0-9-1).

Applications that rely on multiple connections that depend on a shared topology have several coping strategies.

If an application uses two or more connections to different nodes, it can declare its topology on boot and then inject a short pause (1-2 seconds) before proceeding with other operations.

Applications that rely on dynamic topologies can switch to use a "static" set of exchanges and bindings.

Application components that do not need a shared topology can each configure their own queues/streams/bindings.

Test suites that use multiple connections to different nodes can choose to use just one connection or connect to the same node, or inject a pause, or await a certain condition that indicates that the topology is in place.

TLS Defaults

Starting with Erlang 26, client side TLS peer certificate chain verification settings are enabled by default in most contexts: from federation links to shovels to TLS-enabled LDAP client connections.

If using TLS peer certificate chain verification is not practical or necessary, it can be disabled. Please refer to the docs of the feature in question, for example, this one on TLS-enabled LDAP client connections.

Management Plugin and HTTP API

The GET /api/queues HTTP API endpoint has dropped several rarely used metrics, resulting in up to 25% traffic savings.

MQTT Plugin

mqtt.subscription_ttl (in milliseconds) configuration setting was replaced with mqtt.max_session_expiry_interval_seconds (in seconds). A 3.13 RabbitMQ node will fail to boot if the old configuration setting is set. For example, if you set mqtt.subscription_ttl = 3600000 (1 hour) prior to 3.13, replace that setting with mqtt.max_session_expiry_interval_seconds = 3600 (1 hour) in 3.13.
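
In rabbitmq.conf terms, the migration looks like this:

# pre-3.13 setting, no longer accepted (milliseconds)
# mqtt.subscription_ttl = 3600000

# 3.13 equivalent (seconds)
mqtt.max_session_expiry_interval_seconds = 3600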

rabbitmqctl node_health_check is Now a No-Op

rabbitmqctl node_health_check has been deprecated for over three years and is now a no-op (does nothing).

See the Health Checks section in the monitoring guide to find out what modern alternatives are available.

openSUSE Leap Package is not Provided

An openSUSE Leap package will not be provided with this release of RabbitMQ.

This release requires Erlang 26 and there is an Erlang 26 package available from Erlang Factory but the package depends on glibc 2.34, and all currently available openSUSE Leap releases (up to 15.5) ship with 2.31 at most.

Team RabbitMQ would like to continue building an openSUSE Leap package when a Leap 15.x-compatible Erlang 26 package becomes publicly available.

Getting Help

Any questions about this release, upgrades or RabbitMQ in general are welcome in GitHub Discussions or on our community Discord.

Changes Worth Mentioning

Release notes are kept under rabbitmq-server/release-notes.

A New RabbitMQ Website

This 3.13.0 release is accompanied by a major update to the RabbitMQ website.

Some of its improvements include:

  • Access to doc guides for multiple release series: 3.13.x and 3.12.x, with more versions coming as new RabbitMQ release series come out
  • A reworked table of contents and navigation
  • Search over both doc guides and blog content

Note: We hope you enjoy the new website. More improvements are coming soon: we are revising the documentation table of contents and adding navigational topics to help you move around and find the documentation you are looking for faster. We will keep you posted!

Core Server

Enhancements

  • Khepri now can be used as an alternative schema data store in RabbitMQ, by enabling a feature flag:

    rabbitmqctl enable_feature_flag khepri_db
    

    In practical terms this means that it will be possible to swap Mnesia for a Raft-based data store that will predictably recover from network partitions and node failures, the same way quorum queues and streams already do. At the same time, this means that RabbitMQ clusters now must have a majority of nodes online at all times, or all client operations will be refused.

    Like quorum queues and streams, Khepri uses RabbitMQ's Raft implementation under the hood. With Khepri enabled, all key modern features of RabbitMQ will use the same fundamental approach to recovery from failures, relying on a library that passes a Jepsen test suite.

    Team RabbitMQ intends to make Khepri the default schema database starting with RabbitMQ 4.0.

    GitHub issue: #7206

  • Messages are now internally stored using a new common container format heavily influenced by AMQP 1.0. This is a major step towards a protocol-agnostic core: a common format that encapsulates a sum of data types used by the protocols RabbitMQ supports, plus annotations for routing, dead-lettering state, and other purposes.

    AMQP 1.0, AMQP 0-9-1, MQTT and STOMP have or will adopt this internal representation in upcoming releases. RabbitMQ Stream protocol already uses the AMQP 1.0 message container structure internally.

    This common internal format will allow for more correct and potentially more efficient multi-protocol support in RabbitMQ, and will allow most cross-protocol translation rough edges to be smoothed out.

    GitHub issue: #5077

  • Target quorum queue replica state is now continuously reconciled.

    When the number of online replicas of a quorum queue goes below (or above) its target, new replicas will be automatically placed if enough cluster nodes are available. This is a more automatic version of how quorum queue replicas have originally been grown.

    For automatic shrinking of queue replicas, the user must opt in.

    Contributed by @SimonUnge (AWS).

    GitHub issue: #8218

  • A revisited peer discovery implementation further reduces the probability of two or more sets of nodes forming separate clusters when all cluster nodes are created at the same time and boot in parallel.

    GitHub issue: #9797

  • Classic queue storage v2 (CQv2) has matured and is now recommended for all users.

    We recommend switching classic queues to CQv2 after all cluster nodes have been upgraded to 3.13.0, at first using policies, and then eventually using a setting in rabbitmq.conf. Upgrading classic queues to CQv2 at boot time using the configuration file setting can be potentially unsafe in environments where deprecated classic mirrored queues still exist.

    For new clusters, adopt CQv2 from the start by setting classic_queue.default_version in rabbitmq.conf:

    # only set this value for new clusters or after all nodes have been upgraded to 3.13
    classic_queue.default_version = 2
    
  • Non-mirrored classic queues: storage optimizations for larger (greater than 4 KiB) messages.

    GitHub issues: #6090, #8507

  • When a non-mirrored classic queue is declared, its placement node is now selected with less interaction with cluster peers, speeding up the process when some nodes have recently gone down.

    GitHub issue: #10102

  • A subsystem for marking features as deprecated.

    GitHub issue: #7390

  • Plugins can now register custom queue types, which means a plugin can provide its own queue type implementation.

    Contributed by @luos (Erlang Solutions).

    GitHub issues: #8834, #8927

  • Classic queue storage version can now be set via operator policies.

    Contributed by @SimonUnge (AWS).

    GitHub issue: #9541
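
    For example (the policy name and pattern are illustrative; queue-version is the policy key used for classic queue storage versions):

    rabbitmqctl set_operator_policy cq-version ".*" '{"queue-version": 2}' --apply-to queues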

  • channel_max_per_node limits how many channels, in total, a node allows clients to open.

    This limit is easier to reason about than the per-node connection limit multiplied by channel_max (a per-connection limit for channels).

    Contributed by @SimonUnge (AWS).

    GitHub issue: #10351

  • disk_free_limit.absolute and vm_memory_high_watermark.absolute now support more information units: Mi, Gi, TB, Ti, PB, Pi.

    In addition, the interpretation of some existing suffixes has changed:

    • K now means "kilobyte" and not "kibibyte"
    • M now means "megabyte" and not "mebibyte"
    • G now means "gigabyte" and not "gibibyte"

    There is no consensus on how these single-letter suffixes should be interpreted (as their power-of-2 or power-of-10 variants), so RabbitMQ has adopted the widely used convention popularized by Kubernetes.

    GitHub issues: #10310, #10348
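
    For example, under this convention (a sketch; the values are arbitrary):

    # rabbitmq.conf
    # power-of-10 unit: 3 GB = 3,000,000,000 bytes
    disk_free_limit.absolute = 3GB
    # power-of-2 unit: 2 Gi = 2 * 1024^3 bytes
    vm_memory_high_watermark.absolute = 2Gi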

  • Improved efficiency of definition imports in environments with a lot of virtual hosts.

    Contributed by @AyandaD.

    GitHub issue: #10320

  • When a consumer delivery timeout hits, a more informative message is now logged.

    Contributed by @rluvaton.

    GitHub issue: #10446

Bug Fixes

This release includes all bug fixes shipped in the 3.12.x series.

  • Feature flag discovery on a newly added node could discover an incomplete inventory of feature flags.

    GitHub issue: #8477

  • Feature flag discovery operations will now be retried multiple times in case of network failures.

    GitHub issue: #8491

  • Feature flag state is now persisted in a safer way during node shutdown.

    GitHub issue: #10279

  • Feature flag synchronization between nodes now avoids a potential race condition.

    GitHub issue: #10027

  • The state of node maintenance status across the cluster is now replicated. It previously was accessible to all nodes but not replicated.

    GitHub issue: #9005

CLI Tools

Deprecations

  • rabbitmqctl rename_cluster_node and rabbitmqctl update_cluster_nodes are now no-ops.

    They were not safe to use with quorum queues and streams, and are completely incompatible with Khepri.

    GitHub issue: #10369

Enhancements

  • rabbitmq-diagnostics cluster_status now responds significantly faster when some cluster nodes are not reachable.

    GitHub issue: #10101

  • rabbitmqctl list_deprecated_features is a new command that lists the deprecated features used on the target node.

    GitHub issues: #9901, #7390

Management Plugin

Enhancements

  • New API endpoint, GET /api/stream/{vhost}/{name}/tracking, can be used to track publisher and consumer offsets in a stream.

    GitHub issue: #9642
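
    For example (default credentials and management port assumed; the stream name is a placeholder, and the default virtual host is URL-encoded as %2F):

    curl -u guest:guest \
      'http://localhost:15672/api/stream/%2F/my-stream/tracking'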

  • Several rarely used queue metrics were removed to reduce inter-node data transfers and CPU burn during API response generation. The effects will be particularly pronounced for the GET /api/queues endpoint used without filtering or pagination, which can produce enormously large responses.

    A couple of relevant queue metrics or state fields were lifted to the top level.

    This is a potentially breaking change.

    Note that Prometheus is the recommended option for monitoring, not the management plugin's HTTP API.

    GitHub issues: #9437, #9578, #9633

Bug Fixes

  • GET /api/nodes/{name} failed with a 500 when called with a non-existent node name.

    GitHub issue: #10330

Stream Plugin

Enhancements

  • Support for (consumer) stream filtering.

    This allows consumers that are only interested in a subset of data in a stream to receive less data. Note that false positives are possible, so this feature should be accompanied by client library or application-level filtering.

    GitHub issue: #8207

  • Stream connections now support JWT (OAuth 2) token renewal. The renewal is client-initiated shortly before token expiry. Therefore, this feature requires stream protocol clients to be updated.

    GitHub issue: #9187

  • Stream connections are now aware of JWT (OAuth 2) token expiry.

    GitHub issue: #10292

Bug Fixes

  • Stream (replica) membership changes safety improvements.

    GitHub issue: #10331

  • Stream protocol connections now comply with the connection limit in the target virtual host.

    GitHub issue: #9946

MQTT Plugin

Enhancements

  • Support for MQTTv5 (with limitations).

    GitHub issues: #7263, #8681

  • MQTT clients that use QoS 0 can now reconnect more reliably when the node they were connected to fails.

    GitHub issue: #10203

  • Negative message acknowledgements are now propagated to MQTTv5 clients.

    GitHub issue: #9034

  • Potential incompatibility: mqtt.subscription_ttl configuration was replaced with mqtt.max_session_expiry_interval_seconds that targets MQTTv5.

    GitHub issue: #8846

AMQP 1.0 Plugin

Bug Fixes

  • During AMQP 1.0 to AMQP 0-9-1 conversion, the Correlation ID message property is now stored as x-correlation-id (instead of x-correlation) for values longer than 255 bytes.

    This is a potentially breaking change.

    GitHub issue: #8680

  • AMQP 1.0 connections are now throttled when the node hits a resource alarm.

    GitHub issue: #9953

OAuth 2 AuthN and AuthZ Backend Plugin

Enhancements

  • RabbitMQ nodes now allow for multiple OAuth 2 resources to be configured. Each resource can use a different identity provider (for example, one can be powered by Keycloak and another by Azure Active Directory).

    This allows for identity provider infrastructure changes (say, provider A is replaced with provider B over time) that do not affect RabbitMQ's ability to authenticate clients and authorize the operations they attempt to perform.

    GitHub issue: #10012

  • The plugin now performs discovery of certain properties for OpenID-compliant identity providers.

    GitHub issue: #10012

Peer Discovery AWS Plugin

Enhancements

  • It is now possible to override how a node's hostname is extracted from AWS API responses during peer discovery.

    This is done using cluster_formation.aws.hostname_path, a collection of keys that will be used to traverse the response and extract nested values from it. The list is comma-separated.

    The default value is a single-element list, privateDnsName.

    Contributed by @illotum (AWS).

    GitHub issue: #10097
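
    For example (the nested key names below are hypothetical and only illustrate the comma-separated traversal syntax):

    # rabbitmq.conf
    # default: use the top-level privateDnsName field
    cluster_formation.aws.hostname_path = privateDnsName
    # traverse nested response fields instead
    # cluster_formation.aws.hostname_path = outerKey, innerKey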

Dependency Changes

  • ra was upgraded to 2.9.1
  • osiris was updated to 1.7.2
  • cowboy was updated to 2.11.0

Source Code Archives

To obtain source code of the entire distribution, please download the archive named rabbitmq-server-3.13.0.tar.xz instead of the source tarball produced by GitHub.

v3.12.13


RabbitMQ 3.12.13 is a maintenance release in the 3.12.x release series. This series is covered by community support through June 30, 2024 and extended commercial support through Dec 31, 2024.

Please refer to the upgrade section from the 3.12.0 release notes if upgrading from a version prior to 3.12.0.

This release requires Erlang 25 and supports Erlang versions up to 26.2.x. RabbitMQ and Erlang/OTP Compatibility Matrix has more details on Erlang version requirements for RabbitMQ.

Minimum Supported Erlang Version

As of 3.12.0, RabbitMQ requires Erlang 25. Nodes will fail to start on older Erlang releases.

Users upgrading from 3.11.x (or older releases) on Erlang 25 to 3.12.x on Erlang 26 (both RabbitMQ and Erlang are upgraded at the same time) must consult the v3.12.0 release notes first.

Changes Worth Mentioning

Release notes can be found on GitHub at rabbitmq-server/release-notes.

Core Broker

Bug Fixes

  • When a channel is closed, its consumer metric samples will now be cleared differently depending on how many there are. In #9356, the code was over-optimized for the uncommon case of a very large number of consumers per channel, hurting the baseline case of one or a few consumers per channel.

    In part contributed by @SimonUnge (AWS).

    GitHub issue: #10478

CLI Tools

Enhancement

  • CLI tool startup time was reduced.

    GitHub issue: #10461

Bug Fixes

  • JSON output formatter now avoids ANSI escape sequences.

    Contributed by @ariel-anieli.

    GitHub issue: #8557

  • ANSI escape sequences are no longer used on Windows.

    Contributed by @ariel-anieli.

    GitHub issue: #2634

Stream Plugin

Bug Fixes

  • If a stream publisher cannot be set up, a clearer message will be logged.

    GitHub issue: #10524

Management Plugin

Bug Fixes

  • GET /api/nodes/{name} failed with a 500 when called with a non-existent node name.

    GitHub issue: #10330

Shovel Plugin

Bug Fixes

  • AMQP 1.0 Shovels will no longer set a delivery mode header that is not meaningful in AMQP 1.0.

    Contributed by @luos (Erlang Solutions).

    GitHub issue: #10503

Federation Plugin

Bug Fixes

  • Upstream node shutdown could produce a scary-looking exception in the log.

    GitHub issue: #10473

  • Exchange federation links could run into an exception.

    Contributed by @gomoripeti (CloudAMQP).

    GitHub issue: #10305

Dependency Changes

  • cowboy was updated to 2.11.0

Source Code Archives

To obtain source code of the entire distribution, please download the archive named rabbitmq-server-3.12.13.tar.xz instead of the source tarball produced by GitHub.

v3.13.0-rc.5


RabbitMQ 3.13.0-rc.5

RabbitMQ 3.13.0-rc.5 is a release candidate for a new feature release.

Highlights

This release includes several new features and optimizations.

The user-facing areas that have seen the biggest improvements in this release are

See Compatibility Notes below to learn about breaking or potentially breaking changes in this release.

Release Artifacts

RabbitMQ preview releases are distributed via GitHub.

Community Docker image is another installation option for previews. It is updated with a delay (usually a few days).

Erlang/OTP Compatibility Notes

This release requires Erlang 26.0 or later.

Provisioning Latest Erlang Releases explains what package repositories and tools can be used to provision latest patch versions of Erlang 26.x.

Upgrading to 3.13

Documentation guides on upgrades

See the Upgrading guide for documentation on upgrades and RabbitMQ change log for release notes of other releases.

Note that since 3.12.0 requires all feature flags to be enabled before upgrading, there is no upgrade path from 3.11.24 (or a later patch release) straight to 3.13.0.

Required Feature Flags

This release does not graduate any feature flags.

However, all users are highly encouraged to enable all feature flags before upgrading to this release from 3.12.x.

Mixed version cluster compatibility

RabbitMQ 3.13.0 nodes can run alongside 3.12.x nodes. 3.13.x-specific features can only be made available when all nodes in the cluster upgrade to 3.13.0 or a later patch release in the new series.

While operating in mixed version mode, some aspects of the system may not behave as expected. The list of known behavior changes is covered below. Once all nodes are upgraded to 3.13.0, these irregularities will go away.

Mixed version clusters are a mechanism that allows rolling upgrade and are not meant to be run for extended periods of time (no more than a few hours).

Recommended Post-upgrade Procedures

Switch Classic Queues to CQv2

We recommend switching classic queues to CQv2 after all cluster nodes have been upgraded, at first using policies, and then eventually using a setting in rabbitmq.conf. Upgrading classic queues to CQv2 at boot time using the configuration file setting can be potentially unsafe in environments where deprecated classic mirrored queues still exist.

For new clusters, adopting CQv2 from the start is highly recommended:

classic_queue.default_version = 2
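The policy-based first step mentioned above can be sketched with rabbitmqctl. The policy name cqv2 and the catch-all pattern are illustrative assumptions; queue-version is the policy key:

```shell
# Illustrative sketch: apply CQv2 to all classic queues in the default vhost.
# The policy name ("cqv2") and pattern (".*") are assumptions for this example.
rabbitmqctl set_policy cqv2 ".*" '{"queue-version": 2}' \
    --vhost "/" --apply-to queues
```

Once all queues have migrated, the policy can be replaced by the configuration file setting.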

Compatibility Notes

This release includes a few potentially breaking changes.

Minimum Supported Erlang Version

Starting with this release, RabbitMQ requires Erlang 26.0 or later versions. Nodes will fail to start on older Erlang releases.

Client Library Compatibility

Client libraries that were compatible with RabbitMQ 3.12.x will be compatible with 3.13.0. RabbitMQ Stream Protocol clients must be upgraded to use the stream filtering feature introduced in this release.

Consistency Model and Schema Modification Visibility Guarantees of Khepri and Mnesia

Khepri has an important difference from Mnesia when it comes to schema modifications such as queue or stream declarations, or binding declarations. These changes won't be noticeable with many workloads but can affect some, in particular, certain integration tests.

Consider two scenarios, A and B.

Scenario A

There is only one client. The client performs the following steps:

  1. It declares a queue Q
  2. It binds Q to an exchange X
  3. It publishes a message M to the exchange X
  4. It expects the message to be routed to queue Q
  5. It consumes the message

In this scenario, there should be no observable difference in behavior. Client's expectations will be met.

Scenario B

There are two clients, One and Two, connected to nodes R1 and R3, and using the same virtual host. Node R2 has no client connections.

Client One performs the following steps:

  1. It declares a queue Q
  2. It binds Q to an exchange X
  3. It gets a queue declaration confirmation back
  4. It notifies Client Two, or Client Two implicitly finds out, that it has finished the steps above (for example, in an integration test)
  5. Client Two publishes a message M to X
  6. Clients One and Two expect the message to be routed to Q

In this scenario, at step three Mnesia would return once all cluster nodes have committed the update. Khepri, however, returns once a majority of nodes, including the node handling Client One's operations, have committed it.

This majority may include nodes R1 and R2 but not node R3, meaning that message M, published by Client Two connected to node R3 in the above example, is not guaranteed to be routed.

Once all schema changes propagate to node R3, Client Two's subsequent publishes on node R3 will be guaranteed to be routed.

This is a trade-off of Raft-based systems, which consider a write successful once it has been accepted by a majority of nodes.

Workaround Strategies

To satisfy Client Two's expectations in scenario B, Khepri could perform consistent queries of bindings (involving a majority of replicas) when routing messages, but that would have a significant impact on the throughput of certain protocols (such as MQTT) and exchange/destination types (anything that resembles a topic exchange in AMQP 0-9-1).

Applications that use multiple connections and depend on a shared topology have several coping strategies.

If an application uses two or more connections to different nodes, it can declare its topology on boot and then inject a short pause (1-2 seconds) before proceeding with other operations.

Applications that rely on dynamic topologies can switch to a "static" set of exchanges and bindings.

Application components that do not need a shared topology can each configure their own queues/streams/bindings.

Test suites that use multiple connections to different nodes can choose to use just one connection or connect to the same node, or inject a pause, or await a certain condition that indicates that the topology is in place.
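The "await a certain condition" strategy can be implemented with a small generic polling helper. This is an illustrative sketch, not a RabbitMQ API; the check callable is an assumption (for example, a passive queue.declare that succeeds once the queue is visible on the node in question):

```python
import time

def await_condition(check, timeout=5.0, interval=0.1):
    """Poll check() until it returns True or the timeout elapses.

    check: a zero-argument callable returning a boolean, e.g. one that
    performs a passive queue.declare and reports whether the queue is
    already visible on the node this client is connected to.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False  # topology did not become visible in time
```

A test suite would call such a helper after Client One reports completion and before Client Two publishes.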

Management Plugin and HTTP API

The GET /api/queues HTTP API endpoint has dropped several rarely used metrics, resulting in a 25% traffic saving.

MQTT Plugin

mqtt.subscription_ttl (in milliseconds) configuration setting was replaced with mqtt.max_session_expiry_interval_seconds (in seconds). A 3.13 RabbitMQ node will fail to boot if the old configuration setting is set. For example, if you set mqtt.subscription_ttl = 3600000 (1 hour) prior to 3.13, replace that setting with mqtt.max_session_expiry_interval_seconds = 3600 (1 hour) in 3.13.
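In rabbitmq.conf terms, the one-hour example above amounts to the following change (the old key must be removed entirely for a 3.13 node to boot):

```ini
# Before 3.13 (milliseconds): remove this line
# mqtt.subscription_ttl = 3600000

# 3.13 and later (seconds):
mqtt.max_session_expiry_interval_seconds = 3600
```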

rabbitmqctl node_health_check is Now a No-Op

rabbitmqctl node_health_check has been deprecated for over three years and is now a no-op (it does nothing).

See the Health Checks section in the monitoring guide to find out what modern alternatives are available.

openSUSE Leap Package is not Provided

An openSUSE Leap package will not be provided with this release of RabbitMQ.

This release requires Erlang 26 and there is an Erlang 26 package available from Erlang Factory but the package depends on glibc 2.34, and all currently available openSUSE Leap releases (up to 15.5) ship with 2.31 at most.

Team RabbitMQ would like to resume building an openSUSE Leap package when a Leap 15.5-compatible Erlang 26 package becomes publicly available.

Getting Help

Any questions about this release, upgrades or RabbitMQ in general are welcome in GitHub Discussions or on our community Discord.

Changes Worth Mentioning

Release notes are kept under rabbitmq-server/release-notes.

Core Server

Enhancements

  • Khepri now can be used as an alternative schema data store in RabbitMQ, by enabling a feature flag:

    rabbitmqctl enable_feature_flag khepri_db
    

    In practical terms this means that it will be possible to swap Mnesia for a Raft-based data store that will predictably recover from network partitions and node failures, the same way quorum queues and streams already do. At the same time, this means that RabbitMQ clusters now must have a majority of nodes online at all times, or all client operations will be refused.

    Like quorum queues and streams, Khepri uses RabbitMQ's Raft implementation under the hood. With Khepri enabled, all key modern features of RabbitMQ will use the same fundamental approach to recovery from failures, relying on a library that passes a Jepsen test suite.

    Team RabbitMQ intends to make Khepri the default schema database starting with RabbitMQ 4.0.

    GitHub issue: #7206

  • Messages are now internally stored using a new common, heavily AMQP 1.0-influenced container format. This is a major step towards a protocol-agnostic core: a common format that encapsulates a sum of data types used by the protocols RabbitMQ supports, plus annotations for routing, dead-lettering state, and other purposes.

    AMQP 1.0, AMQP 0-9-1, MQTT and STOMP have or will adopt this internal representation in upcoming releases. RabbitMQ Stream protocol already uses the AMQP 1.0 message container structure internally.

    This common internal format will allow for more correct and potentially more efficient multi-protocol support in RabbitMQ, and will allow most cross-protocol translation rough edges to be smoothed out.

    GitHub issue: #5077

  • Target quorum queue replica state is now continuously reconciled.

    When the number of online replicas of a quorum queue goes below (or above) its target, new replicas will be automatically placed if enough cluster nodes are available. This is a more automatic version of how quorum queue replicas were originally grown.

    For automatic shrinking of queue replicas, the user must opt in.

    Contributed by @SimonUnge (AWS).

    GitHub issue: #8218

  • Revisited peer discovery implementation that further reduces the probability of two or more sets of nodes forming separate clusters when all cluster nodes are created at the same time and boot in parallel.

    GitHub issue: #9797

  • Classic queue storage v2 (CQv2) has matured and is now recommended for all users.

    We recommend switching classic queues to CQv2 after all cluster nodes have been upgraded to 3.13.0, at first using policies, and then eventually using a setting in rabbitmq.conf. Upgrading classic queues to CQv2 at boot time using the configuration file setting can be potentially unsafe in environments where deprecated classic mirrored queues still exist.

    For new clusters, adopt CQv2 from the start by setting classic_queue.default_version in rabbitmq.conf:

    # only set this value for new clusters or after all nodes have been upgraded to 3.13
    classic_queue.default_version = 2
    
  • Non-mirrored classic queues: optimizations of storage for larger (greater than 4 kiB) messages.

    GitHub issues: #6090, #8507

  • When a non-mirrored classic queue is declared, its placement node is now selected with less interaction with cluster peers, speeding up the process when some nodes have recently gone down.

    GitHub issue: #10102

  • A subsystem for marking features as deprecated.

    GitHub issue: #7390

  • Plugins can now register custom queue types, which means a custom queue type can be provided by a plugin.

    Contributed by @luos (Erlang Solutions).

    GitHub issues: #8834, #8927

  • Classic queue storage version can now be set via operator policies.

    Contributed by @SimonUnge (AWS).

    GitHub issue: #9541

  • channel_max_per_node allows limiting how many channels a node would allow clients to open, in total.

    This limit is easier to reason about than the per-node connection limit multiplied by channel_max (a per-connection limit for channels).

    Contributed by @SimonUnge (AWS).

    GitHub issue: #10351
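A minimal sketch of the two limits side by side in rabbitmq.conf; the numbers are illustrative assumptions, not recommendations:

```ini
# Total channels a node will allow across all client connections (new in 3.13):
channel_max_per_node = 5000

# Existing per-connection channel limit, applied independently:
channel_max = 128
```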

  • disk_free_limit.absolute and vm_memory_high_watermark.absolute now support more information units: Mi, Gi, TB, Ti, PB, Pi.

    In addition, there were some renaming of the existing keys:

    • K now means "kilobyte" and not "kibibyte"
    • M now means "megabyte" and not "mebibyte"
    • G now means "gigabyte" and not "gibibyte"

    There is no consensus on how these single-letter suffixes should be interpreted (as powers of 2 or powers of 10), so RabbitMQ has adopted a widely used convention from Kubernetes.

    GitHub issues: #10310, #10348
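To illustrate the convention described above (a bare upper-case letter is a decimal unit, an i suffix marks the binary variant), here is a small hypothetical parser; it is not RabbitMQ's actual implementation:

```python
# Hypothetical illustration of the Kubernetes-style unit convention;
# not RabbitMQ's actual configuration parser.
_ORDER = "KMGTP"  # kilo/mega/giga/tera/peta, in increasing powers

def parse_units(value: str) -> int:
    """Parse strings like '5K', '2Gi' or '100MB' into a number of bytes."""
    digits = ""
    i = 0
    while i < len(value) and value[i].isdigit():
        digits += value[i]
        i += 1
    suffix = value[i:]
    if not suffix:
        return int(digits)
    power = _ORDER.index(suffix[0]) + 1
    binary = suffix[1:2] == "i"  # 'Ki', 'Mi', ... are powers of 1024
    return int(digits) * (1024 ** power if binary else 1000 ** power)
```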

  • Improved efficiency of definition imports in environments with a lot of virtual hosts.

    Contributed by @AyandaD.

    GitHub issue: #10320

  • When a consumer delivery timeout is hit, a more informative message is now logged.

    Contributed by @rluvaton.

    GitHub issue: #10446

Bug Fixes

This release includes all bug fixes shipped in the 3.12.x series.

  • Feature flag discovery on a newly added node could discover an incomplete inventory of feature flags.

    GitHub issue: #8477

  • Feature flag discovery operations will now be retried multiple times in case of network failures.

    GitHub issue: #8491

  • Feature flag state is now persisted in a safer way during node shutdown.

    GitHub issue: #10279

  • Feature flag synchronization between nodes now avoids a potential race condition.

    GitHub issue: #10027

  • The state of node maintenance status across the cluster is now replicated. It previously was accessible to all nodes but not replicated.

    GitHub issue: #9005

CLI Tools

Deprecations

  • rabbitmqctl rename_cluster_node and rabbitmqctl update_cluster_nodes are now no-ops.

    They were not safe to use with quorum queues and streams, and are completely incompatible with Khepri.

    GitHub issue: #10369

Enhancements

  • rabbitmq-diagnostics cluster_status now responds significantly faster when some cluster nodes are not reachable.

    GitHub issue: #10101

  • rabbitmqctl list_deprecated_features is a new command that lists deprecated features that are in use on the target node.

    GitHub issues: #9901, #7390

Management Plugin

Enhancements

  • New API endpoint, GET /api/stream/{vhost}/{name}/tracking, can be used to track publisher and consumer offsets in a stream.

    GitHub issue: #9642
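A hypothetical invocation of the new endpoint with curl; the vhost / (URL-encoded as %2F), the stream name events, and the default guest credentials are assumptions for this example:

```shell
curl -s -u guest:guest \
    http://localhost:15672/api/stream/%2F/events/tracking
```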

  • Several rarely used queue metrics were removed to reduce inter-node data transfers and CPU burn during API response generation. The effects will be particularly pronounced for the GET /api/queues endpoint used without filtering or pagination, which can produce enormously large responses.

    A couple of relevant queue metrics or state fields were lifted to the top level.

    This is a potentially breaking change.

    Note that Prometheus is the recommended option for monitoring, not the management plugin's HTTP API.

    GitHub issues: #9437, #9578, #9633

Bug Fixes

  • GET /api/nodes/{name} failed with a 500 when called with a non-existed node name.

    GitHub issue: #10330

Stream Plugin

Enhancements

  • Support for (consumer) stream filtering.

    This allows consumers that are only interested in a subset of data in a stream to receive less data. Note that false positives are possible, so this feature should be accompanied by client library or application-level filtering.

    GitHub issue: #8207
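Since the server-side filter can deliver false positives, applications should re-check each received message exactly. A broker-agnostic sketch, where the message shape and the region filter key are assumptions:

```python
def post_filter(messages, wanted, key="region"):
    """Drop false positives that slipped through the server-side stream filter.

    messages: an iterable of dicts of application properties; the server-side
    filter may deliver a superset of matching messages, so re-check exactly.
    """
    return [m for m in messages if m.get(key) == wanted]
```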

  • Stream connections now support JWT (OAuth 2) token renewal. The renewal is client-initiated shortly before token expiry. Therefore, this feature requires stream protocol clients to be updated.

    GitHub issue: #9187

  • Stream connections are now aware of JWT (OAuth 2) token expiry.

    GitHub issue: #10292

Bug Fixes

  • Stream (replica) membership changes safety improvements.

    GitHub issue: #10331

  • Stream protocol connections now comply with the connection limit of the target virtual host.

    GitHub issue: #9946

MQTT Plugin

Enhancements

  • Support for MQTTv5 (with limitations).

    GitHub issues: #7263, #8681

  • MQTT clients that use QoS 0 now can reconnect more reliably when the node they were connected to fails.

    GitHub issue: #10203

  • Negative message acknowledgements are now propagated to MQTTv5 clients.

    GitHub issue: #9034

  • Potential incompatibility: mqtt.subscription_ttl configuration was replaced with mqtt.max_session_expiry_interval_seconds that targets MQTTv5.

    GitHub issue: #8846

AMQP 1.0 Plugin

Bug Fixes

  • During AMQP 1.0 to AMQP 0-9-1 conversion, the Correlation ID message property is now stored as x-correlation-id (instead of x-correlation) for values longer than 255 bytes.

    This is a potentially breaking change.

    GitHub issue: #8680

  • AMQP 1.0 connections are now throttled when the node hits a resource alarm.

    GitHub issue: #9953

OAuth 2 AuthN and AuthZ Backend Plugin

Enhancements

  • RabbitMQ nodes now allow for multiple OAuth 2 resources to be configured. Each resource can use a different identity provider (for example, one can be powered by Keycloak and another by Azure Active Directory).

    This allows for identity provider infrastructure changes (say, provider A is replaced with provider B over time) that do not affect RabbitMQ's ability to authenticate clients and authorize the operations they attempt to perform.

    GitHub issue: #10012

  • The plugin now performs discovery of certain properties for OpenID-compliant identity providers.

    GitHub issue: #10012

Peer Discovery AWS Plugin

Enhancements

  • It is now possible to override how a node's hostname is extracted from AWS API responses during peer discovery.

    This is done using cluster_formation.aws.hostname_path, a collection of keys that will be used to traverse the response and extract nested values from it. The list is comma-separated.

    The default value is a single value list, privateDnsName.

    Contributed by @illotum (AWS).

    GitHub issue: #10097
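For example, in rabbitmq.conf (the nested instance key below is a hypothetical response shape, used only to show a comma-separated path):

```ini
# Default: read the top-level privateDnsName field
# cluster_formation.aws.hostname_path = privateDnsName

# Hypothetical nested path: read response["instance"]["privateDnsName"]
cluster_formation.aws.hostname_path = instance,privateDnsName
```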

Dependency Changes

  • ra was upgraded to 2.9.1
  • osiris was updated to 1.7.2
  • cowboy was updated to 2.11.0

Source Code Archives

To obtain source code of the entire distribution, please download the archive named rabbitmq-server-3.13.0.tar.xz instead of the source tarball produced by GitHub.

v3.12.12

4 months ago

RabbitMQ 3.12.12 is a maintenance release in the 3.12.x release series.

Please refer to the upgrade section from the 3.12.0 release notes if upgrading from a version prior to 3.12.0.

This release requires Erlang 25 and supports Erlang versions up to 26.2.x. RabbitMQ and Erlang/OTP Compatibility Matrix has more details on Erlang version requirements for RabbitMQ.

Minimum Supported Erlang Version

As of 3.12.0, RabbitMQ requires Erlang 25. Nodes will fail to start on older Erlang releases.

Users upgrading from 3.11.x (or older releases) on Erlang 25 to 3.12.x on Erlang 26 (both RabbitMQ and Erlang are upgraded at the same time) must consult the v3.12.0 release notes first.

Changes Worth Mentioning

Release notes can be found on GitHub at rabbitmq-server/release-notes.

Core Broker

Bug Fixes

  • Environments with a lot of quorum queues could experience a large Erlang process build-up. The build-up was temporary but with a sufficiently large number of quorum queues it could last until the next round of periodic operations, making it permanent and depriving the node of CPU resources.

    GitHub issue: #10242

  • RabbitMQ core failed to propagate some authentication and authorization context, for example, the MQTT client ID in the case of MQTT connections, to authN and authZ backends. This was not intentional.

    GitHub issue: #10230

  • Nodes now take more precautions about persisting feature flag state (specifically the effects of in-flight changes) during node shutdown.

    GitHub issue: #10279

Enhancements

  • Simplified some type specs.

    Contributed by @ariel-anieli.

    GitHub issue: #10228

MQTT Plugin

Bug Fixes

  • Connection recovery of QoS 0 consumers (subscribers) could fail if they were previously connected to a failed node.

    GitHub issue: #10252

CLI Tools

Bug Fixes

  • Since #10131 (shipped in 3.12.11), some CLI commands in certain scenarios could fail to accept input via standard input.

    GitHub issues: #10270, #10258

AWS Peer Discovery Plugin

Enhancements

  • Updated some type specs.

    Contributed by @ariel-anieli.

    GitHub issue: #10226

Dependency Upgrades

None in this release.

Source Code Archives

To obtain source code of the entire distribution, please download the archive named rabbitmq-server-3.12.12.tar.xz instead of the source tarball produced by GitHub.

v3.13.0-rc.4

4 months ago

RabbitMQ 3.13.0-rc.4

RabbitMQ 3.13.0-rc.4 is a candidate of a new feature release.

Highlights

This release includes several new features and optimizations.

The user-facing areas that have seen the biggest improvements in this release are

See Compatibility Notes below to learn about breaking or potentially breaking changes in this release.

Release Artifacts

RabbitMQ preview releases are distributed via GitHub.

Community Docker image is another installation option for previews. It is updated with a delay (usually a few days).

Erlang/OTP Compatibility Notes

This release requires Erlang 26.0 or later.

Provisioning Latest Erlang Releases explains what package repositories and tools can be used to provision latest patch versions of Erlang 26.x.

Upgrading to 3.13

Documentation guides on upgrades

See the Upgrading guide for documentation on upgrades and RabbitMQ change log for release notes of other releases.

Note that since 3.12.0 requires all feature flags to be enabled before upgrading, there is no upgrade path from 3.11.24 (or a later patch release) straight to 3.13.0.

Required Feature Flags

This release does not graduate any feature flags.

However, all users are highly encouraged to enable all feature flags before upgrading to this release from 3.12.x.

Mixed version cluster compatibility

RabbitMQ 3.13.0 nodes can run alongside 3.12.x nodes. 3.13.x-specific features can only be made available when all nodes in the cluster upgrade to 3.13.0 or a later patch release in the new series.

While operating in mixed version mode, some aspects of the system may not behave as expected. The list of known behavior changes is covered below. Once all nodes are upgraded to 3.13.0, these irregularities will go away.

Mixed version clusters are a mechanism that allows rolling upgrade and are not meant to be run for extended periods of time (no more than a few hours).

Compatibility Notes

This release includes a few potentially breaking changes.

Minimum Supported Erlang Version

Starting with this release, RabbitMQ requires Erlang 26.0 or later versions. Nodes will fail to start on older Erlang releases.

Client Library Compatibility

Client libraries that were compatible with RabbitMQ 3.12.x will be compatible with 3.13.0. RabbitMQ Stream Protocol clients must be upgraded to use the stream filtering feature introduced in this release.

Consistency Model and Schema Modification Visibility Guarantees of Khepri and Mnesia

Khepri has an important difference from Mnesia when it comes to schema modifications such as queue or stream declarations, or binding declarations. These changes won't be noticeable with many workloads but can affect some, in particular, certain integration tests.

Consider two scenarios, A and B.

Scenario A

There is only one client. The client performs the following steps:

  1. It declares a queue Q
  2. It binds Q to an exchange X
  3. It publishes a message M to the exchange X
  4. It expects the message to be routed to queue Q
  5. It consumes the message

In this scenario, there should be no observable difference in behavior. Client's expectations will be met.

Scenario B

There are two clients, One and Two, connected to nodes R1 and R3, and using the same virtual host. Node R2 has no client connections.

Client One performs the following steps:

  1. It declares a queue Q
  2. It binds Q to an exchange X
  3. It gets a queue declaration confirmation back
  4. It notifies Client Two, or Client Two implicitly finds out, that it has finished the steps above (for example, in an integration test)
  5. Client Two publishes a message M to X
  6. Clients One and Two expect the message to be routed to Q

In this scenario, at step three Mnesia would return once all cluster nodes have committed the update. Khepri, however, returns once a majority of nodes, including the node handling Client One's operations, have committed it.

This majority may include nodes R1 and R2 but not node R3, meaning that message M, published by Client Two connected to node R3 in the above example, is not guaranteed to be routed.

Once all schema changes propagate to node R3, Client Two's subsequent publishes on node R3 will be guaranteed to be routed.

This is a trade-off of Raft-based systems, which consider a write successful once it has been accepted by a majority of nodes.

Workaround Strategies

To satisfy Client Two's expectations in scenario B, Khepri could perform consistent queries of bindings (involving a majority of replicas) when routing messages, but that would have a significant impact on the throughput of certain protocols (such as MQTT) and exchange/destination types (anything that resembles a topic exchange in AMQP 0-9-1).

Applications that use multiple connections and depend on a shared topology have several coping strategies.

If an application uses two or more connections to different nodes, it can declare its topology on boot and then inject a short pause (1-2 seconds) before proceeding with other operations.

Applications that rely on dynamic topologies can switch to a "static" set of exchanges and bindings.

Application components that do not need a shared topology can each configure their own queues/streams/bindings.

Test suites that use multiple connections to different nodes can choose to use just one connection or connect to the same node, or inject a pause, or await a certain condition that indicates that the topology is in place.

Management Plugin and HTTP API

The GET /api/queues HTTP API endpoint has dropped several rarely used metrics, resulting in a 25% traffic saving.

MQTT Plugin

mqtt.subscription_ttl (in milliseconds) configuration setting was replaced with mqtt.max_session_expiry_interval_seconds (in seconds). A 3.13 RabbitMQ node will fail to boot if the old configuration setting is set. For example, if you set mqtt.subscription_ttl = 3600000 (1 hour) prior to 3.13, replace that setting with mqtt.max_session_expiry_interval_seconds = 3600 (1 hour) in 3.13.

rabbitmqctl node_health_check is Now a No-Op

rabbitmqctl node_health_check has been deprecated for over three years and is now a no-op (it does nothing).

See the Health Checks section in the monitoring guide to find out what modern alternatives are available.

openSUSE Leap Package is not Provided

An openSUSE Leap package will not be provided with this release of RabbitMQ.

This release requires Erlang 26 and there is an Erlang 26 package available from Erlang Factory but the package depends on glibc 2.34, and all currently available openSUSE Leap releases (up to 15.5) ship with 2.31 at most.

Team RabbitMQ would like to resume building an openSUSE Leap package when a Leap 15.5-compatible Erlang 26 package becomes publicly available.

Getting Help

Any questions about this release, upgrades or RabbitMQ in general are welcome in GitHub Discussions or on our community Discord.

Changes Worth Mentioning

Release notes are kept under rabbitmq-server/release-notes.

Core Server

Enhancements

  • Khepri now can be used as an alternative schema data store in RabbitMQ, by enabling a feature flag:

    rabbitmqctl enable_feature_flag khepri_db
    

    In practical terms this means that it will be possible to swap Mnesia for a Raft-based data store that will predictably recover from network partitions and node failures, the same way quorum queues and streams already do. At the same time, this means that RabbitMQ clusters now must have a majority of nodes online at all times, or all client operations will be refused.

    Like quorum queues and streams, Khepri uses RabbitMQ's Raft implementation under the hood. With Khepri enabled, all key modern features of RabbitMQ will use the same fundamental approach to recovery from failures, relying on a library that passes a Jepsen test suite.

    Team RabbitMQ intends to make Khepri the default schema database starting with RabbitMQ 4.0.

    GitHub issue: #7206

  • Messages are now internally stored using a new common, heavily AMQP 1.0-influenced container format. This is a major step towards a protocol-agnostic core: a common format that encapsulates a sum of data types used by the protocols RabbitMQ supports, plus annotations for routing, dead-lettering state, and other purposes.

    AMQP 1.0, AMQP 0-9-1, MQTT and STOMP have or will adopt this internal representation in upcoming releases. RabbitMQ Stream protocol already uses the AMQP 1.0 message container structure internally.

    This common internal format will allow for more correct and potentially more efficient multi-protocol support in RabbitMQ, and will allow most cross-protocol translation rough edges to be smoothed out.

    GitHub issue: #5077

  • Target quorum queue replica state is now continuously reconciled.

    When the number of online replicas of a quorum queue goes below (or above) its target, new replicas will be automatically placed if enough cluster nodes are available. This is a more automatic version of how quorum queue replicas were originally grown.

    For automatic shrinking of queue replicas, the user must opt in.

    Contributed by @SimonUnge (AWS).

    GitHub issue: #8218

  • Reduced memory footprint, improved memory use predictability and throughput of classic queues (version 2, or CQv2). This particularly benefits classic queues with longer backlogs.

    Classic queue v2 (CQv2) storage implementation is now the default. It is possible to switch the default back to CQv1 using rabbitmq.conf:

    # uses CQv1 by default
    classic_queue.default_version = 1
    

    Individual queues can be declared with a different version by passing the x-queue-version argument and/or through a queue-version policy.

    GitHub issue: #8308

  • Revisited peer discovery implementation that further reduces the probability of two or more sets of nodes forming separate clusters when all cluster nodes are created at the same time and boot in parallel.

    GitHub issue: #9797

  • Non-mirrored classic queues: optimizations of storage for larger (greater than 4 kiB) messages.

    GitHub issues: #6090, #8507

  • A subsystem for marking features as deprecated.

    GitHub issue: #7390

  • Plugins can now register custom queue types, which means a custom queue type can be provided by a plugin.

    Contributed by @luos (Erlang Solutions).

    GitHub issues: #8834, #8927

Bug Fixes

This release includes all bug fixes shipped in the 3.12.x series.

  • Feature flag discovery on a newly added node could discover an incomplete inventory of feature flags.

    GitHub issue: #8477

  • Feature flag discovery operations will now be retried multiple times in case of network failures.

    GitHub issue: #8491

  • The state of node maintenance status across the cluster is now replicated. It previously was accessible to all nodes but not replicated.

    GitHub issue: #9005

Management Plugin

Enhancements

  • New API endpoint, GET /api/stream/{vhost}/{name}/tracking, can be used to track publisher and consumer offsets in a stream.

    GitHub issue: #9642

  • Several rarely used queue metrics were removed to reduce inter-node data transfers and CPU burn during API response generation. The effects will be particularly pronounced for the GET /api/queues endpoint used without filtering or pagination, which can produce enormously large responses.

    A couple of relevant queue metrics or state fields were lifted to the top level.

    This is a potentially breaking change.

    Note that Prometheus is the recommended option for monitoring, not the management plugin's HTTP API.

    GitHub issues: #9437, #9578, #9633

Stream Plugin

Enhancements

  • Support for (consumer) stream filtering.

    This allows consumers that are only interested in a subset of data in a stream to receive less data. Note that false positives are possible, so this feature should be accompanied by client library or application-level filtering.

    GitHub issue: #8207

MQTT Plugin

Enhancements

  • Support for MQTTv5 (with limitations).

    GitHub issues: #7263, #8681

  • Negative message acknowledgements are now propagated to MQTTv5 clients.

    GitHub issue: #9034

  • Potential incompatibility: the mqtt.subscription_ttl configuration setting was replaced with mqtt.max_session_expiry_interval_seconds, which targets MQTTv5.

    GitHub issue: #8846

AMQP 1.0 Plugin

Bug Fixes

  • During AMQP 1.0 to AMQP 0-9-1 conversion, the Correlation ID message property is now stored as x-correlation-id (instead of x-correlation) for values longer than 255 bytes.

    This is a potentially breaking change.

    GitHub issue: #8680
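As a rough sketch of the conversion rule described above (not the broker's actual Erlang implementation): a correlation ID that fits the AMQP 0-9-1 255-byte shortstr limit maps onto the correlation_id property, while longer values are stored in an x-correlation-id header.

```python
def convert_correlation_id(value: bytes) -> dict:
    """Illustrative sketch of the 3.13 behavior: values within the
    AMQP 0-9-1 shortstr limit (255 bytes) fit the correlation_id
    property; longer values go into an x-correlation-id header."""
    if len(value) <= 255:
        return {"correlation_id": value}
    return {"headers": {"x-correlation-id": value}}
```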

Dependency Changes

  • ra was upgraded to 2.7.1
  • osiris was upgraded to 1.7.2

Source Code Archives

To obtain source code of the entire distribution, please download the archive named rabbitmq-server-3.13.0.tar.xz instead of the source tarball produced by GitHub.

v3.12.11

4 months ago

RabbitMQ 3.12.11 is a maintenance release in the 3.12.x release series.

Please refer to the upgrade section from the 3.12.0 release notes if upgrading from a version prior to 3.12.0.

This release requires Erlang 25 and supports Erlang versions up to 26.1.x. RabbitMQ and Erlang/OTP Compatibility Matrix has more details on Erlang version requirements for RabbitMQ.

Minimum Supported Erlang Version

As of 3.12.0, RabbitMQ requires Erlang 25. Nodes will fail to start on older Erlang releases.

Users upgrading from 3.11.x (or older releases) on Erlang 25 to 3.12.x on Erlang 26 (both RabbitMQ and Erlang are upgraded at the same time) must consult the v3.12.0 release notes first.

Changes Worth Mentioning

Release notes can be found on GitHub at rabbitmq-server/release-notes.

Core Broker

Bug Fixes

  • Declaring a quorum queue while one of the cluster nodes was down could trigger connection exceptions.

    GitHub issue: #10007

  • Avoids a rare exception that could stop TCP socket writes on a client connection.

    GitHub issues: #9991, #9803

  • queue_deleted and queue_created internal events now include the queue type as a module name instead of the value classic, which was inconsistent with the other queue and stream types.

    GitHub issue: #10142

Enhancements

  • Definition files that are virtual host-specific cannot be imported on boot. Such files will now be detected early and the import process will terminate after logging a more informative message.

    Previously the import process would run into an obscure exception.

    GitHub issues: #10068, #10085

AMQP 1.0 Plugin

Bug Fixes

  • Several AMQP 1.0 application properties are now more correctly converted to AMQP 0-9-1 headers by cross-protocol Shovels.

    The priority property now populates an AMQP 1.0 header with the same name, per AMQP 1.0 spec.

    This is a potentially breaking change.

    Contributed by @luos (Erlang Solutions).

    GitHub issues: #10037, #7508

Prometheus Plugin

Enhancements

  • Metric label values now escape certain non-ASCII characters.

    Contributed by @gomoripeti (CloudAMQP).

    GitHub issue: #10196
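The exact character set handled by the plugin is described in the linked issue; as general background, the Prometheus text exposition format requires backslash, double-quote and newline to be escaped in label values. A minimal sketch:

```python
def escape_label_value(value: str) -> str:
    # The Prometheus text exposition format escapes backslash,
    # double-quote and newline inside quoted label values.
    return (value.replace("\\", "\\\\")
                 .replace('"', '\\"')
                 .replace("\n", "\\n"))

print(escape_label_value('queue "orders"'))
```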

MQTT Plugin

Bug Fixes

  • Avoids an exception when an MQTT client that used a QoS 0 subscription reconnects and its original connection node is down.

    GitHub issue: #10205

  • Avoids an exception when an MQTT client connection was force-closed via the HTTP API.

    GitHub issue: #10140

CLI Tools

Bug Fixes

  • Certain CLI commands could not be run in a shell script loop, unless the script explicitly redirected standard input.

    GitHub issue: #10131

Enhancements

  • rabbitmq-diagnostics cluster_status now responds much quicker when a cluster node has gone down, was shut down, or has otherwise become unreachable by the rest of the cluster.

    GitHub issue: #10126

Management Plugin

Bug Fixes

  • Reverted a change to DELETE /api/queues/{vhost}/{name} that allowed removal of exclusive queues and introduced unexpected side effects.

    GitHub issue: #10178

  • DELETE /api/policies/{vhost}/{policy} returned a 500 response instead of a 404 one when the target virtual host did not exist.

    GitHub issue: #9983

  • Avoid log noise when an HTTP API request is issued against a booting or very freshly booted node.

    Contributed by @gomoripeti (CloudAMQP).

    GitHub issue: #10187

Enhancements

  • HTTP API endpoints that involve contacting multiple nodes now respond much quicker when a cluster node has gone down, was shut down, or has otherwise become unreachable by the rest of the cluster.

    GitHub issue: #10123

  • Definitions exported for a single virtual host cannot be imported at node boot time. Such files are now detected early, a clear message is logged, and the node boot process terminates immediately.

    GitHub issues: #10068, #10072

AWS Peer Discovery Plugin

Enhancements

  • Type spec and test corrections.

    Contributed by @illotum (AWS).

    GitHub issue: #10134

Dependency Upgrades

  • osiris was updated to 1.7.2

Source Code Archives

To obtain source code of the entire distribution, please download the archive named rabbitmq-server-3.12.11.tar.xz instead of the source tarball produced by GitHub.

v3.11.28

4 months ago

RabbitMQ 3.11.28 is a maintenance release in the 3.11.x release series. This release series goes out of community support on Dec 31, 2023.

Please refer to the upgrade section from v3.11.0 release notes if upgrading from a version prior to 3.11.0.

This release requires Erlang 25 and supports Erlang versions up to 25.3.x. RabbitMQ and Erlang/OTP Compatibility Matrix has more details on Erlang version requirements for RabbitMQ.

Minimum Supported Erlang Version

As of 3.11.0, RabbitMQ requires Erlang 25. Nodes will fail to start on older Erlang releases.

Erlang 25 as our new baseline means much improved performance on ARM64 architectures, profiling with flame graphs across all architectures, and the most recent TLS 1.3 implementation available to all RabbitMQ 3.11 users.

Changes Worth Mentioning

Release notes can be found on GitHub at rabbitmq-server/release-notes.

Prometheus Plugin

Enhancements

  • Metric label values now escape certain non-ASCII characters.

    Contributed by @gomoripeti (CloudAMQP).

    GitHub issue: #10196

Management Plugin

Bug Fixes

  • Reverted a change to DELETE /api/queues/{vhost}/{name} that allowed removal of exclusive queues and introduced unexpected side effects.

    GitHub issue: #10189

  • Avoid log noise when an HTTP API request is issued against a booting or very freshly booted node.

    Contributed by @gomoripeti (CloudAMQP).

    GitHub issue: #10183

AWS Peer Discovery Plugin

Enhancements

  • Type spec and test corrections.

    Contributed by @illotum (AWS).

    GitHub issue: #10133

Dependency Upgrades

None in this release.

Source Code Archives

To obtain source code of the entire distribution, please download the archive named rabbitmq-server-3.11.28.tar.xz instead of the source tarball produced by GitHub.

v3.13.0-rc.3

4 months ago

RabbitMQ 3.13.0-rc.3

RabbitMQ 3.13.0-rc.3 is a release candidate for a new feature release.

Highlights

This release includes several new features and optimizations.

The user-facing areas that have seen the biggest improvements in this release are Khepri as an alternative metadata store, support for MQTTv5, stream filtering, and classic queue v2 (CQv2) storage.

See Compatibility Notes below to learn about breaking or potentially breaking changes in this release.

Release Artifacts

RabbitMQ preview releases are distributed via GitHub.

Community Docker image is another installation option for previews. It is updated with a delay (usually a few days).

Erlang/OTP Compatibility Notes

This release requires Erlang 26.0 or later.

Provisioning Latest Erlang Releases explains what package repositories and tools can be used to provision latest patch versions of Erlang 26.x.

Upgrading to 3.13

Documentation guides on upgrades

See the Upgrading guide for documentation on upgrades and RabbitMQ change log for release notes of other releases.

Note that since 3.12.0 requires all feature flags to be enabled before upgrading, there is no upgrade path from 3.11.24 (or a later patch release) straight to 3.13.0.

Required Feature Flags

This release does not graduate any feature flags.

However, all users are highly encouraged to enable all feature flags before upgrading to this release from 3.12.x.

Mixed version cluster compatibility

RabbitMQ 3.13.0 nodes can run alongside 3.12.x nodes. 3.13.x-specific features can only be made available when all nodes in the cluster upgrade to 3.13.0 or a later patch release in the new series.

While operating in mixed version mode, some aspects of the system may not behave as expected. The list of known behavior changes is covered below. Once all nodes are upgraded to 3.13.0, these irregularities will go away.

Mixed version clusters are a mechanism that allows rolling upgrade and are not meant to be run for extended periods of time (no more than a few hours).

Compatibility Notes

This release includes a few potentially breaking changes.

Minimum Supported Erlang Version

Starting with this release, RabbitMQ requires Erlang 26.0 or later versions. Nodes will fail to start on older Erlang releases.

Client Library Compatibility

Client libraries that were compatible with RabbitMQ 3.12.x will be compatible with 3.13.0. RabbitMQ Stream Protocol clients must be upgraded in order to use the stream filtering feature introduced in this release.

Consistency Model and Schema Modification Visibility Guarantees of Khepri and Mnesia

Khepri has an important difference from Mnesia when it comes to schema modifications such as queue or stream declarations, or binding declarations. These changes won't be noticeable with many workloads but can affect some, in particular, certain integration tests.

Consider two scenarios, A and B.

Scenario A

There is only one client. The client performs the following steps:

  1. It declares a queue Q
  2. It binds Q to an exchange X
  3. It publishes a message M to the exchange X
  4. It expects the message to be routed to queue Q
  5. It consumes the message

In this scenario, there should be no observable difference in behavior. The client's expectations will be met.

Scenario B

There are two clients, One and Two, connected to nodes R1 and R3, and using the same virtual host. Node R2 has no client connections.

Client One performs the following steps:

  1. It declares a queue Q
  2. It binds Q to an exchange X
  3. It gets a queue declaration confirmation back
  4. It notifies Client Two (or Client Two implicitly finds out) that it has finished the steps above (for example, in an integration test)
  5. Client Two publishes a message M to X
  6. Clients One and Two expect the message to be routed to Q

In this scenario, on step three Mnesia would return once all cluster nodes have committed the update. Khepri, however, will return once a majority of nodes, including the node handling Client One's operations, have committed it.

This majority may include nodes R1 and R2 but not node R3, meaning that message M, published by Client Two connected to node R3 in the above example, is not guaranteed to be routed.

Once all schema changes propagate to node R3, Client Two's subsequent publishes on node R3 will be guaranteed to be routed.

This is a trade-off of a Raft-based system, which assumes that a write accepted by a majority of nodes can be considered a success.

Workaround Strategies

To satisfy Client Two's expectations in scenario B, Khepri could perform consistent queries of bindings (involving a majority of replicas) when routing messages, but that would have a significant impact on the throughput of certain protocols (such as MQTT) and exchange/destination types (anything that resembles a topic exchange in AMQP 0-9-1).

Applications that rely on multiple connections that depend on a shared topology have several coping strategies.

If an application uses two or more connections to different nodes, it can declare its topology on boot and then inject a short pause (1-2 seconds) before proceeding with other operations.

Applications that rely on dynamic topologies can switch to use a "static" set of exchanges and bindings.

Application components that do not need a shared topology can each configure their own queues/streams/bindings.

Test suites that use multiple connections to different nodes can choose to use just one connection or connect to the same node, or inject a pause, or await a certain condition that indicates that the topology is in place.
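The last strategy above, awaiting a condition, can be sketched with a small polling helper. This is an application-level sketch, not a RabbitMQ API; queue_exists in the usage comment is a hypothetical application check:

```python
import time

def wait_for(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns truthy or `timeout` elapses.
    Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Example (hypothetical helper): block until the topology is in place.
# assert wait_for(lambda: queue_exists("q"), timeout=10)
```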

Management Plugin and HTTP API

The GET /api/queues HTTP API endpoint has dropped several rarely used metrics, resulting in about a 25% traffic saving.
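Response size can be reduced further by using the endpoint's pagination and filtering query parameters instead of fetching the full collection. A sketch of building such a request URL; the host, port, and filter values are assumptions:

```python
from urllib.parse import urlencode

# Request one page of matching queues instead of the full,
# potentially very large, unpaginated response.
params = urlencode({"page": 1, "page_size": 100,
                    "name": "orders", "use_regex": "false"})
url = f"http://localhost:15672/api/queues?{params}"
print(url)
```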

MQTT Plugin

mqtt.subscription_ttl (in milliseconds) configuration setting was replaced with mqtt.max_session_expiry_interval_seconds (in seconds). A 3.13 RabbitMQ node will fail to boot if the old configuration setting is set. For example, if you set mqtt.subscription_ttl = 3600000 (1 hour) prior to 3.13, replace that setting with mqtt.max_session_expiry_interval_seconds = 3600 (1 hour) in 3.13.
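In rabbitmq.conf terms, the migration described above looks like this:

```ini
# 3.12.x and earlier (milliseconds); no longer accepted in 3.13:
# mqtt.subscription_ttl = 3600000

# 3.13 equivalent (seconds):
mqtt.max_session_expiry_interval_seconds = 3600
```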

rabbitmqctl node_health_check is Now a No-Op

rabbitmqctl node_health_check has been deprecated for over three years and is now a no-op (it does nothing).

See the Health Checks section in the monitoring guide to find out what modern alternatives are available.

openSUSE Leap Package is not Provided

An openSUSE Leap package will not be provided with this release of RabbitMQ.

This release requires Erlang 26 and there is an Erlang 26 package available from Erlang Factory but the package depends on glibc 2.34, and all currently available openSUSE Leap releases (up to 15.5) ship with 2.31 at most.

Team RabbitMQ would like to continue building an openSUSE Leap package when a Leap 15.5-compatible Erlang 26 package becomes publicly available.

Getting Help

Any questions about this release, upgrades or RabbitMQ in general are welcome in GitHub Discussions or on our community Discord.

Changes Worth Mentioning

Release notes are kept under rabbitmq-server/release-notes.

Core Server

Enhancements

  • Khepri now can be used as an alternative schema data store in RabbitMQ, by enabling a feature flag:

    rabbitmqctl enable_feature_flag khepri_db
    

    In practical terms this means that it will be possible to swap Mnesia for a Raft-based data store that will predictably recover from network partitions and node failures, the same way quorum queues and streams already do. At the same time, this means that RabbitMQ clusters now must have a majority of nodes online at all times, or all client operations will be refused.

    Like quorum queues and streams, Khepri uses RabbitMQ's Raft implementation under the hood. With Khepri enabled, all key modern features of RabbitMQ will use the same fundamental approach to recovery from failures, relying on a library that passes a Jepsen test suite.

    Team RabbitMQ intends to make Khepri the default schema database starting with RabbitMQ 4.0.

    GitHub issue: #7206

  • Messages are now internally stored using a new common, heavily AMQP 1.0-influenced container format. This is a major step towards a protocol-agnostic core: a common format that encapsulates a sum of data types used by the protocols RabbitMQ supports, plus annotations for routing, dead-lettering state, and other purposes.

    AMQP 1.0, AMQP 0-9-1, MQTT and STOMP have adopted or will adopt this internal representation in upcoming releases. The RabbitMQ Stream protocol already uses the AMQP 1.0 message container structure internally.

    This common internal format will allow for more correct and potentially more efficient multi-protocol support in RabbitMQ, and will allow most cross-protocol translation rough edges to be smoothed out.

    GitHub issue: #5077

  • Target quorum queue replica state is now continuously reconciled.

    When the number of online replicas of a quorum queue goes below (or above) its target, new replicas will be automatically placed if enough cluster nodes are available. This is a more automatic version of how quorum queue replicas were originally grown.

    For automatic shrinking of queue replicas, the user must opt in.

    Contributed by @SimonUnge (AWS).

    GitHub issue: #8218

  • Reduced memory footprint, improved memory use predictability and throughput of classic queues (version 2, or CQv2). This particularly benefits classic queues with longer backlogs.

    Classic queue v2 (CQv2) storage implementation is now the default. It is possible to switch the default back to CQv1 using rabbitmq.conf:

    # uses CQv1 by default
    classic_queue.default_version = 1
    

    Individual queues can be declared with a specific version by passing the x-queue-version optional argument, and the version can also be set via a queue-version policy.

    GitHub issue: #8308

  • Revisited peer discovery implementation that further reduces the probability of two or more sets of nodes forming separate clusters when all cluster nodes are created at the same time and boot in parallel.

    GitHub issue: #9797

  • Non-mirrored classic queues: optimizations of storage for larger (greater than 4 kiB) messages.

    GitHub issues: #6090, #8507

  • A subsystem for marking features as deprecated.

    GitHub issue: #7390

  • Plugins can now register custom queue types, which means a custom queue type can be provided (and distributed) by a plugin.

    Contributed by @luos (Erlang Solutions).

    GitHub issues: #8834, #8927

Bug Fixes

This release includes all bug fixes shipped in the 3.12.x series.

  • Feature flag discovery on a newly added node could discover an incomplete inventory of feature flags.

    GitHub issue: #8477

  • Feature flag discovery operations will now be retried multiple times in case of network failures.

    GitHub issue: #8491

  • Node maintenance status is now replicated across the cluster. It was previously accessible to all nodes but not replicated.

    GitHub issue: #9005

Management Plugin

Enhancements

  • New API endpoint, GET /api/stream/{vhost}/{name}/tracking, can be used to track publisher and consumer offsets in a stream.

    GitHub issue: #9642

  • Several rarely used queue metrics were removed to reduce inter-node data transfers and CPU burn during API response generation. The effects will be particularly pronounced for the GET /api/queues endpoint used without filtering or pagination, which can produce enormously large responses.

    A couple of relevant queue metrics or state fields were lifted to the top level.

    This is a potentially breaking change.

    Note that Prometheus is the recommended option for monitoring, not the management plugin's HTTP API.

    GitHub issues: #9437, #9578, #9633

Stream Plugin

Enhancements

  • Support for (consumer) stream filtering.

    This allows consumers that are only interested in a subset of data in a stream to receive less data. Note that false positives are possible, so this feature should be accompanied by client library or application-level filtering.

    GitHub issue: #8207

MQTT Plugin

Enhancements

  • Support for MQTTv5 (with limitations).

    GitHub issues: #7263, #8681

  • Negative message acknowledgements are now propagated to MQTTv5 clients.

    GitHub issue: #9034

  • Potential incompatibility: the mqtt.subscription_ttl configuration setting was replaced with mqtt.max_session_expiry_interval_seconds, which targets MQTTv5.

    GitHub issue: #8846

AMQP 1.0 Plugin

Bug Fixes

  • During AMQP 1.0 to AMQP 0-9-1 conversion, the Correlation ID message property is now stored as x-correlation-id (instead of x-correlation) for values longer than 255 bytes.

    This is a potentially breaking change.

    GitHub issue: #8680

Dependency Changes

  • ra was upgraded to 2.7.0
  • osiris was upgraded to 1.6.9

Source Code Archives

To obtain source code of the entire distribution, please download the archive named rabbitmq-server-3.13.0-rc.3.tar.xz instead of the source tarball produced by GitHub.