Dapr Versions

Dapr is a portable, event-driven runtime for building distributed applications across cloud and edge.

v1.13.0-rc.4

3 months ago

This is the release candidate 1.13.0-rc.4

v1.13.0-rc.3

3 months ago

This is the release candidate 1.13.0-rc.3

v1.12.5

3 months ago

Dapr 1.12.5

Azure Event Hubs bindings and pubsub silently fail to recover subscriptions during event processor failures

Problem

The Azure Event Hubs bindings and pubsub components may fail in such a way that no further events are received for one or more subscribed topics.

Impact

Impacts users running Dapr 1.12.4 or earlier who use the Azure Event Hubs bindings or pubsub components. The Dapr sidecar error message "Error from event processor" indicates that this problem has been encountered.

Root cause

Under certain failure scenarios (including network interruptions), the Event Hubs event processor for a particular subscription topic can return an error. When this occurs, the error is logged once, existing partition clients terminate with errors, and no further partition clients are allocated. At this point the subscription for the topic effectively ends, with no attempt to restart it. This does not, however, impact other sidecar functionality, and the sidecar continues to report a healthy status.

Solution

Added error handling for the topic event processor host to restart the entire subscription loop for this topic 5 seconds after an error is encountered.
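
As an illustration of the pattern (not Dapr's actual internals; processTopic and subscribeWithRestart are hypothetical names), a minimal Go sketch of a subscription loop that restarts after a processor error:

```go
package main

import (
	"context"
	"errors"
	"log"
	"time"
)

// processTopic stands in for the per-topic Event Hubs event processor;
// it returns when the processor fails. (Hypothetical name.)
func processTopic(ctx context.Context, topic string) error {
	// ... consume partition events until an error occurs ...
	return errors.New("simulated event processor failure")
}

// subscribeWithRestart restarts the entire subscription loop for the
// topic 5 seconds after each failure, instead of logging the error once
// and abandoning the subscription.
func subscribeWithRestart(ctx context.Context, topic string) {
	for {
		err := processTopic(ctx, topic)
		if err == nil || ctx.Err() != nil {
			return // clean shutdown
		}
		log.Printf("error from event processor for topic %q: %v; restarting in 5s", topic, err)
		select {
		case <-time.After(5 * time.Second):
		case <-ctx.Done():
			return
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 12*time.Second)
	defer cancel()
	subscribeWithRestart(ctx, "orders")
}
```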

Service invocation calls return duplicate headers on non-200 responses

Problem

When using the Service Invocation API, HTTP calls that returned non-200 responses from target apps would contain duplicate headers.

Impact

Impacts users running Dapr 1.12.0-1.12.4 who use Service Invocation with HTTP.

Root cause

Dapr did not check if headers were already added in non-successful responses.

Solution

Dapr was changed to skip headers that already exist when sending the response back to the calling app.
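
A minimal Go sketch of this kind of guard, assuming a hypothetical copyHeaders helper rather than Dapr's actual code:

```go
package main

import (
	"fmt"
	"net/http"
)

// copyHeaders relays headers from an app response to the outgoing
// response, adding a header only if the destination does not already
// carry it, so the caller never sees duplicated values.
func copyHeaders(dst, src http.Header) {
	for key, values := range src {
		if len(dst.Values(key)) > 0 {
			continue // header already set; skip to avoid duplicates
		}
		for _, v := range values {
			dst.Add(key, v)
		}
	}
}

func main() {
	dst := http.Header{"Content-Type": {"application/json"}}
	src := http.Header{"Content-Type": {"application/json"}, "X-App-Error": {"true"}}
	copyHeaders(dst, src)
	fmt.Println(dst) // Content-Type appears once; X-App-Error is added
}
```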

Pub/Sub messages containing content-length metadata from the broker would fail for consuming gRPC apps

Problem

Pub/Sub brokers that returned content-length metadata could cause an app's gRPC server to reject the request, because the size of the data returned by the broker differs from the size of the request Dapr sends to the app.

Impact

Impacts users running Dapr 1.x with Pub/Sub brokers that support delivering custom headers, in combination with publishers that include content-length metadata.

Root cause

Dapr did not remove the content-length header, if one existed, before sending the request to gRPC-enabled apps.

Solution

In accordance with gRPC recommendations, the content-length metadata is now removed from the message Dapr sends to the app.
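
As a sketch of the idea using the grpc-go metadata package (stripContentLength is a hypothetical helper, not Dapr's code):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/metadata"
)

// stripContentLength drops the content-length metadata coming from the
// broker before the event is forwarded to a gRPC app, since the payload
// Dapr delivers may differ in size from what the broker sent.
func stripContentLength(md metadata.MD) metadata.MD {
	out := md.Copy()
	out.Delete("content-length")
	return out
}

func main() {
	md := metadata.Pairs("content-length", "42", "traceparent", "00-abc-def-01")
	fmt.Println(stripContentLength(md)) // content-length removed, other keys kept
}
```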

Automatic client-side state encryption would panic on state decryption in self-hosted mode

Problem

When using client-side state encryption in self-hosted mode, decryption operations would cause Dapr to panic and exit.

Impact

Impacts users running Dapr 1.12.x using client-side state encryption in self-hosted mode.

Root cause

A regression in Dapr caused the metadata information containing the name of the decryption key to become empty.

Solution

The code causing the regression was reverted, making the decryption key name available again and allowing decryption of data to succeed.

v1.13.0-rc.2

3 months ago

This is the release candidate 1.13.0-rc.2

v1.13.0-rc.1

3 months ago

This is the release candidate 1.13.0-rc.1

v1.12.4

4 months ago

Dapr 1.12.4

This update includes bug fixes:

Mitigate race condition in placement table during sidecar restarts

Problem

When restarting a significant deployment on Kubernetes (more than 20 pods), many pods are removed from the placement table. When the restarts happen concurrently with leadership changes in the placement service, the placement table can get into a corrupt state that requires a restart of the placement service (#7311).

Impact

Impacts users running Dapr 1.12.0-1.12.3 who use actors and have deployments with many sidecar instances.

Root cause

Race condition between placement leadership changes and placement table updates due to pods being terminated.

Solution

Mitigated by reducing the chances of leadership changes in the placement service, and fixed a bug that could cause terminated pods to remain in the placement table forever.

Fixes in service invocation when target app has multiple replicas on Kubernetes

Problem

When performing service invocation to another app using Dapr (using either HTTP or gRPC), the caller sidecar establishes a gRPC connection with the target sidecar. When the target app is deployed with multiple replicas (scaled horizontally), a connection is established with each replica and requests are load-balanced across all replicas.

In Dapr 1.12.3 and lower running on Kubernetes, due to the way this logic was implemented, if the connection with one of the replicas became idle, the connections between the caller sidecar and all replicas would be severed, possibly interrupting other in-flight service invocation calls.

Impact

This issue impacts users running Dapr 1.12.3 and lower, running on Kubernetes, that use service invocation to invoke apps that are scaled horizontally.

This issue does not impact users who are running outside of Kubernetes or who are using other Dapr name resolution components (including mDNS or Consul).

Root cause

On Kubernetes, the gRPC connection was established with the DNS name of the target Service, and we allowed the gRPC-Go library to perform name resolution with the Kubernetes DNS server and establish a connection with each replica. When the connection with one of the replicas became idle, the caller app received a "GOAWAY" message, which was interpreted as a signal to terminate the connections with all replicas of the app.

Solution

We have changed the internals of service invocation so that DNS resolution is always performed in the caller's Dapr sidecar, including on Kubernetes. The gRPC-Go library now receives an individual IP address to connect to, so "GOAWAY" messages only impact the connection with an individual replica.
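
A rough Go sketch of the resolve-first approach, with a hypothetical service name; the actual Dapr implementation differs in detail:

```go
package main

import (
	"context"
	"fmt"
	"net"
)

// resolveTarget resolves a Kubernetes Service DNS name to individual pod
// IPs in the caller, so each replica can be dialed directly and a GOAWAY
// on one connection affects only that replica.
func resolveTarget(ctx context.Context, host string, port int) ([]string, error) {
	ips, err := net.DefaultResolver.LookupIPAddr(ctx, host)
	if err != nil {
		return nil, err
	}
	addrs := make([]string, 0, len(ips))
	for _, ip := range ips {
		addrs = append(addrs, fmt.Sprintf("%s:%d", ip.IP, port))
	}
	return addrs, nil
}

func main() {
	// Hypothetical in-cluster service name; resolution fails outside a cluster.
	addrs, err := resolveTarget(context.Background(), "myapp-dapr.default.svc.cluster.local", 50002)
	if err != nil {
		fmt.Println("resolution failed:", err)
		return
	}
	// Each address can now be dialed individually with gRPC, instead of
	// handing the DNS name to gRPC-Go's own resolver.
	fmt.Println(addrs)
}
```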

v1.12.4-rc.1

4 months ago

This is the release candidate 1.12.4-rc.1

v1.12.3

4 months ago

Dapr 1.12.3

This update includes bug fixes:

Fix timeouts in HTTP service invocation when resiliency policies with timeouts are applied

Problem

In HTTP service invocation, in certain cases when a resiliency policy is applied (for example, one that includes timeouts), requests could be interrupted earlier with a "context deadline exceeded" error.

Impact

Impacts users running Dapr 1.12.0-1.12.2 who use HTTP service invocation and have resiliency policies applied.

Root cause

When resiliency policies with timeouts are applied, a bug caused the wrong context to be used while sending the response to the client; in certain situations, that context could be terminated before the request completed.

Solution

We fixed the code that handles HTTP service invocation to make sure the timeout is applied to the entire response.
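
For illustration, a hedged Go sketch of the corrected pattern, where a single timeout context covers both the upstream request and the copying of its body (proxyWithTimeout and the URLs are hypothetical, not Dapr's code):

```go
package main

import (
	"context"
	"io"
	"log"
	"net/http"
	"time"
)

// proxyWithTimeout uses one timeout context for both sending the request
// and streaming the response body back, so the response cannot be cut off
// by an unrelated, shorter-lived context.
func proxyWithTimeout(w http.ResponseWriter, target string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel() // cancelled only after the full response has been copied

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, target, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	w.WriteHeader(resp.StatusCode)
	_, err = io.Copy(w, resp.Body) // body reads remain bound to the same context
	return err
}

func main() {
	http.HandleFunc("/proxy", func(w http.ResponseWriter, r *http.Request) {
		if err := proxyWithTimeout(w, "http://localhost:3000/slow", 10*time.Second); err != nil {
			log.Println("proxy error:", err)
		}
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```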

Fix dissemination of placement tables

Problem

Placement nodes acquire a lock, disseminate the tables, and release the lock in parallel, and the placement stream can disconnect before the tables have been disseminated. Dapr sidecar logs:

error invoke actor method: failed to invoke target x after 3 retries

Placement server logs:

level=error msg="Stream is disconnected before member is added"

Impact

Impacts users running Dapr 1.12.0-1.12.2.

Root cause

Dissemination did not use a background context, and the logic to acquire a lock, disseminate the tables, and release the lock was all running in parallel.

Solution

Updated the Dapr Placement service to use a background context and follow a three-step process: acquire a lock, disseminate the tables, and release the lock.
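
A simplified Go sketch of that sequencing, with a hypothetical placementTable type standing in for the real placement state:

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// placementTable is a hypothetical stand-in for the placement service's
// actor placement state.
type placementTable struct {
	mu      sync.Mutex
	version int
}

// disseminate runs the three steps strictly in sequence under one lock,
// and the dissemination uses a background context so it is not cancelled
// by the stream that triggered the update.
func (t *placementTable) disseminate() {
	t.mu.Lock()         // step 1: acquire the lock
	defer t.mu.Unlock() // step 3: release the lock only after dissemination

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// step 2: disseminate the updated table to all connected sidecars.
	t.version++
	sendTableToSidecars(ctx, t.version)
}

// sendTableToSidecars is a hypothetical placeholder for the broadcast.
func sendTableToSidecars(ctx context.Context, version int) {
	fmt.Printf("disseminating placement table v%d\n", version)
}

func main() {
	t := &placementTable{}
	t.disseminate()
}
```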

Fix SQL Server state store not working correctly with case-sensitive collations

Problem

When using a database with a case-sensitive collation, the SQL Server state store component did not work correctly. Certain operations, including those relied upon by the actor state store, were failing.

Impact

Impacts users of Dapr 1.11.0-1.12.2.

Root cause

A stored procedure referenced a column named with the wrong case. This was causing errors on case-sensitive databases.

Solution

We updated the stored procedure so it uses the correct case for column names.

Fix Kafka Pub/Sub consumer bottlenecks with multiple topics

Problem

When using Dapr with Kafka for Pub/Sub and subscribing to multiple topics, users might see bottlenecks in Kafka message processing and long initialization times when the Dapr sidecar starts up.

Impact

Impacts users of Dapr 1.11.0-1.12.2.

Root cause

When new topic subscriptions were added, Dapr would close the consumer group and recreate it, leading to partition rebalance across the cluster.

Solution

The Kafka consumer logic was changed to not recreate the consumer group every time a new topic subscription is added.
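
As a rough sketch of the pattern using the Sarama Kafka client (the consumer type and its wiring here are hypothetical, not the actual component code):

```go
package main

import (
	"context"
	"log"
	"sync"

	"github.com/IBM/sarama"
)

// handler is a minimal sarama consumer-group handler.
type handler struct{}

func (handler) Setup(sarama.ConsumerGroupSession) error   { return nil }
func (handler) Cleanup(sarama.ConsumerGroupSession) error { return nil }
func (h handler) ConsumeClaim(s sarama.ConsumerGroupSession, c sarama.ConsumerGroupClaim) error {
	for msg := range c.Messages() {
		log.Printf("topic=%s offset=%d", msg.Topic, msg.Offset)
		s.MarkMessage(msg, "")
	}
	return nil
}

// consumer keeps ONE consumer group for its whole lifetime. Adding a
// topic only restarts the Consume loop with the updated topic list; the
// group itself is never closed and recreated.
type consumer struct {
	group  sarama.ConsumerGroup
	mu     sync.Mutex
	topics []string
	cancel context.CancelFunc
}

func (c *consumer) addTopic(topic string) {
	c.mu.Lock()
	c.topics = append(c.topics, topic)
	if c.cancel != nil {
		c.cancel() // stop the current Consume loop; the group stays open
	}
	topics := append([]string(nil), c.topics...)
	ctx, cancel := context.WithCancel(context.Background())
	c.cancel = cancel
	c.mu.Unlock()

	go func() {
		for ctx.Err() == nil {
			// Consume blocks for one session; rejoining with the new
			// topic list reuses the existing group instead of tearing
			// it down and creating a new one.
			if err := c.group.Consume(ctx, topics, handler{}); err != nil {
				log.Println("consume error:", err)
			}
		}
	}()
}

func main() {
	cfg := sarama.NewConfig()
	cfg.Version = sarama.V2_8_0_0
	group, err := sarama.NewConsumerGroup([]string{"localhost:9092"}, "my-group", cfg)
	if err != nil {
		log.Fatal(err)
	}
	c := &consumer{group: group}
	c.addTopic("orders")
	c.addTopic("shipments") // restarts the loop, keeps the same group
	select {}
}
```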

v1.12.3-rc.2

5 months ago

This is the release candidate 1.12.3-rc.2

v1.12.3-rc.1

5 months ago

This is the release candidate 1.12.3-rc.1