Partisan is a scalable, flexible, TCP-based membership system and distribution layer for the BEAM. It replaces Distributed Erlang with manual connection management over TCP, and provides several pluggable backends for different deployment scenarios.
Partisan is a runtime system that enables greater scalability and reduced latency for distributed actor applications.
Partisan exposes a membership_strategy interface for handling messages. Partisan automatically uses this membership strategy when processing messages entering and leaving the system; the application developer only needs to handle internal state transitions and supply the system with an updated list of members. Partisan automatically sets up the required connections, serializes and deserializes messages, performs failure detection, and forwards messages. This makes it possible to implement protocols with very little code; our implementation of the full-mesh membership protocol is 152 LOC.
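As a hedged illustration of what such a strategy involves, the sketch below shows a minimal membership module. The module name, callback names, and signatures here are assumptions for illustration only; consult the membership_strategy behaviour in your Partisan version for the real interface.

```erlang
%% Sketch of a membership strategy module. Callback names and
%% signatures are illustrative assumptions, not Partisan's exact API.
-module(my_full_mesh_strategy).

-export([init/1, join/2, leave/2, handle_message/2]).

%% Initialise local state with ourselves as the only member.
init(Identity) ->
    {ok, [Identity], #{myself => Identity, members => [Identity]}}.

%% A node joined: add it to the member list and return the update.
%% Partisan takes care of opening the actual TCP connections.
join(Node, State) ->
    Members = [Node | maps:get(members, State)],
    {ok, Members, State#{members => Members}}.

%% A node left: remove it from the member list.
leave(Node, State) ->
    Members = lists:delete(Node, maps:get(members, State)),
    {ok, Members, State#{members => Members}}.

%% Protocol-internal message: here we keep our state unchanged;
%% serialization, forwarding, and failure detection are Partisan's job.
handle_message(_Message, State) ->
    {ok, maps:get(members, State), State}.
```

The point of the abstraction is visible in the sketch: the strategy only computes member-list transitions, and everything transport-related stays in Partisan.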
Erlang/OTP, specifically Distributed Erlang (a.k.a. disterl), uses a full-mesh overlay network. This means that, in the worst case, every node is connected to and communicates with every other node in the system.
Failure detection. Nodes send periodic heartbeat messages to their connected peers and deem a peer "failed" or "unreachable" when it misses a certain number of heartbeats, governed by the net_ticktime setting in the kernel application.
Because of this heartbeating, and of issues in the way Erlang handles certain internal data structures, Erlang systems hit a practical limit on the number of connected nodes that, depending on the application, falls somewhere between 60 and 200 nodes.
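For reference, the disterl tick period is configured through the kernel application's net_ticktime parameter; a silent peer is declared unreachable after roughly that many seconds. The value below is illustrative, not a recommendation.

```erlang
%% sys.config fragment: raise net_ticktime from the default 60 s,
%% e.g. for high-latency links (the value 120 is illustrative).
[
 {kernel, [
   {net_ticktime, 120}
 ]}
].
```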
Moreover, Erlang conflates control-plane messages with application messages on the same TCP/IP connection, and uses a single TCP/IP connection between any two nodes, making it susceptible to head-of-line blocking. This also leads to congestion and contention that further increase latency.
This model may scale well for datacenter deployments, where low latency can be assumed, but not for geo-distributed deployments, where latency is non-uniform and can show wide variance.
Partisan was designed to increase scalability, reduce latency, and improve failure detection for distributed Erlang/BEAM applications. It provides the following pluggable backends:
partisan_pluggable_peer_service_manager: full mesh with TCP-based failure detection. All nodes maintain active connections to all other nodes in the system using one or more TCP connections.
partisan_hyparview_peer_service_manager: a modified implementation of the HyParView protocol, a hybrid partial-view, peer-to-peer membership protocol with TCP-based failure detection, designed for high-scale, high-churn environments.
partisan_client_server_peer_service_manager: star topology, where clients communicate with servers, and servers communicate with other servers.
partisan_static_peer_service_manager: static membership, where connections are explicitly made between nodes.
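Selecting a backend is a matter of configuration. The fragment below is a sketch; the application key name is an assumption based on Partisan's documentation and may differ between Partisan versions.

```erlang
%% sys.config fragment choosing the full-mesh pluggable backend.
%% The key name (peer_service_manager) is an assumption; check your
%% Partisan version's documentation for the exact setting.
[
 {partisan, [
   {peer_service_manager, partisan_pluggable_peer_service_manager}
 ]}
].
```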
Partisan provides an alternative to net_kernel for monitoring nodes and remote processes. Notice this currently only works for
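As a hedged sketch of what process monitoring through Partisan can look like (the partisan:monitor call shown is an assumption about the API; names and arities vary between Partisan versions, so verify against your version before relying on it):

```erlang
%% Illustrative: monitor a remote registered process via Partisan
%% instead of erlang:monitor/2. The partisan:monitor/2 call and the
%% shape of its arguments are assumptions, not a confirmed API.
monitor_remote(Name, Node) ->
    Ref = partisan:monitor(process, {Name, Node}),
    receive
        {'DOWN', Ref, process, _Pid, Reason} ->
            {down, Reason}
    after 5000 ->
        still_alive
    end.
```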
Find the documentation for Partisan releases at hex.pm.
Alternatively you can build it yourself locally using
The resulting documentation will be found in the docs directory; open the index.html file with your preferred web browser.