Pinset orchestration for IPFS
IPFS Cluster v1.1.0 is a maintenance release that comes with a number of improvements in libp2p connection and resource management. We are bumping the minor release version to draw attention to the slight behavioral changes included in this release.
In order to improve how clusters with a very large number of peers behave, we have changed the previous logic, which made every peer reconnect constantly to 3 specific peers (usually the "trusted peers" in collaborative clusters). For obvious reasons, this caused bottlenecks when clusters grew into the thousands of peers. Swarm connections should now grow more organically, and we only re-bootstrap when they fall below expected levels.
Another relevant change is the exposure of the libp2p Resource Manager settings, and the new defaults, which limit libp2p usage to 25% of the system's total memory and 50% of the process' available file descriptors. The limits can be adjusted as explained below. The new defaults, along with other internal details controlling the initialization of the resource manager, are likely more restrictive than the defaults used in previous versions. That means that memory-constrained systems may start seeing resource-manager errors where there were none before. The solution is to increase the limits. The limits are conservative because, at the end of the day, Kubo is the major resource user.
We have also updated to the latest Pebble release. This should not cause any problems for users that already bumped major_format_version when upgrading to v1.0.8; otherwise, we recommend setting it to 16, per the warning printed on daemon start.
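For reference, a minimal sketch of the relevant datastore fragment, assuming the value is set through the format_major_version option of pebble_options (the same field shown in the v1.0.6 configuration further below):

"datastore": {
  "pebble": {
    "pebble_options": {
      ...
      "format_major_version": 16,
      ...
    }
  }
}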
There are no breaking changes on this release.
[] instead of null | ipfs/ipfs-cluster#2051
A resource_manager setting has been added to the main cluster configuration section:
"cluster": {
  ...
  "resource_manager": {
    "enabled": true,
    "memory_limit_bytes": 0,
    "file_descriptors_limit": 0
  },
  ...
}
When not present, the defaults will be as shown above. Using negative values will produce an error.
The new setting controls the working limits for the libp2p Resource Manager. 0 means "based on the system's resources":

- memory_limit_bytes defaults to 25% of the system's total memory when set to 0, with a minimum of 1GiB.
- file_descriptors_limit defaults to 50% of the process' file descriptor limit when set to 0.

These limits can be set manually, or the resource manager can be fully disabled by toggling the enabled setting.
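As an illustration only (the figures below are arbitrary examples, not recommendations from this release), a peer that should cap libp2p at 2 GiB of memory and 4096 file descriptors could set explicit values instead of relying on the automatic defaults:

"cluster": {
  ...
  "resource_manager": {
    "enabled": true,
    "memory_limit_bytes": 2147483648,
    "file_descriptors_limit": 4096
  },
  ...
}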
When the limits are reached, libp2p will print warnings and errors as connections and libp2p streams are dropped. Note that the limits only affect libp2p resources and not the total memory usage of the IPFS Cluster daemon.
No changes.
No changes.
No changes.
No relevant changes.
Nothing.
IPFS Cluster v1.0.8 is a maintenance release.
This release updates dependencies (latest boxo and libp2p) and should bring a couple of Pebble-related improvements: among others, a warning is now printed when the Pebble MajorFormatVersion is higher than what is used in the configuration. Users should increase their major_format_version to maintain forward-compatibility with future versions of Pebble.
Additionally, some bugs have been fixed and a couple of useful features added, as mentioned below.
There are no breaking changes on this release.
Dockerfile-bundle file has been removed (unmaintained) | ipfs/ipfs-cluster#1986
Two new options have been added to forcefully control the cluster peer libp2p host address announcements: cluster.announce_multiaddress and cluster.no_announce_multiaddress. Both take a slice of multiaddresses.
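A sketch of how these options could be set in the cluster configuration section; the multiaddresses below are placeholders for illustration only, not suggested values:

"cluster": {
  ...
  "announce_multiaddress": ["/ip4/1.2.3.4/tcp/9096"],
  "no_announce_multiaddress": ["/ip4/127.0.0.1/tcp/9096", "/ip6/::1/tcp/9096"],
  ...
}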
No changes.
No changes.
No changes.
No relevant changes.
Nothing.
IPFS Cluster v1.0.7 is a maintenance release.
This release updates dependencies and switches to the Boxo library suite with the latest libp2p release.
See the notes below for a list of changes and bug fixes.
There are no breaking changes on this release.
A new option, cluster.pin_only_on_untrusted_peers, has been added, opposite to the pin_only_on_trusted_peers option that already existed. It defaults to false. Both options cannot be true at the same time. When enabled, only "untrusted" peers are considered for pin allocations.
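For example, a cluster where pins should only be allocated to untrusted peers would set (a sketch, assuming the option sits next to the existing pin_only_on_trusted_peers flag in the cluster section):

"cluster": {
  ...
  "pin_only_on_trusted_peers": false,
  "pin_only_on_untrusted_peers": true,
  ...
}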
A new /health endpoint has been added. It returns 204 (No Content) and no body, and can be used to monitor that the service is running.
The same /health endpoint, returning 204 (No Content) and no body, has also been added to the Pinning Service API.
Calling /api/v0/pin/ls on the proxy API now adds a final newline at the end of the response. This aligns with what Kubo does.
No relevant changes.
ipfs-cluster-service now sends a notification to systemd when it becomes "ready" (that is, after all initialization is completed). This means systemd service files for ipfs-cluster-service can use Type=notify.
The official Docker images are now built with support for the linux/amd64, linux/arm/v7 and linux/arm64/v8 architectures. We have also switched to Alpine Linux as the base image (instead of Busybox). Binaries are now built with CGO_ENABLED=0.
IPFS Cluster v1.0.6 is a maintenance release with some small fixes. The main change in this release is that pebble becomes the default datastore backend, as we mentioned in the last release.
Pebble is the datastore backend used by CockroachDB and is inspired by RocksDB. In our testing, Pebble has demonstrated good performance and optimal disk usage, and it incorporates modern datastore-backend features such as compression, caching and bloom filters. It is actively maintained by the CockroachDB team and therefore seems like the best default choice for IPFS Cluster.
Badger3, a very good alternative, becomes the new default for platforms not supported by Pebble (mainly 32-bit architectures). Badger and LevelDB are still supported, but we strongly discourage their use for new Cluster peers.
There are no breaking changes on this release.
ipfs-cluster-ctl add --no-pin flag | ipfs/ipfs-cluster#1852
The pebble section of the configuration has some additional options and new, adjusted defaults:

"pebble": {
"pebble_options": {
"cache_size_bytes": 1073741824,
"bytes_per_sync": 1048576,
"disable_wal": false,
"flush_delay_delete_range": 0,
"flush_delay_range_key": 0,
"flush_split_bytes": 4194304,
"format_major_version": 1,
"l0_compaction_file_threshold": 750,
"l0_compaction_threshold": 4,
"l0_stop_writes_threshold": 12,
"l_base_max_bytes": 134217728,
"levels": [
{
"block_restart_interval": 16,
"block_size": 4096,
"block_size_threshold": 90,
"compression": 2,
"filter_type": 0,
"filter_policy": 10,
"index_block_size": 4096,
"target_file_size": 4194304
},
{
"block_restart_interval": 16,
"block_size": 4096,
"block_size_threshold": 90,
"compression": 2,
"filter_type": 0,
"filter_policy": 10,
"index_block_size": 4096,
"target_file_size": 8388608
},
{
"block_restart_interval": 16,
"block_size": 4096,
"block_size_threshold": 90,
"compression": 2,
"filter_type": 0,
"filter_policy": 10,
"index_block_size": 4096,
"target_file_size": 16777216
},
{
"block_restart_interval": 16,
"block_size": 4096,
"block_size_threshold": 90,
"compression": 2,
"filter_type": 0,
"filter_policy": 10,
"index_block_size": 4096,
"target_file_size": 33554432
},
{
"block_restart_interval": 16,
"block_size": 4096,
"block_size_threshold": 90,
"compression": 2,
"filter_type": 0,
"filter_policy": 10,
"index_block_size": 4096,
"target_file_size": 67108864
},
{
"block_restart_interval": 16,
"block_size": 4096,
"block_size_threshold": 90,
"compression": 2,
"filter_type": 0,
"filter_policy": 10,
"index_block_size": 4096,
"target_file_size": 134217728
},
{
"block_restart_interval": 16,
"block_size": 4096,
"block_size_threshold": 90,
"compression": 2,
"filter_type": 0,
"filter_policy": 10,
"index_block_size": 4096,
"target_file_size": 268435456
}
],
"max_open_files": 1000,
"mem_table_size": 67108864,
"mem_table_stop_writes_threshold": 20,
"read_only": false,
"wal_bytes_per_sync": 0
}
}
No changes.
No changes.
No changes.
No relevant changes.
The --datastore flag to ipfs-cluster-service init now defaults to pebble on most platforms, and to badger3 on those where Pebble is not supported (arm, 386).
IPFS Cluster v1.0.5 is a maintenance release with one main feature: support for the badger3 and pebble datastores.
Additionally, this release fixes compatibility with Kubo v0.18.0 and addresses the crashes related to libp2p autorelay that affected the previous version.
pebble and badger3 are much newer backends than the already available Badger and LevelDB. They are faster, use significantly less disk space and support additional options like compression. We have set pebble as the default datastore used by the official Docker container, and we will likely make it the final default choice for new installations. In the meantime, we encourage the community to try them out and provide feedback.
There are no breaking changes on this release.
pebble and badger3 datastores | ipfs/ipfs-cluster#1809
pebble by default | ipfs/ipfs-cluster#1842
The datastore section of the configuration now supports the two new datastore backends:

badger3:

"badger3": {
"gc_discard_ratio": 0.2,
"gc_interval": "15m0s",
"gc_sleep": "10s",
"badger_options": {
"dir": "",
"value_dir": "",
"sync_writes": false,
"num_versions_to_keep": 1,
"read_only": false,
"compression": 0,
"in_memory": false,
"metrics_enabled": true,
"num_goroutines": 8,
"mem_table_size": 67108864,
"base_table_size": 2097152,
"base_level_size": 10485760,
"level_size_multiplier": 10,
"table_size_multiplier": 2,
"max_levels": 7,
"v_log_percentile": 0,
"value_threshold": 100,
"num_memtables": 5,
"block_size": 4096,
"bloom_false_positive": 0.01,
"block_cache_size": 0,
"index_cache_size": 0,
"num_level_zero_tables": 5,
"num_level_zero_tables_stall": 15,
"value_log_file_size": 1073741823,
"value_log_max_entries": 1000000,
"num_compactors": 4,
"compact_l_0_on_close": false,
"lmax_compaction": false,
"zstd_compression_level": 1,
"verify_value_checksum": false,
"checksum_verification_mode": 0,
"detect_conflicts": false,
"namespace_offset": -1
}
}
pebble:

"pebble": {
"pebble_options": {
"bytes_per_sync": 524288,
"disable_wal": false,
"flush_delay_delete_range": 0,
"flush_delay_range_key": 0,
"flush_split_bytes": 4194304,
"format_major_version": 1,
"l0_compaction_file_threshold": 500,
"l0_compaction_threshold": 4,
"l0_stop_writes_threshold": 12,
"l_base_max_bytes": 67108864,
"levels": [
{
"block_restart_interval": 16,
"block_size": 4096,
"block_size_threshold": 90,
"compression": 1,
"filter_type": 0,
"index_block_size": 4096,
"target_file_size": 2097152
}
],
"max_open_files": 1000,
"mem_table_size": 4194304,
"mem_table_stop_writes_threshold": 2,
"read_only": false,
"wal_bytes_per_sync": 0
}
}
In order to choose the backend during initialization, use the --datastore flag: ipfs-cluster-service init --datastore <backend>.
No changes.
No changes.
No changes.
No relevant changes.
Docker containers now use pebble as the default datastore backend.
Nothing.
IPFS Cluster v1.0.4 is a maintenance release addressing a couple of bugs and adding more "state crdt" commands.
One of the bugs has the potential to cause a panic, while a second one can potentially deadlock pinning operations and hang new pinning requests. We recommend that all users upgrade as soon as possible.
There are no breaking changes on this release.
No other changes.
There are no configuration changes for this release.
No changes.
No changes.
No changes.
No relevant changes.
Nothing.
IPFS Cluster v1.0.3 is a maintenance release addressing some bugs and bringing some improvements to error handling behavior, as well as a couple of small features.
This release upgrades to the latest libp2p release (v0.22.0).
There are no breaking changes on this release.
/block/put and /dag/put requests | ipfs/ipfs-cluster#1738 | ipfs/ipfs-cluster#1756
There are no configuration changes for this release.
No changes.
No changes.
The IPFS Proxy now intercepts /block/put and /dag/put requests. This happens as follows:

- The request is forwarded to the underlying IPFS daemon with the ?pin query parameter always set to false.
- If ?pin=true was set, a cluster pin is triggered for every block and dag object uploaded (reminder that these endpoints accept multipart uploads).

No relevant changes.
Note that more than 10 failed requests to IPFS will now result in a rate limit of 1 req/s for any request to IPFS. This may cause things to queue up instead of hammering the IPFS daemon with requests that fail. The rate limit is removed as soon as one request succeeds.
Also note that Cluster peers will no longer become fully operational on start until IPFS has been detected to be available: no metrics will be sent, no recover operations will be run, etc. Essentially, the Cluster peer waits for IPFS to be available before doing anything that requires it, rather than doing it right away and failing.
IPFS Cluster v1.0.2 is a maintenance release with bug fixes and another iteration of the experimental support for the Pinning Services API that was introduced in v1.0.0, including Bearer token authorization support for both the REST and the Pinning Service APIs.
This release includes a security fix in the go-car library. The security issue allowed an attacker to crash a cluster peer or cause excessive memory usage when uploading CAR files via the REST API (the POST /add?format=car endpoint).
This is also the first release after moving the project from the "ipfs" to the "ipfs-cluster" GitHub organization, which means the project's Go modules have new paths (everything is redirected, though). The Docker builds remain inside the "ipfs" namespace (i.e. docker pull ipfs/ipfs-cluster).
IPFS Cluster is also ready to work with go-ipfs v0.13.0+. We recommend upgrading.
There are no configuration changes for this release.
The REST API has a new POST /token endpoint, which returns a JSON object with a JWT token (when correctly authenticated).
This token can be used to authenticate by sending it in an Authorization: Bearer <token> header.
The token is tied and verified against a basic authentication user and password, as configured in the basic_auth_credentials field.
At the moment we do not support revocation, expiration and other token options.
No changes to IPFS Proxy API.
All prebuilt binaries are available on dist.ipfs.io.
IPFS Cluster v1.0.1 is a maintenance release ironing out some issues and bringing a couple of improvements around observability of cluster performance: we have fixed the ipfscluster_pins metric and added a few new ones that help determine how fast the cluster can pin and add blocks.
Please read below for a list of changes and things to watch out for.
Peers running IPFS Cluster v1.0.0 will not be able to read a pin's user-set metadata fields for pins submitted by peers in later versions, since metadata is now stored in a different protobuf field. If this is an issue, all peers in the cluster should upgrade.
ipfs-cluster-ctl pin ls hangs | ipfs/ipfs-cluster#1663
ipfscluster_pins metric issues bad values | ipfs/ipfs-cluster#1645
There is a new pinqueue configuration object inside the informer section of newly initialized configurations:
"informer": {
...
"pinqueue": {
"metric_ttl": "30s",
"weight_bucket_size": 100000
},
...
This enables the Pinqueue Informer, which broadcasts metrics containing the size of the pin queue, with the metric weight divided by weight_bucket_size. The new metric is not used for allocations by default; it needs to be manually added to the allocate_by option in the allocator, usually like:
"allocator": {
"balanced": {
"allocate_by": [
"tag:group",
"pinqueue",
"freespace"
]
}
}
No changes to REST API.
No changes to IPFS Proxy API.
No relevant changes to Go APIs, other than the PinTracker interface now requiring a PinQueueSize method.
The following metrics are now available in the Prometheus endpoint when enabled:
ipfscluster_pins_ipfs_pins gauge
ipfscluster_pins_pin_add counter
ipfscluster_pins_pin_add_errors counter
ipfscluster_blocks_put counter
ipfscluster_blocks_added_size counter
ipfscluster_blocks_added counter
ipfscluster_blocks_put_error counter
The following metrics were converted from counter to gauge:
ipfscluster_pins_pin_queued
ipfscluster_pins_pinning
ipfscluster_pins_pin_error
Peers that are reporting freespace as 0 and which use this metric to allocate pins will no longer be available for allocations (they stop broadcasting this metric). This means that setting StorageMax on IPFS to 0 effectively prevents any pins from being explicitly allocated to a peer (that is, when replication_factor != everywhere).
IPFS Cluster v1.0.0 is a major release that marks that this project has reached maturity and is able to perform and scale in production environments (50+ million pins and 20 nodes).
This is a breaking release: v1.0.0 cluster peers are not compatible with previous cluster peers, as we have bumped the RPC protocol version (which had remained unchanged since 0.12.0).
For a full list of changes, see the CHANGELOG.