High performance, thread-safe in-memory caching primitives for .NET
Full changelog: https://github.com/bitfaster/BitFaster.Caching/compare/v2.1.1...v2.1.2
- Changed `CmSketch` to use block-based indexing, matching Caffeine. The 64-byte blocks are the same size as x86 cache lines. This scheme exploits the hardware by reducing L1 cache misses, since each increment or frequency call is guaranteed to use data from the same cache line.
- Vectorized `CmSketch` using AVX2 intrinsics. When combined with block indexing, this is 2x faster than the original implementation in benchmarks and gives 20% better `ConcurrentLfu` throughput when tested end to end.
- `ConcurrentLfu` uses a running value cache when comparing frequency. In the best case this reduces the number of sketch frequency calls by 50%, improving throughput.
- Optimized `CmSketch.Reset`, reducing reset execution time by about 40%. Reset is called periodically, so this reduces worst-case rather than average `ConcurrentLfu` maintenance time.
- Optimized the `ConcurrentLfu` hot path, giving a minor latency reduction in benchmarks.
- Fixed the `ConcurrentLru` cycle count when evicting items, preventing runaway growth when stress tested on AMD CPUs.
- `ConcurrentLfu` now disposes items that were created but not cached when races occur during `GetOrAdd`.

Full changelog: https://github.com/bitfaster/BitFaster.Caching/compare/v2.1.0...v2.1.1
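The block-based sketch indexing described above can be illustrated with a short sketch. This is a hypothetical Java example, not the library's code: the class and method names are invented, and counters are simplified to plain `long`s (a production sketch packs 4-bit saturating counters). The point is that all four counters for a key are chosen from one 64-byte block, so an increment or frequency read touches a single cache line.

```java
// Hypothetical block-based count-min sketch (illustrative only).
// Eight longs = 64 bytes = one x86 cache line per block.
class BlockSketch {
    private final long[] table;   // counters, grouped in blocks of 8 longs
    private final int blockMask;

    BlockSketch(int blocks) {
        // blocks must be a power of two so a mask can select the block
        this.table = new long[blocks * 8];
        this.blockMask = blocks - 1;
    }

    // mix the hash bits so nearby keys spread across blocks
    private static int spread(int x) {
        x ^= x >>> 17;
        x *= 0xed5ad4bb;
        x ^= x >>> 11;
        x *= 0xac4c1b51;
        x ^= x >>> 15;
        return x;
    }

    void increment(int key) {
        int h = spread(key);
        int block = (h & blockMask) * 8;   // one block = one cache line
        for (int i = 0; i < 4; i++) {
            // each of the 4 counters is a slot *within the same block*
            int slot = block + ((h >>> (8 + i * 3)) & 7);
            table[slot]++;   // simplified: real sketches saturate 4-bit counters
        }
    }

    long frequency(int key) {
        int h = spread(key);
        int block = (h & blockMask) * 8;
        long min = Long.MAX_VALUE;
        for (int i = 0; i < 4; i++) {
            int slot = block + ((h >>> (8 + i * 3)) & 7);
            min = Math.min(min, table[slot]);   // count-min: take the minimum
        }
        return min;
    }
}
```

Because every slot touched by a key lives in the same 8-long block, the four counter accesses that previously could miss in four different cache lines now hit one.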
- Added `ConcurrentLfu`, a .NET implementation of the W-TinyLfu admission policy. This closely follows the approach taken by the Caffeine library by Ben Manes, including buffered reads/writes and hill climbing to optimize hit rate. A `ConcurrentLfuBuilder` provides integration with the existing atomic value factory and scoped value features.
- To support `ConcurrentLfu`, added the `MpscBoundedBuffer` and `StripedMpscBuffer` classes.
- To support `ConcurrentLfu`, added the `ThreadPoolScheduler`, `BackgroundThreadScheduler` and `ForegroundScheduler` classes.
- Added the `Counter` class for fast concurrent counting, based on LongAdder by Doug Lea.
- Changed `ConcurrentLru` to use `Counter` for all metrics and added padding to internal queue counters. This improved throughput by about 2.5x with about 10% worse latency.
- `FastConcurrentLru`, `ConcurrentLru`, `FastConcurrentTLru`, `ConcurrentTLru` and `ConcurrentLfu`.

Full changelog: https://github.com/bitfaster/BitFaster.Caching/compare/v2.0.0...v2.1.0
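The LongAdder-style counting mentioned above can be illustrated with a minimal striped counter. This is an invented Java sketch, not BitFaster's `Counter` and not `java.util.concurrent.atomic.LongAdder` itself: writers hash the current thread to one of several cells so concurrent increments mostly avoid contending on a single memory location, and a read sums the cells.

```java
import java.util.concurrent.atomic.AtomicLongArray;

// Minimal LongAdder-style striped counter (illustrative only).
class StripedCounter {
    private final AtomicLongArray cells;
    private final int mask;

    StripedCounter(int stripes) {
        // stripes must be a power of two for mask-based indexing
        this.cells = new AtomicLongArray(stripes);
        this.mask = stripes - 1;
    }

    void increment() {
        // spread thread ids so different threads tend to land on different cells
        int probe = (int) (Thread.currentThread().getId() * 0x9E3779B9);
        cells.incrementAndGet(probe & mask);
    }

    long sum() {
        // reads fold all the stripes; this is the expensive side of the trade
        long total = 0;
        for (int i = 0; i < cells.length(); i++) {
            total += cells.get(i);
        }
        return total;
    }
}
```

A production version additionally pads each cell to its own cache line (to prevent false sharing, the same motivation as the queue-counter padding above) and grows the cell array under contention; both are omitted here for brevity.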
- Split `ICache` into `ICache`, `IAsyncCache`, `IScopedCache` and `IScopedAsyncCache` interfaces. Mixing sync and async code paths is problematic and generally discouraged; splitting sync and async enables the most optimized code for each case. Scoped caches return a `Lifetime<T>` instead of a value, and internally contain all the boilerplate code needed to safely resolve races.
- Added `ConcurrentLruBuilder`, a fluent builder API that eases creation of different cache configurations. Each cache option comes with a small performance overhead; the builder lets the developer choose the exact combination of options needed, without any penalty from unused features.
- Added `ICapacityPartition`. The default partition scheme changed from equal partitions to 80% warm via `FavorWarmPartition`, improving hit rate across all tests.
- Switched to `ValueTask`, reducing memory allocations.

Full changelog: https://github.com/bitfaster/BitFaster.Caching/compare/v1.1.0...v2.0.0
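The "no penalty from unused features" idea behind the builder can be sketched with decorators. The following Java example is illustrative only: the interface, class and method names are invented and far simpler than the real `ConcurrentLruBuilder`. Each selected option wraps the core cache, so a cache built without an option never executes that option's code.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

interface Cache<K, V> {
    V getOrAdd(K key, Function<K, V> valueFactory);
}

// core cache with no optional features (and none of their costs)
class PlainCache<K, V> implements Cache<K, V> {
    private final Map<K, V> map = new HashMap<>();
    public synchronized V getOrAdd(K key, Function<K, V> f) {
        return map.computeIfAbsent(key, f);
    }
}

// metrics decorator: the counting cost exists only when this wrapper is built
class MetricsCache<K, V> implements Cache<K, V> {
    private final Cache<K, V> inner;
    long requests;
    MetricsCache(Cache<K, V> inner) { this.inner = inner; }
    public V getOrAdd(K key, Function<K, V> f) {
        requests++;
        return inner.getOrAdd(key, f);
    }
}

// fluent builder: wraps the core cache once per selected option
class CacheBuilder<K, V> {
    private boolean metrics;
    CacheBuilder<K, V> withMetrics() { metrics = true; return this; }
    Cache<K, V> build() {
        Cache<K, V> cache = new PlainCache<>();
        if (metrics) cache = new MetricsCache<>(cache);  // wrap only if requested
        return cache;
    }
}
```

A plain `build()` returns the undecorated cache, so callers who skip an option pay nothing for it, not even a branch on the hot path.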
- Added `Trim(int itemCount)` to `ICache` and all derived classes.
- Added `TrimExpired()` to the TLRU classes.
- Added `Cleared` and `Trimmed` to `ItemRemovedReason`.
- `Clear()` now reports `ItemRemovedReason.Cleared` instead of `Evicted`.

Full changelog: https://github.com/bitfaster/BitFaster.Caching/compare/v1.0.7...v1.1.0
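The distinction between the removal reasons can be shown with a toy example. This Java sketch invents its own types and callback shape purely for illustration; it is not the library's API. Trimming removes the coldest items and reports them as `Trimmed`, while clearing reports everything as `Cleared`, so a removal listener can tell deliberate removal apart from capacity eviction.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.BiConsumer;

// invented enum mirroring the release notes' removal reasons
enum ItemRemovedReason { Evicted, Trimmed, Cleared }

// toy cache (illustrative only) that reports why each item was removed
class TrimmableCache<V> {
    private final Deque<V> queue = new ArrayDeque<>();   // head = coldest item
    private final BiConsumer<V, ItemRemovedReason> onRemoved;

    TrimmableCache(BiConsumer<V, ItemRemovedReason> onRemoved) {
        this.onRemoved = onRemoved;
    }

    void add(V value) { queue.addLast(value); }

    // remove up to itemCount of the coldest items, reported as Trimmed
    void trim(int itemCount) {
        for (int i = 0; i < itemCount && !queue.isEmpty(); i++) {
            onRemoved.accept(queue.pollFirst(), ItemRemovedReason.Trimmed);
        }
    }

    // remove everything, reported as Cleared rather than Evicted
    void clear() {
        while (!queue.isEmpty()) {
            onRemoved.accept(queue.pollFirst(), ItemRemovedReason.Cleared);
        }
    }
}
```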
Added diagnostic features to dump cache contents: