High performance, thread-safe in-memory caching primitives for .NET

## v2.5.0

- Added support for time-based expiry to `ConcurrentLfu`, matching `ConcurrentLru`. This closely follows the implementation in Java's Caffeine, using a port of Caffeine's hierarchical timer wheel to perform all operations in O(1) time. Expire after write, expire after access, and expire after using a custom `IExpiryCalculator` can be configured via `ConcurrentLfuBuilder` extension methods.
- Added `ICacheExt` and `IAsyncCacheExt` to enable client code compiled against .NET Standard to use the builder APIs and cache methods added since v2.0. These new methods are excluded from the base interfaces for .NET Standard, since adding them would be a breaking change.
- Added `Duration` convenience methods `FromHours` and `FromDays`.

Full changelog: https://github.com/bitfaster/BitFaster.Caching/compare/v2.4.1...v2.5.0
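The new expiry modes can be combined with the builder. Below is a minimal sketch: the `SessionExpiry` policy and its TTL values are illustrative, and the `WithExpireAfter`/`WithExpireAfterWrite` builder methods and three-method `IExpiryCalculator` shape are assumed from the release description, not verified against this exact version.

```csharp
using System;
using BitFaster.Caching;
using BitFaster.Caching.Lfu;

// Hypothetical policy: short TTL for empty results, longer TTL otherwise.
class SessionExpiry : IExpiryCalculator<string, string>
{
    public Duration GetExpireAfterCreate(string key, string value)
        => value.Length == 0 ? Duration.FromMinutes(1) : Duration.FromHours(1);

    // Keep the remaining time to expire unchanged on read and update.
    public Duration GetExpireAfterRead(string key, string value, Duration current) => current;

    public Duration GetExpireAfterUpdate(string key, string value, Duration current) => current;
}

class Example
{
    static void Main()
    {
        // Fixed TTL: expire entries one hour after the last write.
        ICache<string, string> writeExpiry = new ConcurrentLfuBuilder<string, string>()
            .WithCapacity(1024)
            .WithExpireAfterWrite(TimeSpan.FromHours(1))
            .Build();

        // Per-entry TTL via the custom calculator above.
        ICache<string, string> customExpiry = new ConcurrentLfuBuilder<string, string>()
            .WithCapacity(1024)
            .WithExpireAfter(new SessionExpiry())
            .Build();

        customExpiry.GetOrAdd("user:1", k => "session-data");
    }
}
```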
## v2.4.1

- Fixed `ConcurrentLfu` for add-remove-add of the same key.
- `MpscBoundedBuffer.Clear()` is now thread safe, fixing a race in `ConcurrentLfu` clear.
- Fixed `ConcurrentLru` `Count` and `IEnumerable<KeyValuePair<K,V>>` to filter out expired items when used with time-based expiry.
- Enabled `<nullable>enable</nullable>`, and annotated APIs to support null reference type static analysis.

Full changelog: https://github.com/bitfaster/BitFaster.Caching/compare/v2.4.0...v2.4.1
## v2.4.0

- Added support for time-based expiry to `ConcurrentLru`:
  - Expire after using a custom `IExpiryCalculator`. Expiry time may be set independently at creation, after a read, and after a write.
- Added `TryRemove` overloads matching `ConcurrentDictionary` for `IAsyncCache` and `AsyncAtomicFactory`, matching the implementation for `ICache` added in v2.3.0. This adds two new overloads:
  - `bool TryRemove(K key, out V value)` - enables getting the value that was removed.
  - `bool TryRemove(KeyValuePair<K, V> item)` - enables removing an item only when the key and value are the same.
- `AsyncAtomicFactory` can now be used with a plain `ConcurrentDictionary`. This is similar to storing an `AsyncLazy<T>` instead of `T`, but with the same exception propagation semantics and API as `ConcurrentDictionary.GetOrAdd`.
- `AtomicFactory` value initialization logic modified to mitigate lock convoys, based on the approach given here.
- Fixed `ConcurrentLru.Clear` to correctly handle removed items present in the internal bookkeeping data structures.

Full changelog: https://github.com/bitfaster/BitFaster.Caching/compare/v2.3.3...v2.4.0
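The `AsyncAtomicFactory` + `ConcurrentDictionary` combination can be sketched as follows. This assumes a `GetOrAddAsync` extension method in the `BitFaster.Caching.Atomic` namespace, inferred from the "same API as `ConcurrentDictionary.GetOrAdd`" description above; treat the exact name and namespace as unverified.

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;
using BitFaster.Caching.Atomic;

class Example
{
    static async Task Main()
    {
        // Each value is wrapped in an AsyncAtomicFactory, so concurrent callers
        // racing on the same key share a single factory invocation, unlike a
        // plain ConcurrentDictionary.GetOrAdd where duplicate factories can run.
        var dictionary = new ConcurrentDictionary<int, AsyncAtomicFactory<int, string>>();

        string value = await dictionary.GetOrAddAsync(1, async key =>
        {
            await Task.Delay(10); // simulate an async lookup, e.g. a database call
            return $"value{key}";
        });
    }
}
```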
## v2.3.3

- Fixed and simplified `ConcurrentLru` eviction logic, and the transition between the cold cache and warm cache eviction routines. This prevents a variety of rare 'off by one item count' situations that could needlessly evict items when the cache is within bounds.
- Fixed `ConcurrentLru.Clear()` to always clear the cache when items in the warm queue are marked as accessed.
- Optimized the `ConcurrentLfu` drain buffers logic to give ~5% better throughput (measured by the eviction throughput test).
- Cached the `ConcurrentLfu` drain buffers delegate to prevent allocating a closure when scheduling maintenance.
- `BackgroundThreadScheduler` and `ThreadPoolScheduler` now use `TaskScheduler.Default`, instead of implicitly using `TaskScheduler.Current` (fixes CA2008).
- `ScopedAsyncCache` now internally calls `ConfigureAwait(false)` when awaiting tasks (fixes CA2007).
- Fixed `ConcurrentLru` debugger display on .NET Standard.

Full changelog: https://github.com/bitfaster/BitFaster.Caching/compare/v2.3.2...v2.3.3
## v2.3.2

- Fixed a `ConcurrentLru` `NullReferenceException` when expiring and disposing null values (i.e. the cached value is a reference type, and the caller cached a null value).
- Fixed `ConcurrentLfu` handling of updates to detached nodes, caused by concurrent reads and writes. Detached nodes could be re-attached to the probation LRU, pushing out fresh items prematurely, but would eventually expire since they can no longer be accessed.

Full changelog: https://github.com/bitfaster/BitFaster.Caching/compare/v2.3.1...v2.3.2
## v2.3.1

- Pre-size the internal `ConcurrentDictionary` bucket count for `ConcurrentLru`/`ConcurrentLfu`/`ClassicLru` based on the `capacity` constructor arg. When the cache is at capacity, the `ConcurrentDictionary` will have a prime number bucket count and a load factor of 0.75.
  - Depending on capacity, either the `ConcurrentDictionary` is created with a capacity that is a prime number 33% larger than cache capacity (an initial size large enough to avoid resizing), or the initial size is estimated using a lookup table at approximately 10% of the cache capacity, such that 4 `ConcurrentDictionary` grow operations will arrive at a hash table size that is a prime number approximately 33% larger than cache capacity.
  - `SingletonCache` sets the internal `ConcurrentDictionary` capacity to the next prime number greater than the capacity constructor argument.
- Improved `Scoped` performance by changing `ReferenceCount` to use reference equality (via `object.ReferenceEquals`).
- Enabled `SkipLocalsInit` for minor performance gains.
- Optimized `AtomicFactory`/`AsyncAtomicFactory`/`ScopedAtomicFactory`/`ScopedAsyncAtomicFactory` by removing redundant reads, reducing code size.
- `ConcurrentLfu.Count` now does not lock the underlying `ConcurrentDictionary`, matching `ConcurrentLru.Count`.
- Use `CollectionsMarshal.AsSpan` to enumerate candidates within `ConcurrentLfu.Trim` on .NET6.

Full changelog: https://github.com/bitfaster/BitFaster.Caching/compare/v2.3.0...v2.3.1
## v2.3.0

- Added `TryRemove` overloads matching `ConcurrentDictionary` for `ICache` (including `WithAtomicGetOrAdd`). This adds two new overloads:
  - `bool TryRemove(K key, out V value)` - enables getting the value that was removed.
  - `bool TryRemove(KeyValuePair<K, V> item)` - enables removing an item only when the key and value are the same.
- Fixed `ConcurrentLfu.Clear()` to remove all values when using `BackgroundThreadScheduler`. Previously, values could be left behind after clear was called, due to removed items present in window/protected/probation polluting the list of candidates to remove.
- Fixed `ConcurrentLru.Clear()` to reset the isWarm flag. Now cache warmup behaves the same for a new instance of `ConcurrentLru` vs an existing instance that was full then cleared. Previously `ConcurrentLru` could have reduced capacity during warmup after calling clear, depending on the access pattern.
- `AtomicFactory` can now be used with a plain `ConcurrentDictionary`. This is similar to storing a `Lazy<T>` instead of `T`, but with the same exception propagation semantics and API as `ConcurrentDictionary.GetOrAdd`.

Full changelog: https://github.com/bitfaster/BitFaster.Caching/compare/v2.2.1...v2.3.0
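The two `TryRemove` overloads listed above can be used like this. A minimal sketch: the builder and `AddOrUpdate` usage are standard BitFaster.Caching API, while the key names and values are illustrative.

```csharp
using System.Collections.Generic;
using BitFaster.Caching;
using BitFaster.Caching.Lru;

class Example
{
    static void Main()
    {
        ICache<string, int> cache = new ConcurrentLruBuilder<string, int>()
            .WithCapacity(128)
            .Build();

        cache.AddOrUpdate("a", 1);

        // Overload 1: remove and observe the value that was removed.
        if (cache.TryRemove("a", out int removed))
        {
            // removed holds the value that was stored for "a"
        }

        cache.AddOrUpdate("b", 2);

        // Overload 2: remove only if both key and value match,
        // e.g. to evict a known-stale value without racing a concurrent update.
        bool staleRemoved = cache.TryRemove(new KeyValuePair<string, int>("b", 99)); // value differs, no removal
        bool exactRemoved = cache.TryRemove(new KeyValuePair<string, int>("b", 2));  // key and value match
    }
}
```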
## v2.2.1

- Fixed a `ConcurrentLru` bug where a repeated pattern of sequential key access could lead to unbounded growth.
- Optimized `MpscBoundedBuffer`/`StripedMpscBuffer`/`ConcurrentLfu` on the .NET6/.NETCore3.1 build targets. Reduces `ConcurrentLfu` lookup latency by about 5-7% in the lookup benchmark.

Full changelog: https://github.com/bitfaster/BitFaster.Caching/compare/v2.2.0...v2.2.1
## v2.2.0

- Added overloads of `ICache.GetOrAdd` enabling the value factory delegate to accept an input argument: `TValue GetOrAdd<TArg>(TKey key, Func<TKey,TArg,TValue> valueFactory, TArg factoryArgument)`. Passing a `CancellationToken` into an async value factory delegate is a common use case.
- Added equivalent overloads for `IAsyncCache`, `IScopedCache` and `IAsyncScopedCache`.
- Added `IValueFactory` and `IAsyncValueFactory` value types.

Full changelog: https://github.com/bitfaster/BitFaster.Caching/compare/v2.1.3...v2.2.0
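The `CancellationToken` use case mentioned above can be sketched for `IAsyncCache` as follows. The `GetOrAddAsync<TArg>` signature mirrors the synchronous overload listed in the release notes, and `FetchAsync` is a stand-in for the caller's own async lookup.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using BitFaster.Caching;
using BitFaster.Caching.Lru;

class Example
{
    static async Task Main()
    {
        IAsyncCache<string, string> cache = new ConcurrentLruBuilder<string, string>()
            .WithCapacity(128)
            .AsAsyncCache()
            .Build();

        using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5));

        // The token is threaded through as the factory argument, so the
        // static lambda captures nothing and no closure is allocated.
        string value = await cache.GetOrAddAsync(
            "key",
            static (key, token) => FetchAsync(key, token),
            cts.Token);
    }

    // Stand-in for a real async data source (database, HTTP call, etc.).
    static Task<string> FetchAsync(string key, CancellationToken token)
        => Task.FromResult($"value for {key}");
}
```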
## v2.1.3

Full changelog: https://github.com/bitfaster/BitFaster.Caching/compare/v2.1.2...v2.1.3