public class LFUCache<K,V>
A "Least Frequently Used" cache. When the cache grows too large, we discard
entries whose reference counters are below some threshold. This is cheaper
than an LRU (least recently used) algorithm, because access to the map
doesn't require any locking against concurrent activity.
Unlike classic LFU algorithms where the cache size has a fixed upper bound,
we allow it to grow beyond the target size and then do a bulk deletion of
infrequently used entries. We delete all entries whose usage counter is below
some threshold, learning the best threshold to apply from experience of previous
garbage collections.
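By way of illustration only, a bulk purge along these lines might look like the
following sketch. The Entry type, the field names, and the threshold-adjustment
heuristic are assumptions made for the example, not this class's actual code.

    import java.util.concurrent.ConcurrentHashMap;

    final class PurgeSketch {
        // Hypothetical entry: a cached value plus its approximate use counter.
        static final class Entry<T> {
            final T value;
            int uses;                      // incremented without synchronization
            Entry(T value) { this.value = value; }
        }

        // Remove every entry used fewer than 'threshold' times, then return a
        // threshold adapted from how effective this pass was (assumed heuristic,
        // standing in for "learning from experience").
        static <K, V> int purge(ConcurrentHashMap<K, Entry<V>> map,
                                int threshold, int targetSize) {
            map.values().removeIf(e -> e.uses < threshold);
            return map.size() > targetSize ? threshold + 1 : Math.max(1, threshold - 1);
        }
    }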
There is no synchronization. The underlying map itself is thread-safe, but we
make no attempt to prevent two concurrent threads garbage collecting at the same
time: if this happens, no harm is caused, because it's only a cache and lost
entries will simply be recomputed. Equally, the incrementing of counters isn't
synchronized because they don't need to be exact.
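As a hedged sketch of that approach (the class and field names are assumptions
for illustration): the only thread-safety comes from the map itself, and the
counters are bumped with plain, non-atomic increments.

    import java.util.concurrent.ConcurrentHashMap;

    final class CountingGetSketch<K, V> {
        // Hypothetical holder pairing a value with its approximate use counter.
        static final class Entry<T> {
            final T value;
            int uses;
            Entry(T value) { this.value = value; }
        }

        private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<>();

        V get(K key) {
            Entry<V> entry = map.get(key);  // thread-safe map access, no locking
            if (entry == null) {
                return null;                // a miss; the caller simply recomputes
            }
            entry.uses++;                   // racy increment: counts need not be exact
            return entry.value;
        }
    }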
Adds an entry to this cache. It is assumed that the caller has
already checked that the cache doesn't already contain a suitable
entry, so we treat each entry as a new one.
If the cache exceeds a threshold size of three times the target
size, it is rebuilt with infrequently used entries purged.
key - the key with which the specified value is to be associated.
value - a value to be associated with the specified key.
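A minimal sketch of the behaviour described, with hypothetical names (the
constructor, the fields, and the purge stub are assumptions for illustration):

    import java.util.concurrent.ConcurrentHashMap;

    final class PutSketch<K, V> {
        private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();
        private final int targetSize;

        PutSketch(int targetSize) { this.targetSize = targetSize; }

        // Each entry is treated as new: no check for an existing mapping.
        void put(K key, V value) {
            map.put(key, value);
            // Once the map reaches three times the target size, rebuild it with
            // infrequently used entries purged (see the purge sketch above).
            if (map.size() > 3 * targetSize) {
                purgeInfrequentlyUsed();
            }
        }

        private void purgeInfrequentlyUsed() {
            // Placeholder for the bulk deletion described earlier.
        }
    }

Triggering the purge from put keeps read access entirely lock-free, which is the
point of the design described above.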