This is round two.
The first round was what I came up with; then I re-read the comments, with the domain a bit more ingrained in my head.
So here is the simplest version, with a unit test that shows it works, based on some other versions.
First the non-concurrent version:
import java.util.LinkedHashMap;
import java.util.Map;

public class LruSimpleCache<K, V> implements LruCache<K, V> {

    Map<K, V> map;

    public LruSimpleCache ( final int limit ) {
        map = new LinkedHashMap<K, V> ( 16, 0.75f, true ) {
            @Override
            protected boolean removeEldestEntry ( final Map.Entry<K, V> eldest ) {
                return super.size () > limit;
            }
        };
    }

    @Override
    public void put ( K key, V value ) {
        map.put ( key, value );
    }

    @Override
    public V get ( K key ) {
        return map.get ( key );
    }

    /* For testing only. Note that on an access-order LinkedHashMap the get()
       below already promotes the entry, and the remove/put re-inserts it at
       the most-recent position, so this getSilent does reorder entries. The
       tests only depend on the relative order of consecutive getSilent calls,
       which is preserved. */
    @Override
    public V getSilent ( K key ) {
        V value = map.get ( key );
        if ( value != null ) {
            map.remove ( key );
            map.put ( key, value );
        }
        return value;
    }

    @Override
    public void remove ( K key ) {
        map.remove ( key );
    }

    @Override
    public int size () {
        return map.size ();
    }

    public String toString () {
        return map.toString ();
    }
}
The true flag tracks the access order of gets and puts. See the JavaDocs. Without the true flag passed to the constructor, removeEldestEntry would just implement a FIFO cache (see the notes below on FIFO and removeEldestEntry).
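For reference, here is a minimal sketch contrasting the two constructor modes (the class name OrderModeDemo is mine, for illustration only):

import java.util.LinkedHashMap;
import java.util.Map;

public class OrderModeDemo {
    public static void main ( String[] args ) {
        // accessOrder = true: get() moves an entry to the back, so the
        // "eldest" entry is the least-recently used one.
        Map<Integer, Integer> lru = new LinkedHashMap<> ( 16, 0.75f, true );
        lru.put ( 1, 1 ); lru.put ( 2, 2 ); lru.put ( 3, 3 );
        lru.get ( 1 );                          // promotes 1
        System.out.println ( lru.keySet () );   // prints [2, 3, 1]

        // Default constructor: insertion order. get() changes nothing, so
        // removeEldestEntry would evict in pure FIFO order.
        Map<Integer, Integer> fifo = new LinkedHashMap<> ();
        fifo.put ( 1, 1 ); fifo.put ( 2, 2 ); fifo.put ( 3, 3 );
        fifo.get ( 1 );
        System.out.println ( fifo.keySet () );  // prints [1, 2, 3]
    }
}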
Here is the test that proves it works as an LRU cache:
public class LruSimpleTest {

    // die is the test helper used throughout this post: it fails the test
    // with the given message.
    @Test
    public void test () {
        LruCache<Integer, Integer> cache = new LruSimpleCache<> ( 4 );
        cache.put ( 0, 0 );
        cache.put ( 1, 1 );
        cache.put ( 2, 2 );
        cache.put ( 3, 3 );
        boolean ok = cache.size () == 4 || die ( "size" + cache.size () );

        cache.put ( 4, 4 );
        cache.put ( 5, 5 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == 4 || die ();
        ok |= cache.getSilent ( 5 ) == 5 || die ();

        cache.get ( 2 );
        cache.get ( 3 );
        cache.put ( 6, 6 );
        cache.put ( 7, 7 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == null || die ();
        ok |= cache.getSilent ( 5 ) == null || die ();

        if ( !ok ) die ();
    }
}
Now for the concurrent version...

package org.boon.cache;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LruSimpleConcurrentCache<K, V> implements LruCache<K, V> {

    final CacheMap<K, V>[] cacheRegions;

    private static class CacheMap<K, V> extends LinkedHashMap<K, V> {
        private final ReadWriteLock readWriteLock;
        private final int limit;

        CacheMap ( final int limit, boolean fair ) {
            super ( 16, 0.75f, true );   // access order, so eldest == least-recently used
            this.limit = limit;
            readWriteLock = new ReentrantReadWriteLock ( fair );
        }

        @Override
        protected boolean removeEldestEntry ( final Map.Entry<K, V> eldest ) {
            return super.size () > limit;
        }

        @Override
        public V put ( K key, V value ) {
            readWriteLock.writeLock ().lock ();
            V old;
            try {
                old = super.put ( key, value );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return old;
        }

        /* get takes the write lock, not the read lock, because an access-order
           LinkedHashMap mutates its internal links on every get. */
        @Override
        public V get ( Object key ) {
            readWriteLock.writeLock ().lock ();
            V value;
            try {
                value = super.get ( key );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;
        }

        @Override
        public V remove ( Object key ) {
            readWriteLock.writeLock ().lock ();
            V value;
            try {
                value = super.remove ( key );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;
        }

        /* The write lock is reentrant, so the nested get/remove/put calls
           below can re-acquire it safely. */
        public V getSilent ( K key ) {
            readWriteLock.writeLock ().lock ();
            V value;
            try {
                value = this.get ( key );
                if ( value != null ) {
                    this.remove ( key );
                    this.put ( key, value );
                }
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;
        }

        public int size () {
            readWriteLock.readLock ().lock ();
            int size = -1;
            try {
                size = super.size ();
            } finally {
                readWriteLock.readLock ().unlock ();
            }
            return size;
        }

        public String toString () {
            readWriteLock.readLock ().lock ();
            String str;
            try {
                str = super.toString ();
            } finally {
                readWriteLock.readLock ().unlock ();
            }
            return str;
        }
    }

    @SuppressWarnings ( "unchecked" )
    public LruSimpleConcurrentCache ( final int limit, boolean fair ) {
        int cores = Runtime.getRuntime ().availableProcessors ();
        int stripeSize = cores < 2 ? 4 : cores * 2;
        cacheRegions = new CacheMap[ stripeSize ];
        for ( int index = 0; index < cacheRegions.length; index++ ) {
            // each stripe gets an equal share of the overall limit
            cacheRegions[ index ] = new CacheMap<> ( limit / cacheRegions.length, fair );
        }
    }

    @SuppressWarnings ( "unchecked" )
    public LruSimpleConcurrentCache ( final int concurrency, final int limit, boolean fair ) {
        cacheRegions = new CacheMap[ concurrency ];
        for ( int index = 0; index < cacheRegions.length; index++ ) {
            cacheRegions[ index ] = new CacheMap<> ( limit / cacheRegions.length, fair );
        }
    }

    private int stripeIndex ( K key ) {
        int hashCode = key.hashCode () * 31;
        /* Mask off the sign bit so a negative hashCode cannot produce a
           negative array index. */
        return ( hashCode & 0x7FFFFFFF ) % cacheRegions.length;
    }

    private CacheMap<K, V> map ( K key ) {
        return cacheRegions[ stripeIndex ( key ) ];
    }

    @Override
    public void put ( K key, V value ) {
        map ( key ).put ( key, value );
    }

    @Override
    public V get ( K key ) {
        return map ( key ).get ( key );
    }

    //For testing only
    @Override
    public V getSilent ( K key ) {
        return map ( key ).getSilent ( key );
    }

    @Override
    public void remove ( K key ) {
        map ( key ).remove ( key );
    }

    @Override
    public int size () {
        int size = 0;
        for ( CacheMap<K, V> cache : cacheRegions ) {
            size += cache.size ();
        }
        return size;
    }

    public String toString () {
        StringBuilder builder = new StringBuilder ();
        for ( CacheMap<K, V> cache : cacheRegions ) {
            builder.append ( cache.toString () ).append ( '\n' );
        }
        return builder.toString ();
    }
}
You can see why I covered the non-concurrent version first. The version above creates a number of stripes to reduce lock contention: we hash the key, then use that hash to pick the actual backing cache. Because each stripe enforces its own share of the limit, the total limit is more of a suggestion/rough guess, within a fair margin of error that depends on how well spread your keys' hash algorithm is.
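To make that "rough guess" concrete, here is a small hypothetical sketch (StripeSkewDemo and its key strings are mine, not part of the cache) that counts how 1,000 string keys land across 8 stripes using the same index function; each stripe evicts independently at limit / stripes entries, so any unevenness here skews the effective total limit:

import java.util.Arrays;

public class StripeSkewDemo {
    public static void main ( String[] args ) {
        int stripes = 8;
        int[] counts = new int[ stripes ];
        // Same stripe function as the cache above: hash, spread, mask, mod.
        for ( int i = 0; i < 1_000; i++ ) {
            int hashCode = ( "user-" + i ).hashCode () * 31;
            counts[ ( hashCode & 0x7FFFFFFF ) % stripes ]++;
        }
        // Roughly 125 per stripe for a well-spread hash; a lopsided key
        // distribution would evict some stripes early while others sit idle.
        System.out.println ( Arrays.toString ( counts ) );
    }
}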
Here is the test to show that the concurrent version probably works. :) (Testing under fire would be the real way.)
public class SimpleConcurrentLRUCache {

    @Test
    public void test () {
        LruCache<Integer, Integer> cache = new LruSimpleConcurrentCache<> ( 1, 4, false );
        cache.put ( 0, 0 );
        cache.put ( 1, 1 );
        cache.put ( 2, 2 );
        cache.put ( 3, 3 );
        boolean ok = cache.size () == 4 || die ( "size" + cache.size () );

        cache.put ( 4, 4 );
        cache.put ( 5, 5 );
        puts ( cache );   // puts prints its argument (a helper from the same test harness as die)
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == 4 || die ();
        ok |= cache.getSilent ( 5 ) == 5 || die ();

        cache.get ( 2 );
        cache.get ( 3 );
        cache.put ( 6, 6 );
        cache.put ( 7, 7 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();

        cache.put ( 8, 8 );
        cache.put ( 9, 9 );
        ok |= cache.getSilent ( 4 ) == null || die ();
        ok |= cache.getSilent ( 5 ) == null || die ();

        puts ( cache );
        if ( !ok ) die ();
    }

    @Test
    public void test2 () {
        LruCache<Integer, Integer> cache = new LruSimpleConcurrentCache<> ( 400, false );
        cache.put ( 0, 0 );
        cache.put ( 1, 1 );
        cache.put ( 2, 2 );
        cache.put ( 3, 3 );

        for ( int index = 0; index < 5_000; index++ ) {
            cache.get ( 0 );
            cache.get ( 1 );
            cache.put ( 2, index );
            cache.put ( 3, index );
            cache.put ( index, index );
        }

        boolean ok = cache.getSilent ( 0 ) == 0 || die ();
        ok |= cache.getSilent ( 1 ) == 1 || die ();
        ok |= cache.getSilent ( 2 ) != null || die ();
        ok |= cache.getSilent ( 3 ) != null || die ();
        ok |= cache.size () < 600 || die ();

        if ( !ok ) die ();
    }
}
This is the last post. The first post, which I deleted, was an LFU, not an LRU, cache.
I thought I would give this another go. I was trying to come up with the simplest version of an LRU cache using just the standard JDK, without too much implementation.
Here is what I came up with. My first attempt was a disaster, as I implemented an LFU instead of an LRU, and then I added FIFO and LRU support... and then I realized it was becoming a monster. Then I started talking to my buddy John, who was barely interested, and I described at great length how I had implemented an LFU, an LRU, and a FIFO and how you could switch between them with a simple enum arg, and then I realized that all I really wanted was a simple LRU. So ignore my earlier post, and let me know if you want to see an LRU/LFU/FIFO cache that is switchable via an enum... no? Ok.. here we go.
The simplest possible LRU using just the JDK. I implemented both a concurrent version and a non-concurrent version.
I created a common interface (it is minimalist, so it is likely missing a few features you would like, but it works for my use cases; if you would like to see feature XYZ, let me know... I live to write code.).
public interface LruCache<KEY, VALUE> {
    void put ( KEY key, VALUE value );
    VALUE get ( KEY key );
    VALUE getSilent ( KEY key );
    void remove ( KEY key );
    int size ();
}
You may wonder what getSilent is. I use it for testing. getSilent does not change the LRU score of an item.
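A minimal sketch of the intent, using the queue-backed LruCacheNormal defined below (the assertions are mine): peeking with getSilent does not promote an entry, so it can still age out, whereas get would have saved it.

LruCache<Integer, Integer> cache = new LruCacheNormal<> ( 2 );
cache.put ( 1, 1 );
cache.put ( 2, 2 );
cache.getSilent ( 1 );   // peek only; 1 stays least-recently used
cache.put ( 3, 3 );      // evicts 1
boolean ok = cache.getSilent ( 1 ) == null || die ();
// A get ( 1 ) instead of the getSilent above would have promoted 1,
// and put ( 3, 3 ) would have evicted 2 instead.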
First the non-concurrent one...
import java.util.Deque;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;

public class LruCacheNormal<KEY, VALUE> implements LruCache<KEY, VALUE> {

    Map<KEY, VALUE> map = new HashMap<> ();
    Deque<KEY> queue = new LinkedList<> ();
    final int limit;

    public LruCacheNormal ( int limit ) {
        this.limit = limit;
    }

    public void put ( KEY key, VALUE value ) {
        VALUE oldValue = map.put ( key, value );

        /* If there was already an object under this key,
           then remove it before adding to the queue.
           Frequently used keys will be at the top so the search could be fast.
         */
        if ( oldValue != null ) {
            queue.removeFirstOccurrence ( key );
        }
        queue.addFirst ( key );

        if ( map.size () > limit ) {
            final KEY removedKey = queue.removeLast ();
            map.remove ( removedKey );
        }
    }

    public VALUE get ( KEY key ) {
        /* Frequently used keys will be at the top so the search could be fast.
           Only promote keys that are actually queued; otherwise a get on a
           missing key would leave a phantom key in the queue with no matching
           map entry. */
        if ( queue.removeFirstOccurrence ( key ) ) {
            queue.addFirst ( key );
        }
        return map.get ( key );
    }

    public VALUE getSilent ( KEY key ) {
        return map.get ( key );
    }

    public void remove ( KEY key ) {
        /* Frequently used keys will be at the top so the search could be fast.*/
        queue.removeFirstOccurrence ( key );
        map.remove ( key );
    }

    public int size () {
        return map.size ();
    }

    public String toString () {
        return map.toString ();
    }
}
The queue.removeFirstOccurrence is a potentially expensive operation if you have a large cache. One could take LinkedList as a starting point and add a reverse-lookup hash map from element to node to make remove operations a lot faster and more consistent (there is a sketch of this idea near the end of this post). I started on that too, but then realized I don't need it. But... maybe...
When put is called, the key gets added to the queue. When get is called, the key gets removed and re-added to the top of the queue.
If your cache is small and building an item is expensive, then this should be a good cache. If your cache is really large, then the linear search could be a bottleneck, especially if you don't have hot areas of the cache. The more intense the hot spots, the faster the linear search, as hot items are always at the top of it. Anyway... what is needed to make this go faster is another LinkedList with a remove operation that does a reverse element-to-node lookup; then removing would be about as fast as removing a key from a hash map.
If your cache is under 1,000 items, this should work out fine.
Here is a simple test to show it in action.
public class LruCacheTest {

    @Test
    public void test () {
        LruCache<Integer, Integer> cache = new LruCacheNormal<> ( 4 );
        cache.put ( 0, 0 );
        cache.put ( 1, 1 );
        cache.put ( 2, 2 );
        cache.put ( 3, 3 );
        boolean ok = cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 0 ) == 0 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();

        cache.put ( 4, 4 );
        cache.put ( 5, 5 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 0 ) == null || die ();
        ok |= cache.getSilent ( 1 ) == null || die ();
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == 4 || die ();
        ok |= cache.getSilent ( 5 ) == 5 || die ();

        if ( !ok ) die ();
    }
}
The last LRU cache above is single threaded, so please don't wrap it in anything synchronized....
Here is a stab at a concurrent version.
import java.util.Deque;
import java.util.LinkedList;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class ConcurrentLruCache<KEY, VALUE> implements LruCache<KEY, VALUE> {

    private final ReentrantLock lock = new ReentrantLock ();
    private final Map<KEY, VALUE> map = new ConcurrentHashMap<> ();
    private final Deque<KEY> queue = new LinkedList<> ();
    private final int limit;

    public ConcurrentLruCache ( int limit ) {
        this.limit = limit;
    }

    @Override
    public void put ( KEY key, VALUE value ) {
        VALUE oldValue = map.put ( key, value );
        if ( oldValue != null ) {
            removeThenAddKey ( key );
        } else {
            addKey ( key );
        }
        /* The size check and the eviction below are not atomic with the put
           above, so under contention the map can briefly exceed the limit. */
        if ( map.size () > limit ) {
            map.remove ( removeLast () );
        }
    }

    @Override
    public VALUE get ( KEY key ) {
        removeThenAddKey ( key );
        return map.get ( key );
    }

    private void addKey ( KEY key ) {
        lock.lock ();
        try {
            queue.addFirst ( key );
        } finally {
            lock.unlock ();
        }
    }

    private KEY removeLast () {
        lock.lock ();
        try {
            return queue.removeLast ();
        } finally {
            lock.unlock ();
        }
    }

    private void removeThenAddKey ( KEY key ) {
        lock.lock ();
        try {
            /* Only promote keys that are actually queued; otherwise a get on
               a missing key would leave a phantom key in the queue. */
            if ( queue.removeFirstOccurrence ( key ) ) {
                queue.addFirst ( key );
            }
        } finally {
            lock.unlock ();
        }
    }

    private void removeFirstOccurrence ( KEY key ) {
        lock.lock ();
        try {
            queue.removeFirstOccurrence ( key );
        } finally {
            lock.unlock ();
        }
    }

    @Override
    public VALUE getSilent ( KEY key ) {
        return map.get ( key );
    }

    @Override
    public void remove ( KEY key ) {
        removeFirstOccurrence ( key );
        map.remove ( key );
    }

    @Override
    public int size () {
        return map.size ();
    }

    public String toString () {
        return map.toString ();
    }
}
The main differences are the use of ConcurrentHashMap instead of HashMap, and the use of the Lock (I could have gotten away with synchronized, but...).
I have not tested it under fire yet, but it seems like a simple LRU cache that might work out in 80% of the use cases where you need a simple LRU map.
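A hypothetical usage sketch (ConcurrentLruCacheDemo and its key names are mine): several threads hammer one shared instance with a small hot set, and the size stays near the limit.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentLruCacheDemo {
    public static void main ( String[] args ) throws InterruptedException {
        ConcurrentLruCache<String, Integer> cache = new ConcurrentLruCache<> ( 1_000 );
        ExecutorService pool = Executors.newFixedThreadPool ( 4 );
        for ( int t = 0; t < 4; t++ ) {
            pool.submit ( () -> {
                for ( int i = 0; i < 10_000; i++ ) {
                    String key = "item-" + ( i % 128 );   // small hot set, mostly hits
                    if ( cache.get ( key ) == null ) {
                        cache.put ( key, i );             // build on miss
                    }
                }
            } );
        }
        pool.shutdown ();
        pool.awaitTermination ( 1, TimeUnit.MINUTES );
        System.out.println ( "size after load: " + cache.size () );  // at most ~1,000
    }
}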
I welcome feedback, other than "why don't you use library a, b, or c?". The reason I don't always use a library is that I don't always want every war file to be 80MB, and since I write libraries, I tend to make them plug-able with a good-enough solution in place so that someone can plug in another cache provider if they like. :) I never know when someone might need Guava or ehcache or something else I don't want to include, but if I make caching plug-able, I will not exclude them either.
Reducing dependencies has its own reward. I'd love some feedback on how to make this even simpler, or faster, or both.
Also, if anyone knows of one that is ready to go....
Ok.. I know what you are thinking... why doesn't he just use removeEldestEntry from LinkedHashMap? Well, I should, but.... but.. but.. with the default constructor that would be a FIFO, not an LRU (the access-ordered version at the top of this post is the true LRU), and we are trying to implement an LRU.
Map<KEY, VALUE> map = new LinkedHashMap<KEY, VALUE> () {
    @Override
    protected boolean removeEldestEntry ( Map.Entry<KEY, VALUE> eldest ) {
        return this.size () > limit;
    }
};
The following test fails for the above code, because insertion order, not access, determines the eldest entry: keys 2 and 3 were inserted earliest, so they get evicted even though they were just accessed...
cache.get ( 2 );
cache.get ( 3 );
cache.put ( 6, 6 );
cache.put ( 7, 7 );
ok |= cache.size () == 4 || die ( "size" + cache.size () );
ok |= cache.getSilent ( 2 ) == 2 || die ();
ok |= cache.getSilent ( 3 ) == 3 || die ();
ok |= cache.getSilent ( 4 ) == null || die ();
ok |= cache.getSilent ( 5 ) == null || die ();
So here is a quick and dirty FIFO cache using removeEldestEntry.
import java.util.*;
public class FifoCache<KEY, VALUE> implements LruCache<KEY, VALUE> {

    final int limit;

    Map<KEY, VALUE> map = new LinkedHashMap<KEY, VALUE> () {
        @Override
        protected boolean removeEldestEntry ( Map.Entry<KEY, VALUE> eldest ) {
            return this.size () > limit;
        }
    };

    public FifoCache ( int limit ) {
        this.limit = limit;
    }

    public void put ( KEY key, VALUE value ) {
        map.put ( key, value );
    }

    public VALUE get ( KEY key ) {
        return map.get ( key );
    }

    public VALUE getSilent ( KEY key ) {
        return map.get ( key );
    }

    public void remove ( KEY key ) {
        map.remove ( key );
    }

    public int size () {
        return map.size ();
    }

    public String toString () {
        return map.toString ();
    }
}
FIFOs are fast. There is no searching around. You could front an LRU with a FIFO, and that would handle most hot entries quite nicely. A better LRU is going to need that reverse element-to-node capability.
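Since that reverse element-to-node idea came up a few times, here is a minimal sketch of it (the class LruNodeCache is mine, an illustration rather than tested code): a HashMap from key to doubly-linked node makes promotion and eviction O(1), with no linear search.

import java.util.HashMap;
import java.util.Map;

public class LruNodeCache<KEY, VALUE> implements LruCache<KEY, VALUE> {

    private static final class Node<K, V> {
        K key;
        V value;
        Node<K, V> prev, next;

        Node ( K key, V value ) { this.key = key; this.value = value; }
    }

    private final Map<KEY, Node<KEY, VALUE>> map = new HashMap<> ();
    private Node<KEY, VALUE> head;   // most-recently used
    private Node<KEY, VALUE> tail;   // least-recently used
    private final int limit;

    public LruNodeCache ( int limit ) { this.limit = limit; }

    @Override
    public void put ( KEY key, VALUE value ) {
        Node<KEY, VALUE> node = map.get ( key );
        if ( node != null ) {
            node.value = value;
            moveToHead ( node );
            return;
        }
        node = new Node<> ( key, value );
        map.put ( key, node );
        linkAtHead ( node );
        if ( map.size () > limit ) {       // evict the least-recently used
            Node<KEY, VALUE> eldest = tail;
            map.remove ( eldest.key );
            unlink ( eldest );
        }
    }

    @Override
    public VALUE get ( KEY key ) {
        Node<KEY, VALUE> node = map.get ( key );
        if ( node == null ) return null;
        moveToHead ( node );               // O(1) promotion, no linear search
        return node.value;
    }

    @Override
    public VALUE getSilent ( KEY key ) {   // peek without touching LRU order
        Node<KEY, VALUE> node = map.get ( key );
        return node == null ? null : node.value;
    }

    @Override
    public void remove ( KEY key ) {
        Node<KEY, VALUE> node = map.remove ( key );
        if ( node != null ) unlink ( node );
    }

    @Override
    public int size () { return map.size (); }

    private void linkAtHead ( Node<KEY, VALUE> node ) {
        node.prev = null;
        node.next = head;
        if ( head != null ) head.prev = node;
        head = node;
        if ( tail == null ) tail = node;
    }

    private void unlink ( Node<KEY, VALUE> node ) {
        if ( node.prev != null ) node.prev.next = node.next; else head = node.next;
        if ( node.next != null ) node.next.prev = node.prev; else tail = node.prev;
        node.prev = node.next = null;
    }

    private void moveToHead ( Node<KEY, VALUE> node ) {
        unlink ( node );
        linkAtHead ( node );
    }
}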
Anyway... now that I've written some code, let me go through the other answers and see what I missed... the first time I only scanned them.
An O(1) version, as requested: stackoverflow.com/questions/23772102/...