
LIRS: An Efficient Low Inter-reference Recency Set Replacement Policy to
Improve Buffer Cache Performance

Song Jiang and Xiaodong Zhang

Proceedings of the 2002 ACM SIGMETRICS Conference on Measurement and
Modeling of Computer Systems (SIGMETRICS'02),
Marina Del Rey, California, June 15-19, 2002.



                               Abstract

Although the LRU replacement policy has been commonly used in buffer cache
management, it is well known for its inability to cope with access patterns
of weak locality.  Previous work, such as LRU-K and 2Q, attempts to enhance
the capability of LRU by making use of additional history information about
block references beyond the recency information used by LRU.  These
algorithms greatly increase complexity and/or cannot consistently provide
performance improvements.  Many recently proposed policies, such as UBM and
SEQ, improve replacement performance by exploiting access regularities in
references.  Depending on whether certain regularities are detected, these
schemes switch back and forth between LRU and other policies such as MRU
(Most Recently Used).  They address the problems of LRU only in certain
specific and well-defined cases, such as sequences and loops, and their
performance improvement relies on the appearance of the expected and
detectable reference regularities.  Motivated by the limits of these
previous studies, we propose an efficient buffer cache replacement policy,
called Low Inter-reference Recency Set (LIRS).  LIRS fundamentally addresses
the limits of LRU by using recency to evaluate Inter-Reference Recency (IRR)
when making a replacement decision, in contrast to LRU, which uses recency
directly to predict the timing of the next reference.  At the same time,
LIRS retains almost the same simple assumption as LRU for predicting the
future access behavior of blocks.  Our objectives are to effectively address
the limits of LRU in the general sense, to retain the low-overhead merit of
LRU, and to outperform those replacement policies that rely on the detection
of access regularities.
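
To make the contrast between the two metrics concrete, the following is a
minimal Python sketch (our own illustration, not code from the paper) of how
recency and Inter-Reference Recency can be computed from a reference history.
The full LIRS algorithm does not recompute these values on every access; it
maintains low-IRR and high-IRR block sets incrementally with a stack and a
small queue.

    # A toy illustration (not the paper's implementation) of the two metrics:
    # "recency" is the number of distinct other blocks referenced since the
    # last reference to a block (what LRU orders by); "IRR" is the number of
    # distinct other blocks referenced between its last two references.

    def recency(history, block):
        last = max(i for i, b in enumerate(history) if b == block)
        return len(set(history[last + 1:]))

    def irr(history, block):
        refs = [i for i, b in enumerate(history) if b == block]
        if len(refs) < 2:
            return float('inf')          # referenced only once so far
        return len(set(history[refs[-2] + 1:refs[-1]]))

    # Block 'A' is re-referenced regularly (low IRR); block 'D' was touched
    # once, most recently (recency 0, infinite IRR).  Under memory pressure,
    # LRU would keep D and evict A, while an IRR-based choice keeps A, the
    # block with the stronger locality.
    history = list("ABCABCABCD")
    for blk in ("A", "D"):
        print(blk, "recency =", recency(history, blk), "IRR =", irr(history, blk))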

Conducting simulations with a variety of traces and a wide range of cache
sizes, we show that LIRS significantly outperforms LRU and outperforms
other existing replacement algorithms in most cases.  Furthermore, we show
that the additional cost of implementing LIRS is trivial compared with that
of LRU.