With the rapid advancement of processor and networking technology, and with the falling prices of memory and disks, computing resources such as CPU cycles, bandwidth at all levels of interconnection (memory, I/O, and the Internet), and large-capacity memory and disks are increasingly plentiful for building all kinds of systems, from the low end to the high end. Unfortunately, the improvement of data access latency, particularly the latency of disk accesses, has lagged far behind. The speed gap between data processing in the CPU and data access on disk has reached an intolerable level and will only widen over time. This bottleneck has seriously hindered the development of scalable computing systems for data-intensive applications that demand fast access to huge amounts of data. The only viable solution to this problem is to build large memory buffers that cache data for reuse, exploiting the low price and large capacity of DRAM, and to prefetch data for predicted future use, exploiting the high and often idle bandwidth of networks.
We are conducting research to address several pressing system problems. First, existing memory buffer management lacks efficient mechanisms to learn and adapt to certain access patterns and locality behaviors of applications, causing low buffer hit ratios for some important applications. Second, for the same amount of data, sequential disk accesses are several orders of magnitude faster than random accesses; unfortunately, the effort of organizing sequential disk accesses has been largely ignored in memory buffer management. Finally, memory buffer management in operating systems has little knowledge of the data layout on disks or of their physical configuration. Exposing disk-level information to the operating system and exploiting it would significantly improve the efficiency of buffer management at the memory level.
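The first problem can be made concrete with a small, hypothetical example (not code from our research): a pure LRU buffer that can hold 80% of a cyclically scanned working set achieves a 0% hit ratio, because LRU always evicts exactly the block that the looping pattern will request next. The class and workload below are illustrative assumptions, not part of any real buffer manager.

```python
from collections import OrderedDict

class LRUBuffer:
    """Minimal LRU buffer cache that tracks its own hit ratio (illustrative only)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()   # insertion order doubles as recency order
        self.hits = 0
        self.accesses = 0

    def access(self, block):
        self.accesses += 1
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)        # mark as most recently used
        else:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)   # evict least recently used
            self.cache[block] = True

    def hit_ratio(self):
        return self.hits / self.accesses if self.accesses else 0.0

# A cyclic scan over 100 blocks through an 80-block LRU buffer:
# every access misses, even though 80% of the working set fits in the buffer.
buf = LRUBuffer(80)
for _ in range(10):
    for block in range(100):
        buf.access(block)
print(buf.hit_ratio())  # 0.0: LRU fails to adapt to the looping access pattern
```

A pattern-aware policy that detects the loop and pins a fixed 80-block subset would instead hit on 80% of the accesses after warm-up, which is the kind of adaptation the first problem calls for.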
Technology Transfers and Impact Based on Our Research