LLAMA: A cache/storage subsystem for modern hardware

J Levandoski, D Lomet, S Sengupta - Proceedings of the International Conference on Very Large Databases, VLDB …, 2013 - microsoft.com
Abstract
LLAMA is a subsystem designed for new hardware environments that supports an API for page-oriented access methods, providing both cache and storage management. The caching (CL) and storage (SL) layers use a common mapping table that separates a page's logical and physical location. CL supports data updates and management updates (e.g., for index re-organization) via latch-free compare-and-swap atomic state changes on its mapping table. SL uses the same mapping table to cope with the page location changes produced by log structuring on every page flush. To demonstrate LLAMA's suitability, we tailored our latch-free Bw-tree implementation to use LLAMA. The Bw-tree is a B-tree style index. Layered on LLAMA, it achieves higher performance and scalability on real workloads than BerkeleyDB's B-tree, which is known for good performance.
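The mechanism the abstract centers on is the shared mapping table: a logical page ID indexes an atomic pointer to the page's current physical state, and both the cache and storage layers change that state by swinging a single pointer with compare-and-swap rather than taking latches. The following is a minimal C++ sketch under that description, not the authors' implementation; the names MappingTable, PageState, and install are assumptions introduced for illustration.

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>
#include <memory>

// Hypothetical page state: in a LLAMA/Bw-tree-style design this would be a
// base page or a delta record chained onto the previous state.
struct PageState {
    const PageState* next = nullptr;  // previous state in the delta chain
    // ... payload omitted ...
};

// Minimal mapping table sketch: logical page IDs index an array of atomic
// pointers to physical page state, so a page can be relocated (e.g., after a
// log-structured flush) by updating one slot.
class MappingTable {
public:
    explicit MappingTable(std::size_t capacity)
        : capacity_(capacity),
          slots_(std::make_unique<std::atomic<const PageState*>[]>(capacity)) {}

    // Read the current physical state of a logical page.
    const PageState* read(std::uint64_t pid) const {
        assert(pid < capacity_);
        return slots_[pid].load(std::memory_order_acquire);
    }

    // Install `updated` as the new state of page `pid`, but only if the
    // caller still holds the state it previously read (`expected`).
    // Returns false if another thread won the race; the caller retries.
    bool install(std::uint64_t pid, const PageState* expected,
                 const PageState* updated) {
        assert(pid < capacity_);
        return slots_[pid].compare_exchange_strong(
            expected, updated,
            std::memory_order_release, std::memory_order_relaxed);
    }

private:
    std::size_t capacity_;
    std::unique_ptr<std::atomic<const PageState*>[]> slots_;
};
```

A failed install means another thread changed the page state first; the caller re-reads the slot and retries, which is the latch-free update pattern the abstract describes for both data and management operations.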