HPDedup: A hybrid prioritized data deduplication mechanism for primary storage in the cloud

H Wu, C Wang, Y Fu, S Sakr, L Zhu, K Lu - arXiv preprint arXiv:1702.08153, 2017 - arxiv.org
Eliminating duplicate data in the primary storage of clouds increases the cost-efficiency of cloud service providers and reduces the cost users pay for cloud services. Existing primary deduplication techniques either use inline caching to exploit locality in primary workloads or use post-processing deduplication that runs in system idle time to avoid a negative impact on I/O performance. However, neither works well in cloud servers running multiple services or applications, for two reasons. First, the temporal locality of duplicate data writes may not exist in some primary storage workloads, so inline caching often fails to achieve a good deduplication ratio. Second, post-processing deduplication allows duplicate data to be written to disk; it therefore provides no I/O deduplication benefit and requires high peak storage capacity. This paper presents HPDedup, a Hybrid Prioritized data Deduplication mechanism that handles storage systems shared by applications running in co-located virtual machines or containers by fusing an inline and a post-processing process into an exact deduplication scheme. In the inline deduplication phase, HPDedup introduces a fingerprint caching mechanism that estimates the temporal locality of duplicates in the data streams from different VMs or applications and prioritizes cache allocation among these streams based on the estimate. HPDedup also applies different deduplication thresholds to streams based on their spatial locality to reduce disk fragmentation. The post-processing phase then removes from disk the duplicates whose fingerprints could not be cached due to weak temporal locality. Our experimental results show that HPDedup clearly outperforms state-of-the-art primary storage deduplication techniques in both inline cache efficiency and overall primary deduplication efficiency.
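
To make the mechanism described in the abstract concrete, below is a minimal Python sketch of the general idea: each stream gets an LRU fingerprint cache whose share of a fixed cache budget is driven by an estimated hit ratio (a stand-in for temporal locality estimation), and an idle-time post-processing pass folds the duplicates the inline phase missed. The names (HybridDeduplicator, StreamStats, post_process) and the hit-ratio estimator are illustrative assumptions, not the paper's actual algorithm or code; HPDedup's locality estimation and threshold selection are more elaborate.

import hashlib
from collections import OrderedDict, defaultdict


class StreamStats:
    """Per-stream counters used to estimate temporal locality (hypothetical metric)."""

    def __init__(self):
        self.writes = 0
        self.cache_hits = 0

    def locality(self):
        # Crude estimate: fraction of writes served from the fingerprint cache.
        return self.cache_hits / self.writes if self.writes else 0.0


class HybridDeduplicator:
    """Minimal sketch of a hybrid inline + post-processing deduplicator.

    All names here are hypothetical; this is not the HPDedup implementation.
    """

    def __init__(self, cache_capacity=4096):
        self.capacity = cache_capacity
        self.caches = {}                  # stream_id -> LRU of fingerprints
        self.stats = defaultdict(StreamStats)
        self.store = {}                   # fingerprint -> block address ("disk")
        self.pending = []                 # fingerprints missed by the inline phase

    def _quota(self, stream_id):
        # Prioritized allocation: streams with higher estimated temporal
        # locality get a proportionally larger share of the shared cache.
        total = sum(s.locality() for s in self.stats.values())
        if total == 0:                    # cold start: split the cache evenly
            return max(1, self.capacity // max(1, len(self.stats)))
        share = self.stats[stream_id].locality() / total
        return max(1, int(self.capacity * share))

    def write(self, stream_id, block: bytes):
        fp = hashlib.sha1(block).hexdigest()
        cache = self.caches.setdefault(stream_id, OrderedDict())
        st = self.stats[stream_id]
        st.writes += 1
        if fp in cache:                   # inline hit: duplicate never reaches disk
            st.cache_hits += 1
            cache.move_to_end(fp)
            return self.store[fp]
        # Inline miss: write the block and log its fingerprint so the
        # post-processing phase can fold any duplicate later.
        addr = self.store.setdefault(fp, len(self.store))
        self.pending.append(fp)
        cache[fp] = addr
        while len(cache) > self._quota(stream_id):
            cache.popitem(last=False)     # evict LRU entries beyond this stream's quota
        return addr

    def post_process(self):
        # Idle-time pass: detect duplicates the inline cache missed. A real
        # system would remap references and reclaim the redundant blocks.
        seen, removed = set(), 0
        for fp in self.pending:
            if fp in seen:
                removed += 1
            seen.add(fp)
        self.pending.clear()
        return removed

The sketch only shows how prioritized cache allocation and a deferred deduplication pass can be fused into one exact-deduplication pipeline; the spatial-locality-based thresholds mentioned in the abstract are omitted for brevity.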