Towards low-latency I/O services for mixed workloads using ultra-low latency SSDs

M Liu, H Liu, C Ye, X Liao, H Jin, Y Zhang… - Proceedings of the 36th …, 2022 - dl.acm.org
Low-latency I/O services are essential for latency-sensitive workloads when they co-run with
throughput-oriented workloads in cloud data centers. Although advanced SSDs such as Intel …

FlashShare: Punching Through Server Storage Stack from Kernel to Firmware for Ultra-Low Latency SSDs

J Zhang, M Kwon, D Gouk, S Koh, C Lee… - … USENIX Symposium on …, 2018 - usenix.org
A modern datacenter server aims to achieve high energy efficiency by co-running multiple
applications. Some such applications (e.g., web search) are latency-sensitive. Therefore …

D2FQ: Device-Direct Fair Queueing for NVMe SSDs

J Woo, M Ahn, G Lee, J Jeong - 19th USENIX Conference on File and …, 2021 - usenix.org
With modern high-performance SSDs that can handle parallel I/O requests from multiple
tenants, fair sharing of block I/O is an essential requirement for performance isolation …

Rhythm: component-distinguishable workload deployment in datacenters

L Zhao, Y Yang, K Zhang, X Zhou, T Qiu, K Li… - Proceedings of the …, 2020 - dl.acm.org
Cloud service providers improve resource utilization by co-locating latency-critical (LC)
workloads with best-effort batch (BE) jobs in datacenters. However, they usually treat an LC …

Component-distinguishable Co-location and Resource Reclamation for High-throughput Computing

L Zhao, Y Cui, Y Yang, X Zhou, T Qiu, K Li… - ACM Transactions on …, 2024 - dl.acm.org
Cloud service providers improve resource utilization by co-locating latency-critical (LC)
workloads with best-effort batch (BE) jobs in datacenters. However, they usually treat multi …

Asynchronous I/O Stack: A Low-latency Kernel I/O Stack for Ultra-Low Latency SSDs

G Lee, S Shin, W Song, TJ Ham, JW Lee… - 2019 USENIX Annual …, 2019 - usenix.org
Today's ultra-low latency SSDs can deliver an I/O latency of sub-ten microseconds. With this
dramatically shrunken device time, operations inside the kernel I/O stack, which were …

Preserving I/O Prioritization in Virtualized OSes

K Suo, Y Zhao, J Rao, L Cheng, X Zhou… - Proceedings of the 2017 …, 2017 - dl.acm.org
While virtualization helps to enable multi-tenancy in data centers, it introduces new
challenges to the resource management in traditional OSes. We find that one important …

HyperPlane: A scalable low-latency notification accelerator for software data planes

A Mirhosseini, H Golestani… - 2020 53rd Annual IEEE …, 2020 - ieeexplore.ieee.org
I/O software stacks have evolved rapidly due to the growing speed of I/O devices, including
network adapters, storage devices, and accelerators, and the emergence of microservice …

Vanguard: Increasing Server Efficiency via Workload Isolation in the Storage I/O Path

Y Sfakianakis, S Mavridis, A Papagiannis… - Proceedings of the …, 2014 - dl.acm.org
Server consolidation via virtualization is an essential technique for improving infrastructure
cost in modern datacenters. From the viewpoint of datacenter operators, consolidation offers …

SKQ: Event Scheduling for Optimizing Tail Latency in a Traditional OS Kernel

S Zhao, H Gu, AJ Mashtizadeh - 2021 USENIX Annual Technical …, 2021 - usenix.org
This paper presents Schedulable Kqueue (SKQ), a new design for FreeBSD's Kqueue that
improves application tail latency and low-latency throughput. SKQ introduces a new …