G Chen, Y Lu, B Li, K Tan, Y Xiong… - IEEE/ACM …, 2019 - ieeexplore.ieee.org
RDMA is becoming prevalent because of its low latency, high throughput, and low CPU overhead. However, in current datacenters, RDMA remains a single-path transport which is …
Q Li, Y Gao, X Wang, H Qiu, Y Le, D Liu… - … USENIX Symposium on …, 2023 - usenix.org
Datacenter applications have been increasingly applying RDMA for its ultra-low latency and low CPU overhead. However, RDMA-capable NICs (RNICs) from different vendors and …
J Xue, MU Chaudhry, B Vamanan… - IEEE/ACM …, 2020 - ieeexplore.ieee.org
Though Remote Direct Memory Access (RDMA) promises to reduce datacenter network latencies significantly compared to TCP (e.g., 10x), end-to-end congestion control in the …
This paper describes the design and implementation of HERD, a key-value system designed to make the best use of an RDMA network. Unlike prior RDMA-based key-value systems …
The advent of RoCE (RDMA over Converged Ethernet) has led to a significant increase in the use of RDMA in datacenter networks. To achieve good performance, RoCE requires a …
Datacenter (DC) design has moved toward the edge-computing paradigm, motivated by the need to bring cloud resources closer to end users. However, the software-defined …
BC Vattikonda, G Porter, A Vahdat… - Proceedings of the 7th …, 2012 - dl.acm.org
Cloud computing is placing increasingly stringent demands on datacenter networks. Applications like MapReduce and Hadoop demand high bisection bandwidth to support …
Modern datacenter applications demand high throughput (40Gbps) and ultra-low latency (< 10 μs per hop) from the network, with low CPU overhead. Standard TCP/IP stacks cannot …
RDMA-capable networks are gaining traction with datacenter deployments due to their high throughput, low latency, CPU efficiency, and advanced features, such as remote memory …