GradientFlow: Optimizing Network Performance for Large-Scale Distributed DNN Training

P. Sun, Y. Wen, R. Han, W. Feng… - IEEE Transactions on Big Data, 2019 - ieeexplore.ieee.org
It is important to scale out deep neural network (DNN) training to reduce model training time. High communication overhead is one of the major performance bottlenecks for distributed DNN training across multiple GPUs. Our investigations show that popular open-source DNN systems achieve only a 2.5 speedup ratio on 64 GPUs connected by a 56 Gbps network. To address this problem, we propose a communication backend named GradientFlow for distributed DNN training and employ a set of network optimization techniques. First, we integrate ring-based allreduce, mixed-precision training, and computation/communication overlap into GradientFlow. Second, we propose lazy allreduce, which improves network throughput by fusing multiple communication operations into a single one, and design coarse-grained sparse communication, which reduces network traffic by transmitting only the important gradient chunks. When training AlexNet and ResNet-50 on the ImageNet dataset with 512 GPUs, our approach achieves speedup ratios of 410.2 and 434.1, respectively.
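
The two optimizations named in the abstract, lazy allreduce (fusing several per-layer gradient tensors into one communication call) and coarse-grained sparse communication (sending only high-magnitude gradient chunks), can be illustrated with the minimal sketch below. The LazyAllreduce class, its threshold/chunk_size/density parameters, the L1-norm chunk-selection heuristic, and the allreduce_sum stub are all hypothetical names and assumptions chosen for illustration; they are not the paper's actual API or exact design.

import numpy as np

# Hypothetical communication primitive standing in for a ring-based
# allreduce backend (e.g. NCCL/MPI). It returns the buffer unchanged
# so the sketch runs on a single process.
def allreduce_sum(buf: np.ndarray) -> np.ndarray:
    return buf

class LazyAllreduce:
    """Fuse per-layer gradients into one buffer before communicating.

    Gradients are queued until their total size crosses `threshold`
    elements; only then is a single fused allreduce issued, which cuts
    the number of latency-bound communication operations. Parameter
    names and the chunking policy are illustrative assumptions.
    """

    def __init__(self, threshold: int = 1 << 20, chunk_size: int = 1 << 16,
                 density: float = 0.25):
        self.threshold = threshold      # elements buffered before flushing
        self.chunk_size = chunk_size    # granularity of sparse selection
        self.density = density          # fraction of chunks actually sent
        self._pending = []              # list of (name, gradient) tuples
        self._pending_elems = 0

    def push(self, name: str, grad: np.ndarray) -> None:
        self._pending.append((name, grad))
        self._pending_elems += grad.size
        if self._pending_elems >= self.threshold:
            self.flush()

    def flush(self) -> None:
        if not self._pending:
            return
        # 1. Fuse all pending gradients into one flat buffer (lazy allreduce).
        flat = np.concatenate([g.ravel() for _, g in self._pending])

        # 2. Coarse-grained sparse selection: split the fused buffer into
        #    fixed-size chunks and keep only the chunks with the largest
        #    L1 norm, zeroing out the rest before communication.
        n_chunks = max(1, flat.size // self.chunk_size)
        chunks = np.array_split(flat, n_chunks)
        norms = np.array([np.abs(c).sum() for c in chunks])
        k = max(1, int(self.density * n_chunks))
        keep = set(np.argpartition(norms, -k)[-k:])
        sparse = np.concatenate(
            [c if i in keep else np.zeros_like(c) for i, c in enumerate(chunks)]
        )

        # 3. One allreduce for the whole fused (sparsified) buffer.
        reduced = allreduce_sum(sparse)

        # 4. Scatter the result back into the per-layer gradient tensors.
        offset = 0
        for _, g in self._pending:
            g[...] = reduced[offset:offset + g.size].reshape(g.shape)
            offset += g.size
        self._pending.clear()
        self._pending_elems = 0


if __name__ == "__main__":
    comm = LazyAllreduce(threshold=8, chunk_size=4, density=0.5)
    grads = [np.random.randn(4).astype(np.float32) for _ in range(3)]
    for i, g in enumerate(grads):
        comm.push(f"layer{i}.weight", g)
    comm.flush()  # force out any remaining buffered gradients
    print(grads[0])

The point of fusing is to amortize per-message latency over a large payload, while selecting whole chunks rather than individual elements keeps the indexing overhead of sparsification small; both choices here are a sketch of the general idea rather than the paper's measured configuration.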