Articles added in the past year, sorted by date

Zen: Near-optimal sparse tensor synchronization for distributed DNN training

Z Wang, Z Xu, A Shrivastava, TS Ng - arXiv preprint arXiv:2309.13254, 2023 - arxiv.org
273 days ago
Distributed training is the de facto standard to scale up the training of Deep Neural Networks (DNNs) with multiple GPUs. The performance bottleneck of distributed training lies in communication for gradient synchronization. Recently, practitioners have observed sparsity in gradient tensors, suggesting the potential to reduce the traffic volume in communication and improve end-to-end training efficiency. Yet, the optimal communication scheme to fully leverage sparsity is still missing. This paper aims to address this gap. We first analyze the characteristics of sparse tensors in popular DNN models to understand the fundamentals of sparsity. We then systematically explore the design space of communication schemes for sparse tensors and find the optimal one. We also develop a gradient synchronization system called Zen that approximately realizes it for sparse tensors. We demonstrate that Zen can achieve up to 5.09x speedup in communication time and up to 2.48x speedup in training throughput compared to state-of-the-art methods.
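To illustrate the general idea the abstract describes (exchanging only the nonzero part of a gradient instead of the dense tensor), below is a minimal PyTorch sketch of sparse gradient synchronization. This is not Zen's scheme: the top-k sparsification, the all-gather of (index, value) pairs, and the averaging step are illustrative assumptions, and it presumes torch.distributed has already been initialized.

    # Minimal sketch (not Zen's actual scheme): synchronize a gradient by
    # exchanging only its largest-magnitude (index, value) pairs rather than
    # the dense tensor. Assumes torch.distributed is initialized and every
    # rank uses the same k; names and the top-k choice are illustrative.
    import torch
    import torch.distributed as dist

    def sparsify(grad: torch.Tensor, k: int):
        """Keep the k largest-magnitude entries of a flattened gradient."""
        flat = grad.flatten()
        _, idx = torch.topk(flat.abs(), k)
        return idx, flat[idx]

    def sparse_allgather_avg(grad: torch.Tensor, k: int) -> torch.Tensor:
        """Each rank contributes its top-k entries; all ranks sum the gathered pairs."""
        idx, val = sparsify(grad, k)
        world = dist.get_world_size()
        idx_list = [torch.empty_like(idx) for _ in range(world)]
        val_list = [torch.empty_like(val) for _ in range(world)]
        dist.all_gather(idx_list, idx)
        dist.all_gather(val_list, val)
        out = torch.zeros_like(grad).flatten()
        for i, v in zip(idx_list, val_list):
            out.index_add_(0, i, v)  # duplicate indices across ranks accumulate
        return out.view_as(grad) / world

A scheme like this trades the dense all-reduce for two small all-gathers, which only pays off when the gradient is sufficiently sparse; characterizing that trade-off is part of what the paper studies.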