A distributed multi-GPU system for large-scale node embedding at Tencent

W Wei, Y Wang, P Gao, S Sun, D Yu - arXiv preprint arXiv:2005.13789, 2020 - arxiv.org
Real-world node embedding applications often involve graphs with hundreds of billions of edges and high-dimensional node features. Scaling node embedding systems to support these applications efficiently remains a challenging problem. In this paper we present a high-performance multi-GPU node embedding system. It uses model parallelism to split the node embeddings across each GPU's local parameter server, and data parallelism to train these embeddings on different edge samples in parallel. We propose a hierarchical data partitioning strategy and an embedding training pipeline that optimize both communication and memory usage on a GPU cluster. With the decoupled design of CPU tasks (random walks) and GPU tasks (embedding training), our system is highly flexible and can fully utilize all computing resources on a GPU cluster. Compared with the current state-of-the-art multi-GPU single-node embedding system, our system achieves a 5.9x-14.4x speedup on average with competitive or better accuracy on open datasets. Using 40 NVIDIA V100 GPUs on a network with almost three hundred billion edges and more than one billion nodes, our implementation requires only 3 minutes to finish one training epoch.
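The hybrid scheme the abstract describes (shard the embedding table across GPUs via model parallelism, while data-parallel workers consume separate streams of edge samples) can be illustrated with a small sketch. The Python mock-up below uses NumPy arrays as a stand-in for GPU memory; it is not the paper's implementation. The names (shard_of, EmbeddingShard, train_step), the hash-based shard placement, and the skip-gram-with-negative-sampling update are all illustrative assumptions.

    import numpy as np

    NUM_SHARDS = 4   # one embedding shard per simulated GPU
    DIM = 8          # embedding dimension
    LR = 0.05        # SGD learning rate

    def shard_of(node_id):
        # Model parallelism: each node's embedding lives on exactly one shard.
        # (Assumed hash placement; the paper uses a hierarchical partitioning
        # strategy not modeled here.)
        return node_id % NUM_SHARDS

    class EmbeddingShard:
        # Stand-in for one GPU's local parameter server; embeddings owned by
        # the shard are materialized lazily on first access.
        def __init__(self, rng):
            self.table = {}
            self.rng = rng

        def get(self, node_id):
            if node_id not in self.table:
                self.table[node_id] = self.rng.normal(0.0, 0.1, DIM)
            return self.table[node_id]

    rng = np.random.default_rng(0)
    shards = [EmbeddingShard(rng) for _ in range(NUM_SHARDS)]

    def train_step(src, dst, label):
        # One skip-gram-with-negative-sampling style update on an edge sample.
        # The two endpoint embeddings may live on different shards, so a step
        # can touch two devices; in a real cluster this is where cross-GPU
        # communication would occur.
        u = shards[shard_of(src)].get(src)
        v = shards[shard_of(dst)].get(dst)
        score = 1.0 / (1.0 + np.exp(-np.dot(u, v)))  # sigmoid of dot product
        grad = score - label                         # binary cross-entropy gradient
        g_u, g_v = grad * v, grad * u
        u -= LR * g_u   # in-place updates write straight back to the shards
        v -= LR * g_v

    # Data parallelism: each worker runs this loop on its own stream of edge
    # samples (positive edges plus sampled negatives, as (src, dst, label)).
    for src, dst, label in [(0, 1, 1), (1, 2, 1), (0, 7, 0)]:
        train_step(src, dst, label)

In the paper's system the shards live in GPU memory while the edge stream is produced by CPU-side random walks; that CPU/GPU decoupling is what the abstract credits for keeping all computing resources on the cluster busy.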