Accelerating SpMM kernel with cache-first edge sampling for graph neural networks

CY Lin, L Luo, L Ceze - arXiv preprint arXiv:2104.10716, 2021 - arxiv.org
Graph neural networks (GNNs), an emerging deep learning model class, can extract meaningful representations from highly expressive graph-structured data and are therefore gaining popularity for a wider range of applications. However, current GNNs suffer from the poor performance of their sparse-dense matrix multiplication (SpMM) operator, even when using powerful GPUs. Our analysis shows that up to 95% of the inference time can be spent on SpMM when running popular GNN models on NVIDIA's advanced V100 GPU. This SpMM performance bottleneck hinders GNNs' applicability to large-scale problems and the development of more sophisticated GNN models. To address this inference-time bottleneck, we introduce ES-SpMM, a cache-first edge sampling mechanism and co-designed SpMM kernel. ES-SpMM uses edge sampling to downsize the graph so that it fits into the GPU's shared memory, which reduces the computation cost and improves SpMM's cache locality. To evaluate ES-SpMM's performance, we integrated it with a popular GNN framework, DGL, and tested it using representative GNN models and datasets. Our results show that ES-SpMM outperforms the highly optimized cuSPARSE SpMM kernel by up to 4.35x with no accuracy loss and by up to 45.3x with less than a 1% accuracy loss.
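To make the SpMM bottleneck concrete, the sketch below expresses a GNN aggregation step as a sparse-dense product and pairs it with a hypothetical per-node edge cap that downsizes the adjacency matrix, loosely mirroring the idea of sampling edges so the graph fits a fixed on-chip budget. The cap value, the "keep the first k neighbors" rule, and all function names are illustrative assumptions, not the ES-SpMM kernel or its sampling strategy.

```python
# Illustrative sketch (not the ES-SpMM kernel): GNN aggregation as SpMM,
# plus a hypothetical per-node edge cap that shrinks the adjacency so each
# row's neighbor list stays within a fixed budget. The cap value and the
# "keep the first k neighbors" rule are assumptions for illustration only.
import numpy as np
import scipy.sparse as sp


def gnn_aggregate(adj_csr: sp.csr_matrix, features: np.ndarray) -> np.ndarray:
    """One GNN aggregation step: a sparse-dense matrix multiplication (SpMM)."""
    return adj_csr @ features


def cap_edges_per_node(adj_csr: sp.csr_matrix, max_edges: int) -> sp.csr_matrix:
    """Keep at most `max_edges` neighbors per node (hypothetical sampling rule)."""
    indptr, indices, data = adj_csr.indptr, adj_csr.indices, adj_csr.data
    new_indptr = [0]
    new_indices, new_data = [], []
    for row in range(adj_csr.shape[0]):
        start, end = indptr[row], indptr[row + 1]
        keep = min(end - start, max_edges)        # truncate long neighbor lists
        new_indices.extend(indices[start:start + keep])
        new_data.extend(data[start:start + keep])
        new_indptr.append(new_indptr[-1] + keep)
    return sp.csr_matrix((new_data, new_indices, new_indptr), shape=adj_csr.shape)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    adj = sp.random(1000, 1000, density=0.01, format="csr", random_state=0)
    feats = rng.standard_normal((1000, 64)).astype(np.float32)
    full = gnn_aggregate(adj, feats)                             # exact aggregation
    approx = gnn_aggregate(cap_edges_per_node(adj, 8), feats)    # sampled aggregation
    print(full.shape, approx.shape)
```

Bounding each neighbor list means every row of the sampled adjacency touches a working set of fixed size, which is the kind of cache-locality benefit the abstract attributes to fitting the downsized graph into shared memory; the accuracy/speed trade-off then depends on how many and which edges are kept.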