The lottery ticket hypothesis (LTH) has shown that dense models contain highly sparse subnetworks (i.e., winning tickets) that can be trained in isolation to match full accuracy …
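The pruning step behind winning tickets can be sketched in a few lines: keep the largest-magnitude weights of a trained model and reset the survivors to their initial values. This is a toy illustration under assumed names (`winning_ticket_mask`, a 50% keep fraction), not any paper's implementation.

```python
# Toy sketch of lottery-ticket pruning: keep the largest-|w| weights
# of a trained model, then rewind them to their initial values.
# Function name and the 0.5 keep fraction are illustrative assumptions.

def winning_ticket_mask(trained, keep_fraction):
    """Return a 0/1 mask keeping the largest-magnitude fraction of weights."""
    k = max(1, int(len(trained) * keep_fraction))
    threshold = sorted((abs(w) for w in trained), reverse=True)[k - 1]
    return [1 if abs(w) >= threshold else 0 for w in trained]

trained = [0.9, -0.05, 0.4, 0.01]   # weights after training
init = [0.1, 0.2, -0.3, 0.05]       # weights at initialization
mask = winning_ticket_mask(trained, 0.5)
ticket = [m * w0 for m, w0 in zip(mask, init)]  # sparse subnetwork, rewound
print(mask, ticket)
```

The "training in isolation" claim of the LTH is then that retraining only the masked, rewound weights recovers the dense model's accuracy.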
Sparse general matrix-matrix multiplication (SpGEMM) is one of the most fundamental building blocks in sparse linear solvers, graph processing frameworks and machine learning …
Important workloads, such as machine learning and graph analytics applications, heavily involve sparse linear algebra operations. These operations use sparse matrix compression …
C Yang, A Buluç, JD Owens - European Conference on Parallel …, 2018 - Springer
We implement two novel algorithms for sparse-matrix dense-matrix multiplication (SpMM) on the GPU. Our algorithms expect the sparse input in the popular compressed-sparse-row …
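The CSR layout the snippet mentions can be made concrete with a minimal SpMM reference loop: the sparse operand is stored as row pointers, column indices, and values, and each nonzero scales one row of the dense operand. This is an illustrative sketch of the operation, not the GPU algorithms the paper implements.

```python
# Minimal reference SpMM: C = A @ B with A in compressed-sparse-row
# (CSR) form. The CSR arrays (indptr, indices, data) are the standard
# layout; the example matrices below are illustrative.

def spmm_csr(indptr, indices, data, B, n_rows):
    """Multiply a CSR sparse matrix A by a dense matrix B."""
    n_cols = len(B[0])
    C = [[0.0] * n_cols for _ in range(n_rows)]
    for i in range(n_rows):
        # nonzeros of row i live in data[indptr[i]:indptr[i+1]]
        for k in range(indptr[i], indptr[i + 1]):
            j, a = indices[k], data[k]
            for c in range(n_cols):
                C[i][c] += a * B[j][c]
    return C

# A = [[1, 0], [0, 2]] in CSR form
indptr, indices, data = [0, 1, 2], [0, 1], [1.0, 2.0]
B = [[3.0, 4.0], [5.0, 6.0]]
print(spmm_csr(indptr, indices, data, B, 2))  # [[3.0, 4.0], [10.0, 12.0]]
```

GPU SpMM kernels parallelize this loop nest across rows and dense columns; the CSR arrays are what such kernels expect as input.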
Graph Convolutional Networks (GCNs) are pivotal in extracting latent information from graph data across various domains, yet their acceleration on mainstream GPUs is challenged by …
HU Manzoor, A Jafri, A Zoha - Internet of Things, 2024 - Elsevier
Federated Learning (FL) enhances predictive accuracy in load forecasting by integrating data from distributed load networks while ensuring data privacy. However, the …
The latest hardware accelerators proposed for graph applications primarily focus on graph neural networks (GNNs) and graph mining. High-level graph reasoning tasks, such as graph …
Graph representation learning aims to learn low-dimensional node embeddings for graphs. It is used in several real-world applications such as social network analysis and large-scale …
Tiling is a key technique to reduce data movement in matrix computations. While tiling is well understood and widely used for dense matrix/tensor computations, effective tiling of sparse …
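Tiling for the dense case, which the snippet says is well understood, can be sketched as blocking the three matmul loops so each tile of the operands is reused while it is resident in cache. The tile size and function name below are illustrative assumptions.

```python
# Sketch of loop tiling for dense matrix multiplication: the i/j/k
# loops are blocked into TILE x TILE tiles so each tile of A, B, and C
# is reused while hot in cache. TILE = 2 is an illustrative choice.

TILE = 2

def matmul_tiled(A, B, n):
    """Compute C = A @ B for n x n matrices using tiled loops."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, TILE):
        for jj in range(0, n, TILE):
            for kk in range(0, n, TILE):
                # update the (ii, jj) tile of C from the (ii, kk) tile
                # of A and the (kk, jj) tile of B
                for i in range(ii, min(ii + TILE, n)):
                    for j in range(jj, min(jj + TILE, n)):
                        s = C[i][j]
                        for k in range(kk, min(kk + TILE, n)):
                            s += A[i][k] * B[k][j]
                        C[i][j] = s
    return C
```

The difficulty the snippet alludes to is that for sparse operands the nonzero positions are irregular, so fixed index-space tiles like these no longer correspond to fixed-size, reusable blocks of data.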