Abstract We present Graph Neural Diffusion (GRAND), which approaches deep learning on graphs as a continuous diffusion process and treats Graph Neural Networks (GNNs) as …
Abstract We propose GRAph Neural Diffusion with a source term (GRAND++) for graph deep learning with a limited number of labeled nodes, i.e., a low labeling rate. GRAND++ is a …
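The diffusion view these two abstracts describe can be sketched as an ODE on node features, dx/dt = (Â − I)x, integrated with explicit Euler steps. This is only an illustrative sketch under simplifying assumptions (fixed row-normalized adjacency as the diffusion operator); the papers themselves learn an attention-based diffusivity, and the function name is hypothetical.

```python
import numpy as np

def euler_graph_diffusion(x, adj, dt=0.1, steps=10):
    """Evolve node features by dx/dt = (A_hat - I) x with explicit Euler steps.

    A_hat is the row-normalized adjacency, so each step moves a node's
    features toward the mean of its neighbors (heat diffusion on the graph).
    """
    deg = adj.sum(axis=1, keepdims=True)
    a_hat = adj / np.maximum(deg, 1)      # row-stochastic diffusion operator
    for _ in range(steps):
        x = x + dt * (a_hat @ x - x)      # one Euler step of the diffusion ODE
    return x

# Path graph 0-1-2: initially disparate features diffuse toward consensus.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
x0 = np.array([[1.], [0.], [-1.]])
xt = euler_graph_diffusion(x0, adj, dt=0.1, steps=50)
```

Running the snippet shrinks the spread of the features, which is the smoothing behavior that a GNN layer stack discretizes in this framing.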
J Zhao, Y Dong, M Ding… - Advances in neural …, 2021 - proceedings.neurips.cc
The success of graph neural networks (GNNs) largely relies on the process of aggregating information from neighbors defined by the input graph structures. Notably, message passing …
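The neighbor-aggregation process this snippet refers to can be sketched as a single message-passing layer in the GraphSAGE-mean style. This is a generic illustration, not the specific architecture of the paper above; the function name and weight shapes are assumptions.

```python
import numpy as np

def message_passing_layer(x, adj, w_self, w_neigh):
    """One round of neighborhood aggregation (GraphSAGE-mean-style sketch):
    average the neighbors' features, then combine with the node's own
    representation through two weight matrices and a ReLU."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    neigh_mean = (adj @ x) / deg                      # aggregate: neighbor mean
    return np.maximum(x @ w_self + neigh_mean @ w_neigh, 0)  # combine + ReLU

# Tiny demo: 3 nodes on a path, 4 input features, 2 output features.
rng = np.random.default_rng(0)
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
x = rng.standard_normal((3, 4))
w_self = rng.standard_normal((4, 2))
w_neigh = rng.standard_normal((4, 2))
out = message_passing_layer(x, adj, w_self, w_neigh)
```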
LP Xhonneux, M Qu, J Tang - International conference on …, 2020 - proceedings.mlr.press
This paper builds on the connection between graph neural networks and traditional dynamical systems. We propose continuous graph neural networks (CGNN), which …
Graph neural networks (GNNs) extend the functionality of traditional neural networks to graph-structured data. Similar to CNNs, an optimized design of graph convolution and …
Abstract Current Graph Neural Networks (GNN) architectures generally rely on two important components: node features embedding through message passing, and aggregation with a …
Graph neural networks (GNNs) have shown promising results across various graph learning tasks, but they often assume homophily, which can result in poor performance on …
J Gasteiger, S Weißenberger… - Advances in neural …, 2019 - proceedings.neurips.cc
Graph convolution is the core of most Graph Neural Networks (GNNs) and usually approximated by message passing between direct (one-hop) neighbors. In this work, we …
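The idea in this snippet, replacing one-hop message passing with a generalized diffusion, can be sketched with personalized-PageRank weights, S = α(I − (1−α)T)⁻¹. The dense-inverse formulation below is an assumption-laden illustration for small graphs; practical implementations use sparsified approximations.

```python
import numpy as np

def ppr_diffusion_matrix(adj, alpha=0.15):
    """Dense personalized-PageRank diffusion: S = alpha * (I - (1-alpha) T)^-1,
    where T is the column-normalized transition matrix. Unlike one-hop message
    passing, S assigns nonzero weight to multi-hop neighbors."""
    n = adj.shape[0]
    t = adj / np.maximum(adj.sum(axis=0, keepdims=True), 1)  # column-stochastic
    return alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * t)

# Path graph 0-1-2: nodes 0 and 2 are two hops apart, yet S couples them.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
s = ppr_diffusion_matrix(adj)
```

Each column of S sums to one, so it behaves like a smoothed, multi-hop replacement for the one-hop propagation matrix.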
Q Chen, Y Wang, Y Wang, J Yang… - … on Machine Learning, 2022 - proceedings.mlr.press
Due to the over-smoothing issue, most existing graph neural networks can only capture limited dependencies with their inherently finite aggregation layers. To overcome this …
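The over-smoothing issue mentioned here can be demonstrated directly: repeatedly applying a normalized aggregation operator drives all node features toward the same value, erasing the information that distinguishes nodes. A minimal sketch, using plain mean aggregation rather than any particular paper's layer:

```python
import numpy as np

# Connected graph: a triangle 0-1-2 with a pendant node 3 attached to node 2.
adj = np.array([[0., 1., 1., 0.],
                [1., 0., 1., 0.],
                [1., 1., 0., 1.],
                [0., 0., 1., 0.]])
a_hat = adj / adj.sum(axis=1, keepdims=True)  # row-normalized mean aggregation

x = np.array([[3.], [1.], [0.], [-2.]])       # initially distinct node features
for _ in range(100):
    x = a_hat @ x                             # one aggregation "layer"
# After many layers, every node carries (almost) the same feature value.
```

This collapse is why naive deep stacks of aggregation layers capture only limited dependencies, motivating the continuous-depth and diffusion-based remedies surveyed above.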