Bag of tricks for training deeper graph neural networks: A comprehensive benchmark study

T Chen, K Zhou, K Duan, W Zheng… - IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022 - ieeexplore.ieee.org
Training deep graph neural networks (GNNs) is notoriously hard. Besides the standard
plights in training deep architectures such as vanishing gradients and overfitting, it also …

Graph neural networks inspired by classical iterative algorithms

Y Yang, T Liu, Y Wang, J Zhou, Q Gan… - International Conference on Machine Learning, 2021 - proceedings.mlr.press
Despite the recent success of graph neural networks (GNN), common architectures often
exhibit significant limitations, including sensitivity to oversmoothing, long-range …
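
The unrolling idea behind this line of work can be made concrete with the textbook graph-regularized least-squares energy: each gradient step on E(Y) = ||Y - X||_F^2 + lam * tr(Y^T L Y) reads as a GNN propagation layer. A minimal sketch, assuming that energy and plain gradient descent (the paper's actual iterations and attention-like extensions are richer); all names here are illustrative:

    import numpy as np

    def unfolded_gnn_layers(A, X, lam=1.0, step=0.1, n_layers=16):
        """n_layers of gradient descent on the graph-regularized energy
        E(Y) = ||Y - X||_F^2 + lam * tr(Y^T L Y); each descent step has
        the form of one GNN propagation layer."""
        L = np.diag(A.sum(axis=1)) - A        # combinatorial graph Laplacian
        Y = X.copy()
        for _ in range(n_layers):
            grad = 2.0 * (Y - X) + 2.0 * lam * (L @ Y)
            Y = Y - step * grad               # one "layer" = one descent step
        return Y

    # toy usage: a 4-node path graph with 2-d features
    A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
    X = np.random.default_rng(0).normal(size=(4, 2))
    print(unfolded_gnn_layers(A, X))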

Descent steps of a relation-aware energy produce heterogeneous graph neural networks

H Ahn, Y Yang, Q Gan, T Moon… - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
Heterogeneous graph neural networks (GNNs) achieve strong performance on node
classification tasks in a semi-supervised learning setting. However, as in the simpler …
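
A hedged guess at how relation awareness can enter such an energy, reusing the homogeneous sketch above with one Laplacian per edge type; the uniform weighting across relations is an illustrative assumption, not the paper's parameterization:

    import numpy as np

    def hetero_descent_layers(rel_adjs, X, lam=1.0, step=0.05, n_layers=16):
        """Gradient descent on a relation-aware energy
        E(Y) = ||Y - X||_F^2 + lam * sum_r tr(Y^T L_r Y),
        with one Laplacian L_r per relation (edge type)."""
        laps = [np.diag(A.sum(axis=1)) - A for A in rel_adjs]
        Y = X.copy()
        for _ in range(n_layers):
            grad = 2.0 * (Y - X)
            for L in laps:            # each relation adds its own smoothing term
                grad += 2.0 * lam * (L @ Y)
            Y -= step * grad
        return Y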

A review of challenges and solutions in the design and implementation of deep graph neural networks

A Mohi ud din, S Qureshi - International Journal of Computers and Applications, 2023 - Taylor & Francis
The study of graph neural networks has revealed that they can unleash new applications in
a variety of disciplines using such a basic process that we cannot imagine in the context of …

Transformers from an optimization perspective

Y Yang, DP Wipf - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
Deep learning models such as the Transformer are often constructed by heuristics and
experience. To provide a complementary foundation, in this work we study the following …
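
For reference, the update being interpreted is ordinary single-head self-attention; the sketch below shows that forward map only, not the energy function the paper derives for it:

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        """Standard single-head self-attention forward pass."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[1])
        attn = np.exp(scores - scores.max(axis=1, keepdims=True))
        attn /= attn.sum(axis=1, keepdims=True)   # row-wise softmax
        return attn @ V

    # toy usage: 5 tokens, 4-d model width
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 4))
    Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 4)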

From hypergraph energy functions to hypergraph neural networks

Y Wang, Q Gan, X Qiu, X Huang… - International Conference on Machine Learning, 2023 - proceedings.mlr.press
Hypergraphs are a powerful abstraction for representing higher-order interactions between
entities of interest. To exploit these relationships in making downstream predictions, a …
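
One concrete (and deliberately simple) way to get an energy from a hypergraph is clique expansion of its incidence matrix; a minimal sketch under that assumption, whereas the paper's family of energies is more general:

    import numpy as np

    def hypergraph_smoothing(H, X, lam=1.0, step=0.05, n_layers=16):
        """Gradient descent on E(Y) = ||Y - X||_F^2 + lam * tr(Y^T L_H Y),
        where L_H is the clique-expansion Laplacian of the node-by-hyperedge
        incidence matrix H (every hyperedge links all of its member nodes)."""
        A = H @ H.T                            # co-membership counts
        np.fill_diagonal(A, 0.0)               # drop self-loops
        L = np.diag(A.sum(axis=1)) - A
        Y = X.copy()
        for _ in range(n_layers):
            Y -= step * (2.0 * (Y - X) + 2.0 * lam * (L @ Y))
        return Y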

MuseGNN: Interpretable and convergent graph neural network layers at scale

H Jiang, R Liu, X Yan, Z Cai, M Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
Among the many variants of graph neural network (GNN) architectures capable of modeling
data with cross-instance relations, an important subclass involves layers designed such that …

Implicit vs unfolded graph neural networks

Y Yang, T Liu, Y Wang, Z Huang, D Wipf - arXiv preprint arXiv:2111.06592, 2021 - arxiv.org
It has been observed that graph neural networks (GNN) sometimes struggle to maintain a
healthy balance between the efficient modeling of long-range dependencies across nodes …
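
The two families in the title can be contrasted with APPNP-style personalized-PageRank propagation as a stand-in (an assumption for illustration; the paper covers broader model classes): unfolded GNNs apply finitely many explicit steps of a map, while implicit GNNs solve for its fixed point, i.e., the infinite-depth limit.

    import numpy as np

    def propagate(A_hat, Y, X, alpha=0.1):
        """One shared step; A_hat must be a normalized adjacency
        (spectral radius <= 1) so the iteration contracts."""
        return (1 - alpha) * (A_hat @ Y) + alpha * X

    def unfolded_gnn(A_hat, X, n_layers=8):
        """Unfolded: a fixed, finite stack of explicit layers."""
        Y = X.copy()
        for _ in range(n_layers):
            Y = propagate(A_hat, Y, X)
        return Y

    def implicit_gnn(A_hat, X, tol=1e-6, max_iter=1000):
        """Implicit: iterate the same map to an approximate fixed point."""
        Y = X.copy()
        for _ in range(max_iter):
            Y_next = propagate(A_hat, Y, X)
            done = np.abs(Y_next - Y).max() < tol
            Y = Y_next
            if done:
                break
        return Y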

Does your graph need a confidence boost? Convergent boosted smoothing on graphs with tabular node features

J Chen, J Mueller, VN Ioannidis, S Adeshina, Y Wang… - 2021 - amazon.science
For supervised learning with tabular data, decision tree ensembles produced via boosting
techniques generally dominate real-world applications involving iid training/test sets …
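
A toy interleaving that makes the title concrete: alternate boosting rounds on tabular features with a graph smoothing step. The update order, the smoothing operator, and the single regression tree per round are all illustrative assumptions, not the paper's convergent scheme:

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def boosted_smoothing(A_hat, X_tab, y, rounds=10, lr=0.3, alpha=0.5):
        """Alternate (1) a boosting round fitting current residuals with
        (2) smoothing of the running prediction over normalized adjacency A_hat."""
        pred = np.zeros_like(y, dtype=float)
        for _ in range(rounds):
            tree = DecisionTreeRegressor(max_depth=3)
            tree.fit(X_tab, y - pred)                  # boosting: fit residuals
            pred += lr * tree.predict(X_tab)
            pred = (1 - alpha) * pred + alpha * (A_hat @ pred)  # graph smoothing
        return pred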

Efficient link prediction via GNN layers induced by negative sampling

Y Wang, X Hu, Q Gan, X Huang, X Qiu… - IEEE Transactions on Knowledge and Data Engineering, 2024 - ieeexplore.ieee.org
Graph neural networks (GNNs) for link prediction can loosely be divided into two broad
categories. First, node-wise architectures pre-compute individual embeddings for each node …
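
The node-wise category is easy to picture: embeddings are computed once per node, and any candidate edge (u, v) is then scored from its two endpoints alone, here with a dot product (a common choice; the paper's negative-sampling-induced layers go further):

    import numpy as np

    def nodewise_link_scores(Z, candidate_pairs):
        """Score candidate edges from pre-computed node embeddings Z."""
        return np.array([Z[u] @ Z[v] for u, v in candidate_pairs])

    # toy usage: random 8-d embeddings for 5 nodes
    Z = np.random.default_rng(0).normal(size=(5, 8))
    print(nodewise_link_scores(Z, [(0, 1), (2, 4)]))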