Large graph models: A perspective

Z Zhang, H Li, Z Zhang, Y Qin, X Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
Large models have emerged as the most recent groundbreaking achievements in artificial
intelligence, and particularly machine learning. However, when it comes to graphs, large …

Vector Quantization for Recommender Systems: A Review and Outlook

Q Liu, X Dong, J Xiao, N Chen, H Hu, J Zhu… - arXiv preprint arXiv …, 2024 - arxiv.org
Vector quantization, renowned for its unparalleled feature compression capabilities, has
been a prominent topic in signal processing and machine learning research for several …

Graph meets LLMs: Towards large graph models

Z Zhang, H Li, Z Zhang, Y Qin, X Wang… - NeurIPS 2023 Workshop …, 2023 - openreview.net
Large models have emerged as the most recent groundbreaking achievements in artificial
intelligence, and particularly machine learning. However, when it comes to graphs, large …

Acceleration algorithms in GNNs: A survey

L Ma, Z Sheng, X Li, X Gao, Z Hao, L Yang… - arXiv preprint arXiv …, 2024 - arxiv.org
Graph Neural Networks (GNNs) have demonstrated effectiveness in various graph-based
tasks. However, their inefficiency in training and inference presents challenges for scaling …

The Heterophilic Graph Learning Handbook: Benchmarks, Models, Theoretical Analysis, Applications and Challenges

S Luan, C Hua, Q Lu, L Ma, L Wu, X Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
Homophily principle, i.e., nodes with the same labels or similar attributes are more likely to
be connected, has been commonly believed to be the main reason for the superiority of …

Structure-aware Semantic Node Identifiers for Learning on Graphs

Y Luo, Q Liu, L Shi, XM Wu - arXiv preprint arXiv:2405.16435, 2024 - arxiv.org
We present a novel graph tokenization framework that generates structure-aware, semantic
node identifiers (IDs) in the form of a short sequence of discrete codes, serving as symbolic …

Large Language Model Meets Graph Neural Network in Knowledge Distillation

S Hu, G Zou, S Yang, B Zhang, Y Chen - arXiv preprint arXiv:2402.05894, 2024 - arxiv.org
Despite recent community revelations about the advancements and potential of Large
Language Models (LLMs) in understanding Text-Attributed Graphs (TAG), the deployment of …

Decoupled graph knowledge distillation: A general logits-based method for learning MLPs on graphs

Y Tian, S Xu, M Li - Neural Networks, 2024 - Elsevier
While Graph Neural Networks (GNNs) have demonstrated their effectiveness in
processing non-Euclidean structured data, the neighborhood fetching of GNNs is time …

Fine-Grained Learning Behavior-Oriented Knowledge Distillation for Graph Neural Networks

K Liu, Z Huang, CD Wang, B Gao… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Knowledge distillation (KD), as an effective compression technology, is used to reduce the
resource consumption of graph neural networks (GNNs) and facilitate their deployment on …

Learning the Language of Protein Structure

B Gaujac, J Donà, L Copoiu, T Atkinson… - arXiv preprint arXiv …, 2024 - arxiv.org
Representation learning and de novo generation of proteins are pivotal
computational biology tasks. Whilst natural language processing (NLP) techniques have …