Recently, "pre-training and fine-tuning" has been adopted as a standard workflow for many graph tasks, since it can leverage general graph knowledge to relieve the lack of graph …
Artificial General Intelligence (AGI) has revolutionized numerous fields, yet its integration with graph data, a cornerstone in our interconnected world, remains nascent. This paper …
J Liu, C Yang, Z Lu, J Chen, Y Li, M Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Emerging as fundamental building blocks for diverse artificial intelligence applications, foundation models have achieved notable success across natural language processing and …
Recommender systems commonly suffer from the long-standing data sparsity problem where insufficient user-item interaction data limits the systems' ability to make accurate …
Z Tan, R Guo, K Ding, H Liu - Proceedings of the 29th ACM SIGKDD …, 2023 - dl.acm.org
Few-shot Node Classification (FSNC) is a challenge in graph representation learning where only a few labeled nodes per class are available for training. To tackle this issue, meta …
L Xia, B Kao, C Huang - arXiv preprint arXiv:2403.01121, 2024 - arxiv.org
Graph learning has become indispensable for interpreting and harnessing relational data in diverse fields, ranging from recommendation systems to social network analysis. In this …
S Li, X Han, J Bai - arXiv preprint arXiv:2304.09595, 2023 - arxiv.org
Fine-tuning pre-trained models has recently yielded remarkable performance gains in graph neural networks (GNNs). In addition to pre-training techniques, inspired by the latest work in …
Y Zhu, J Guo, S Tang - arXiv preprint arXiv:2302.12449, 2023 - arxiv.org
Recently, much effort has been devoted to designing graph self-supervised methods to obtain generalized pre-trained models and to adapting pre-trained models to downstream tasks …