Prompt-based distribution alignment for unsupervised domain adaptation

S Bai, M Zhang, W Zhou, S Huang, Z Luan… - Proceedings of the …, 2024 - ojs.aaai.org
Despite the recent, unprecedented success of large pre-trained visual-language models
(VLMs) on a wide range of downstream tasks, the real-world unsupervised domain …

Learning to reweight for generalizable graph neural network

Z Chen, T Xiao, K Kuang, Z Lv, M Zhang… - Proceedings of the …, 2024 - ojs.aaai.org
Graph Neural Networks (GNNs) show promising results on graph tasks. However, the
generalization ability of existing GNNs degrades when there are distribution shifts between testing …

MetaCoCo: A new few-shot classification benchmark with spurious correlation

M Zhang, H Li, F Wu, K Kuang - arXiv preprint arXiv:2404.19644, 2024 - arxiv.org
Out-of-distribution (OOD) problems in few-shot classification (FSC) occur when novel
classes sampled from testing distributions differ from base classes drawn from training …

Uncovering the propensity identification problem in debiased recommendations

H Zhang, S Wang, H Li, C Zheng… - 2024 IEEE 40th …, 2024 - ieeexplore.ieee.org
In recommender system databases, users' ratings for most items are usually missing,
resulting in selection bias because users selectively choose which items to rate. To address this …

Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large Language Models

D Zhu, Z Sun, Z Li, T Shen, K Yan, S Ding… - arXiv preprint arXiv …, 2024 - arxiv.org
Catastrophic forgetting emerges as a critical challenge when fine-tuning multi-modal large
language models (MLLMs), where improving performance on unseen tasks often leads to a …

Neural collapse anchored prompt tuning for generalizable vision-language models

D Zhu, Z Li, M Zhang, J Yuan, J Liu, K Kuang… - Proceedings of the 30th …, 2024 - dl.acm.org
Large-scale vision-language (VL) models have demonstrated remarkable generalization
capabilities for downstream tasks through prompt tuning. However, the mechanisms behind …

AuG-KD: Anchor-Based Mixup Generation for Out-of-Domain Knowledge Distillation

Z Tang, Z Lv, S Zhang, Y Zhou, X Duan, F Wu… - arXiv preprint arXiv …, 2024 - arxiv.org
Due to privacy or patent concerns, a growing number of large models are released without
granting access to their training data, making the transfer of their knowledge inefficient and …
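
The generation step named in the title builds on mixup, which forms convex combinations of pairs of samples and their labels. The paper's anchor-based variant is more involved; the sketch below is only the standard mixup operation, with an illustrative function name and toy data:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Standard mixup (Zhang et al., 2018): a convex combination of two
    labeled samples, with the mixing weight drawn from Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Toy usage: mix two one-hot-labeled feature vectors.
rng = np.random.default_rng(0)
x_a, y_a = rng.normal(size=4), np.array([1.0, 0.0])
x_b, y_b = rng.normal(size=4), np.array([0.0, 1.0])
x_mix, y_mix = mixup(x_a, y_a, x_b, y_b, rng=rng)
print(x_mix, y_mix)
```

In AuG-KD the mixed pair would involve anchor-guided generated samples rather than arbitrary vectors; the toy usage above only illustrates the base operation.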

Learning to Reweight for Graph Neural Network

Z Chen, T Xiao, K Kuang, Z Lv, M Zhang, J Yang… - arXiv preprint arXiv …, 2023 - arxiv.org
Graph Neural Networks (GNNs) show promising results on graph tasks. However, the
generalization ability of existing GNNs degrades when there are distribution shifts between testing …

RotoGBML: Towards out-of-distribution generalization for gradient-based meta-learning

M Zhang, Z Zhuang, Z Wang… - 2024 IEEE International …, 2024 - ieeexplore.ieee.org
Gradient-based meta-learning (GBML) algorithms can quickly adapt to new tasks by
transferring learned meta-knowledge, under the assumption that all tasks come from the same …
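
For reference, GBML methods in the MAML family adapt to a task with a few inner gradient steps and then update the meta-parameters from the post-adaptation loss. The first-order sketch below on toy linear-regression tasks is a generic GBML loop, not RotoGBML itself; all names and constants are illustrative:

```python
import numpy as np

# First-order MAML-style loop for 1-D linear regression tasks y = a*x + b.
rng = np.random.default_rng(0)
theta = np.zeros(2)                  # meta-parameters [slope, intercept]
inner_lr, outer_lr = 0.05, 0.01

def loss_grad(theta, x, y):
    # Gradient of mean squared error for predictions theta[0]*x + theta[1].
    err = theta[0] * x + theta[1] - y
    return np.array([(2 * err * x).mean(), (2 * err).mean()])

for step in range(2000):
    a, b = rng.uniform(-2, 2, size=2)        # sample a task
    x_s, x_q = rng.uniform(-1, 1, (2, 10))   # support / query inputs
    y_s, y_q = a * x_s + b, a * x_q + b
    adapted = theta - inner_lr * loss_grad(theta, x_s, y_s)  # inner adaptation
    # First-order outer update: apply the query gradient at the adapted point.
    theta -= outer_lr * loss_grad(adapted, x_q, y_q)
```

The i.i.d.-task assumption noted in the snippet enters through the task sampler: when test tasks come from a shifted distribution, the meta-initialization learned this way can degrade, which is the OOD setting the paper targets.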

Large language models as visual cross-domain learners

S Chen, Y Zhang, W Jiang, J Lu, Y Zhang - arXiv preprint arXiv …, 2024 - arxiv.org
Recent advances achieved by deep learning models rely on the independent and identically
distributed (i.i.d.) assumption, hindering their application in real-world scenarios with domain …