Scaling & shifting your features: A new baseline for efficient model tuning

D Lian, D Zhou, J Feng, X Wang - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
Existing fine-tuning methods either tune all parameters of the pre-trained model (full fine-
tuning), which is not efficient, or only tune the last linear layer (linear probing), which suffers …
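
The title names the mechanism: keep the pre-trained backbone frozen and learn only a per-channel scale and shift of its features. A minimal PyTorch sketch of that idea follows; the module name ScaleShift, the 768-dim stand-in layer, and the insertion point are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ScaleShift(nn.Module):
    """Learnable per-channel affine map y = gamma * x + beta.
    Sketch of the scale-and-shift idea; not the paper's code."""
    def __init__(self, dim: int):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(dim))   # scale, identity init
        self.beta = nn.Parameter(torch.zeros(dim))   # shift, zero init

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gamma + self.beta            # broadcasts over batch

# Hypothetical usage: freeze a pre-trained layer, train only ScaleShift.
backbone = nn.Linear(768, 768)        # stand-in for a pre-trained block
for p in backbone.parameters():
    p.requires_grad = False           # backbone stays frozen
tuner = ScaleShift(768)               # only 2 * 768 trainable parameters

y = tuner(backbone(torch.randn(4, 768)))
```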

Deep model reassembly

X Yang, D Zhou, S Liu, J Ye… - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
In this paper, we explore a novel knowledge-transfer task, termed Deep Model
Reassembly (DeRy), for general-purpose model reuse. Given a collection of heterogeneous …
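
DeRy's contribution is how to partition heterogeneous pre-trained networks into reusable blocks and search for good assemblies; the sketch below illustrates only the mechanical endpoint, stitching two frozen donor blocks with a small trainable adapter. All shapes and module names are invented for illustration.

```python
import torch
import torch.nn as nn

# Stand-ins for blocks cut out of two different pre-trained networks
# with incompatible widths (64 vs. 128 features).
donor_a = nn.Sequential(nn.Linear(32, 64), nn.ReLU())   # early block
donor_b = nn.Sequential(nn.Linear(128, 10))             # late block
for p in list(donor_a.parameters()) + list(donor_b.parameters()):
    p.requires_grad = False               # reused blocks stay frozen

adapter = nn.Linear(64, 128)              # trainable bridge between blocks
reassembled = nn.Sequential(donor_a, adapter, donor_b)
out = reassembled(torch.randn(4, 32))     # (4, 10)
```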

Dataset distillation via factorization

S Liu, K Wang, X Yang, J Ye… - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
In this paper, we study dataset distillation (DD) from a novel perspective and introduce
a dataset factorization approach, termed HaBa, which is a plug-and-play …
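
HaBa factorizes a distilled dataset into shared bases and small hallucination networks, so pairing every base with every hallucinator multiplies the number of synthetic samples. The sketch below shows that combinatorial decoding; the single-conv hallucinator and all sizes are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

n_bases, n_hall = 10, 5
bases = nn.Parameter(torch.randn(n_bases, 3, 32, 32))   # shared bases
hallucinators = nn.ModuleList(
    nn.Conv2d(3, 3, kernel_size=3, padding=1) for _ in range(n_hall)
)

def materialize() -> torch.Tensor:
    # Decode every (base, hallucinator) pair into a synthetic image.
    return torch.cat([h(bases) for h in hallucinators], dim=0)

synthetic = materialize()   # 10 bases x 5 nets = 50 distilled images
```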

Factorizing knowledge in neural networks

X Yang, J Ye, X Wang - European Conference on Computer Vision, 2022 - Springer
In this paper, we explore a novel and ambitious knowledge-transfer task, termed Knowledge
Factorization (KF). The core idea of KF lies in the modularization and assemblability of …
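
As a rough picture of what "factorizing" a network means here: distill one frozen teacher into a shared common-knowledge network plus per-task networks that can later be assembled independently. The sketch below is only that picture; the architectures, the two-task setup, and the plain L2 distillation loss are assumptions, not KF's actual objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Linear(16, 32)                 # stand-in pre-trained teacher
for p in teacher.parameters():
    p.requires_grad = False

common = nn.Linear(16, 32)                  # task-agnostic factor network
task_nets = nn.ModuleList(nn.Linear(32, 32) for _ in range(2))

x = torch.randn(8, 16)
with torch.no_grad():
    target = teacher(x)                     # teacher knowledge to factorize
loss = sum(F.mse_loss(t(common(x)), target) for t in task_nets)
loss.backward()                             # trains the factor networks only
```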

EqMotion: Equivariant multi-agent motion prediction with invariant interaction reasoning

C Xu, RT Tan, Y Tan, S Chen… - Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023 - openaccess.thecvf.com
Learning to predict agent motions with relationship reasoning is important for many
applications. In motion prediction tasks, maintaining motion equivariance under Euclidean …
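
The equivariance the snippet refers to is a concrete, testable property: rotating an input trajectory must rotate the predicted trajectory identically, f(x R^T) = f(x) R^T. The toy predictor below (a shared temporal mixing matrix, not EqMotion's network) satisfies it by construction:

```python
import math
import torch

T_in, T_out = 8, 4
W = torch.randn(T_out, T_in)        # mixes time steps, shared across x/y

def predict(traj: torch.Tensor) -> torch.Tensor:
    # traj: (T_in, 2) past positions -> (T_out, 2) future positions
    return W @ traj

R = torch.tensor([[math.cos(0.7), -math.sin(0.7)],
                  [math.sin(0.7),  math.cos(0.7)]])   # planar rotation

x = torch.randn(T_in, 2)
lhs = predict(x @ R.T)              # rotate, then predict
rhs = predict(x) @ R.T              # predict, then rotate
print(torch.allclose(lhs, rhs, atol=1e-5))            # True: equivariant
```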

Disentangled representation learning

X Wang, H Chen, S Tang, Z Wu, W Zhu - arXiv preprint arXiv:2211.11695, 2022 - arxiv.org
Disentangled Representation Learning (DRL) aims to learn a model capable of identifying
and disentangling the underlying factors hidden in the observable data in representation …
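
One canonical objective in this literature is the β-VAE, which weights the KL term against a factorized prior to pressure latent dimensions toward independence. A minimal sketch of that standard loss (a representative method, not specific to this survey):

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta: float = 4.0):
    """Reconstruction + beta-weighted KL(N(mu, sigma^2) || N(0, I)).
    beta > 1 encourages independent (disentangled) latent factors."""
    recon = F.mse_loss(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

# Toy call with the posterior equal to the prior (both terms are ~ 0).
x = torch.randn(8, 784)
loss = beta_vae_loss(x, x.clone(), torch.zeros(8, 10), torch.zeros(8, 10))
```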

Debiasing graph neural networks via learning disentangled causal substructure

S Fan, X Wang, Y Mo, C Shi… - Advances in Neural …, 2022 - proceedings.neurips.cc
Most Graph Neural Networks (GNNs) predict the labels of unseen graphs by
learning the correlation between the input graphs and labels. However, by presenting a …
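
The paper's premise is that each input graph mixes a causal substructure (which determines the label) with a bias substructure (which merely correlates with it). The snippet below sketches only the splitting primitive, a learned soft edge mask over a dense adjacency matrix; the scorer and the dense representation are assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

n = 6
A = (torch.rand(n, n) < 0.3).float()      # toy graph as dense adjacency
edge_scores = nn.Parameter(torch.zeros(n, n))

mask = torch.sigmoid(edge_scores) * A     # per-edge causal probability
A_causal = mask                           # substructure deemed label-causing
A_bias = A - mask                         # complementary biased substructure
```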

TCGL: Temporal contrastive graph for self-supervised video representation learning

Y Liu, K Wang, L Liu, H Lan, L Lin - IEEE Transactions on Image Processing, 2022 - ieeexplore.ieee.org
Video self-supervised learning is a challenging task that requires significant expressive
power from the model to leverage rich spatial-temporal knowledge and generate effective …
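
TCGL's contribution is the temporal contrastive graph it builds over video snippets; the objective such methods optimize is typically a variant of InfoNCE. The sketch below shows that generic loss, with embeddings and temperature as placeholders rather than TCGL's exact formulation.

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1):
    """Row i of z1 and z2 embed two views of the same clip (positives);
    all other rows serve as negatives. Generic loss, not TCGL's exact one."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                 # (B, B) cosine similarities
    labels = torch.arange(z1.size(0))          # positives on the diagonal
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(16, 128), torch.randn(16, 128))
```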

Dynamic graph neural networks under spatio-temporal distribution shift

Z Zhang, X Wang, Z Zhang, H Li… - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
Dynamic graph neural networks (DyGNNs) have demonstrated powerful predictive abilities
by exploiting graph structural and temporal dynamics. However, the existing DyGNNs fail to …
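
A generic way to encode "predict reliably under spatio-temporal shift" (a stand-in for the intuition, not this paper's method) is to penalize how unevenly the model performs across time-based training environments, in the spirit of V-REx:

```python
import torch

def vrex_penalty(env_losses: torch.Tensor) -> torch.Tensor:
    """Variance of per-environment risks: zero when the model does equally
    well in every environment, e.g. every time window of a dynamic graph."""
    return env_losses.var()

# Hypothetical per-window losses from three training periods.
losses = torch.tensor([0.42, 0.57, 0.39])
total = losses.mean() + 1.0 * vrex_penalty(losses)
```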

Deep graph reprogramming

Y Jing, C Yuan, L Ju, Y Yang… - Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023 - openaccess.thecvf.com
In this paper, we explore a novel model-reuse task tailored for graph neural networks
(GNNs), termed "deep graph reprogramming". We strive to reprogram a pre-trained GNN …
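
In the reprogramming paradigm the pre-trained weights are never updated; the new task is absorbed by a learned transformation of the inputs plus a mapping of the old outputs onto new labels. The sketch below shows that recipe on a frozen stand-in MLP; the additive feature perturbation and output remapping are generic reprogramming ingredients, not the paper's specific paradigms.

```python
import torch
import torch.nn as nn

frozen_gnn = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 7))
for p in frozen_gnn.parameters():
    p.requires_grad = False              # pre-trained model stays untouched

delta = nn.Parameter(torch.zeros(16))    # learned input-side perturbation
label_map = nn.Linear(7, 3)              # 7 source classes -> 3 target classes

x = torch.randn(100, 16)                 # node features of a downstream graph
new_logits = label_map(frozen_gnn(x + delta))
```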