| Title | Authors | Venue | Citations | Year |
|---|---|---|---|---|
| Chaos is a Ladder: A New Theoretical Understanding of Contrastive Learning via Augmentation Overlap | Y Wang*, Q Zhang*, Y Wang, J Yang, Z Lin | ICLR 2022 | 105 | 2022 |
| Dissecting the Diffusion Process in Linear Graph Convolutional Networks | Y Wang, Y Wang, J Yang, Z Lin | NeurIPS 2021 | 67 | 2021 |
| Jailbreak and Guard Aligned Language Models with Only Few In-Context Demonstrations | Z Wei, Y Wang, Y Wang | arXiv preprint arXiv:2310.06387 | 59 | 2023 |
| How Mask Matters: Towards Theoretical Understandings of Masked Autoencoders | Q Zhang*, Y Wang*, Y Wang | NeurIPS 2022 (Spotlight) | 39 | 2022 |
| When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture | Y Mo, D Wu, Y Wang, Y Guo, Y Wang | NeurIPS 2022 (Spotlight) | 38 | 2022 |
| Optimization-Induced Graph Implicit Nonlinear Diffusion | Q Chen, Y Wang, Y Wang, J Yang, Z Lin | ICML 2022 | 30 | 2022 |
| Residual Relaxation for Multi-View Representation Learning | Y Wang, Z Geng, F Jiang, C Li, Y Wang, J Yang, Z Lin | NeurIPS 2021 | 30 | 2021 |
| CFA: Class-wise Calibrated Fair Adversarial Training | Z Wei, Y Wang, Y Guo, Y Wang | CVPR 2023 | 27 | 2023 |
| ContraNorm: A Contrastive Learning Perspective on Oversmoothing and Beyond | X Guo*, Y Wang*, T Du*, Y Wang | ICLR 2023 | 25 | 2023 |
| G²CN: Graph Gaussian Convolution Networks with Concentrated Graph Filters | M Li, X Guo, Y Wang, Y Wang, Z Lin | ICML 2022 | 20 | 2022 |
| Towards a Unified Theoretical Understanding of Non-contrastive Learning via Rank Differential Mechanism | Z Zhuo*, Y Wang*, J Ma, Y Wang | ICLR 2023 | 15 | 2023 |
| A Message Passing Perspective on Learning Dynamics of Contrastive Learning | Y Wang*, Q Zhang*, T Du, J Yang, Z Lin, Y Wang | ICLR 2023 | 11 | 2023 |
| Rethinking the Effect of Data Augmentation in Adversarial Contrastive Learning | R Luo*, Y Wang*, Y Wang | ICLR 2023 | 10 | 2023 |
| Improving Out-of-Distribution Generalization by Adversarial Training with Structured Priors | Q Wang*, Y Wang*, H Zhu, Y Wang | NeurIPS 2022 (Spotlight) | 9 | 2022 |
| Fooling Adversarial Training with Inducing Noise | Z Wang*, Y Wang*, Y Wang | arXiv preprint arXiv:2111.10130 | 9 | 2021 |
| Balance, Imbalance, and Rebalance: Understanding Robust Overfitting from a Minimax Game Perspective | Y Wang*, L Li*, J Yang, Z Lin, Y Wang | NeurIPS 2023 | 8 | 2023 |
| Rethinking Weak Supervision in Helping Contrastive Learning | J Cui*, W Huang*, Y Wang*, Y Wang | ICML 2023 | 8 | 2023 |
| A Unified Contrastive Energy-based Model for Understanding the Generative Ability of Adversarial Training | Y Wang, Y Wang, J Yang, Z Lin | ICLR 2022 | 8 | 2022 |
| On the Generalization of Multi-modal Contrastive Learning | Q Zhang*, Y Wang*, Y Wang | ICML 2023 | 7 | 2023 |
| Train Once, and Decode As You Like | C Tian, Y Wang, H Cheng, Y Lian, Z Zhang | COLING 2020 | 7 | 2020 |

(* marks equal contribution.)