| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| Bootstrap Generalization Ability from Loss Landscape Perspective | H Chen, S Shao, Z Wang, Z Shang, J Chen, X Ji, X Wu | ECCV workshop, 500-517 | 17 | 2022 |
| Catch-up distillation: You only need to train once for accelerating sampling | S Shao, X Dai, S Yin, L Li, H Chen, Y Hu | arXiv preprint arXiv:2305.10769 | 14 | 2023 |
| A Bi-Stream hybrid model with MLPBlocks and self-attention mechanism for EEG-based emotion recognition | W Li, Y Tian, B Hou, J Dong, S Shao, A Song | Biomedical Signal Processing and Control 86, 105223 | 11 | 2023 |
| Generalized Large-Scale Data Condensation via Various Backbone and Statistical Matching | S Shao, Z Yin, M Zhou, X Zhang, Z Shen | CVPR (highlight) | 8 | 2023 |
| DiffuseExpand: Expanding dataset for 2D medical image segmentation using diffusion models | S Shao, X Yuan, Z Huang, Z Qiu, S Wang, K Zhou | IJCAI workshop | 8 | 2023 |
| Your diffusion model is secretly a certifiably robust classifier | H Chen, Y Dong, S Shao, Z Hao, X Yang, H Su, J Zhu | arXiv preprint arXiv:2402.02316 | 7 | 2024 |
| BiSMSM: A Hybrid MLP-Based Model of Global Self-Attention Processes for EEG-Based Emotion Recognition | W Li, Y Tian, B Hou, J Dong, S Shao | ICANN, 37-48 | 7 | 2022 |
| What Role Does Data Augmentation Play in Knowledge Distillation? | W Li, S Shao, W Liu, Z Qiu, Z Zhu, W Huan | ACCV, 2204-2220 | 7 | 2022 |
| Attention-based intrinsic reward mixing network for credit assignment in multi-agent reinforcement learning | W Li, W Liu, S Shao, S Huang, A Song | IEEE Transactions on Games | 6 | 2023 |
| Hybrid knowledge distillation from intermediate layers for efficient Single Image Super-Resolution | J Xie, L Gong, S Shao, S Lin, L Luo | Neurocomputing 554, 126592 | 4 | 2023 |
| MS-FRAN: a novel multi-source domain adaptation method for EEG-based emotion recognition | W Li, W Huan, S Shao, B Hou, A Song | IEEE Journal of Biomedical and Health Informatics | 3 | 2023 |
| AIIR-MIX: Multi-Agent Reinforcement Learning Meets Attention Individual Intrinsic Reward Mixing Network | W Li, W Liu, S Shao, S Huang | ACML, 579-594 | 3 | 2023 |
| Teaching What You Should Teach: A Data-Based Distillation Method | S Shao, H Chen, Z Huang, L Gong, S Wang, X Wu | IJCAI | 3 | 2022 |
| Multi-perspective analysis on data augmentation in knowledge distillation | W Li, S Shao, Z Qiu, A Song | Neurocomputing 583, 127516 | 2 | 2024 |
| Self-supervised Dataset Distillation: A Good Compression Is All You Need | M Zhou, Z Yin, S Shao, Z Shen | arXiv preprint arXiv:2404.07976 | 2 | 2024 |
| Black-box Source-free Domain Adaptation via Two-stage Knowledge Distillation | S Wang, D Zhang, Z Yan, S Shao, R Li | IJCAI workshop | 2 | 2023 |
| Elucidating the Design Space of Dataset Condensation | S Shao, Z Zhou, H Chen, Z Shen | arXiv preprint arXiv:2404.13733 | 1 | 2024 |
| Precise Knowledge Transfer via Flow Matching | S Shao, Z Shen, L Gong, H Chen, X Dai | arXiv preprint arXiv:2402.02012 | 1 | 2024 |
| Spatial-Temporal Constraint Learning for Cross-Subject EEG-Based Emotion Recognition | W Li, B Hou, S Shao, W Huan, Y Tian | IJCNN, 1-8 | 1 | 2023 |
| Generalized Contrastive Partial Label Learning for Cross-Subject EEG-Based Emotion Recognition | W Li, L Fan, S Shao, A Song | IEEE Transactions on Instrumentation and Measurement | | 2024 |