S. Jie, Z.-H. Deng, S. Chen, Z. Jin. "Convolutional bypasses are better vision transformer adapters." arXiv preprint arXiv:2207.07039, 2022. Cited by 128.
S. Jie, Z.-H. Deng. "FacT: Factor-tuning for lightweight adaptation on vision transformer." AAAI Conference on Artificial Intelligence (AAAI), 37(1), 1060-1068, 2023. Cited by 98.
S. Jie, H. Wang, Z.-H. Deng. "Revisiting the parameter efficiency of adapters from the perspective of precision redundancy." IEEE/CVF International Conference on Computer Vision (ICCV), 17217-17226, 2023. Cited by 30.
S. Jie, Z.-H. Deng, Z. Li. "Alleviating representational shift for continual fine-tuning." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2022. Cited by 13.
S. Jie, Y. Tang, N. Ding, Z.-H. Deng, K. Han, Y. Wang. "Memory-space visual prompting for efficient vision-language fine-tuning." International Conference on Machine Learning (ICML), 2024. Cited by 5.
Z. Li, S. Jie, Z.-H. Deng. "Detachedly learn a classifier for class-incremental learning." arXiv preprint arXiv:2302.11730, 2023. Cited by 2.
S. Jie, Y. Tang, J. Guo, Z.-H. Deng, K. Han, Y. Wang. "Token compensator: Altering inference cost of vision transformer without re-tuning." European Conference on Computer Vision (ECCV), 2024. Cited by 1.
H. Wang, S. Jie, Z.-H. Deng. "Focus your attention when few-shot classification." Advances in Neural Information Processing Systems (NeurIPS), 36, 2024.