Shibo Jie
Title · Cited by · Year
Convolutional bypasses are better vision transformer adapters
S Jie, ZH Deng, S Chen, Z Jin
arXiv preprint arXiv:2207.07039, 2022
Cited by 128 · 2022
FacT: Factor-Tuning for Lightweight Adaptation on Vision Transformer
S Jie, ZH Deng
AAAI conference on artificial intelligence (AAAI) 37 (1), 1060-1068, 2023
Cited by 98 · 2023
Revisiting the parameter efficiency of adapters from the perspective of precision redundancy
S Jie, H Wang, ZH Deng
IEEE/CVF International Conference on Computer Vision (ICCV), 17217-17226, 2023
Cited by 30 · 2023
Alleviating representational shift for continual fine-tuning
S Jie, ZH Deng, Z Li
IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops …, 2022
Cited by 13 · 2022
Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning
S Jie, Y Tang, N Ding, ZH Deng, K Han, Y Wang
International Conference on Machine Learning (ICML), 2024
Cited by 5 · 2024
Detachedly Learn a Classifier for Class-Incremental Learning
Z Li, S Jie, ZH Deng
arXiv preprint arXiv:2302.11730, 2023
Cited by 2 · 2023
Token Compensator: Altering Inference Cost of Vision Transformer without Re-Tuning
S Jie, Y Tang, J Guo, ZH Deng, K Han, Y Wang
European Conference on Computer Vision (ECCV), 2024
Cited by 1 · 2024
Focus your attention when few-shot classification
H Wang, S Jie, Z Deng
Advances in Neural Information Processing Systems (NeurIPS) 36, 2024
2024