| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Rectifying the shortcut learning of background for few-shot learning | X Luo, L Wei, L Wen, J Yang, L Xie, Z Xu, Q Tian | NeurIPS 2021 | 96 | 2021 |
| Channel importance matters in few-shot image classification | X Luo, J Xu, Z Xu | ICML 2022 | 41 | 2022 |
| A closer look at few-shot classification again | X Luo*, H Wu*, J Zhang, L Gao, J Xu, J Song | ICML 2023 | 31 | 2023 |
| Boosting few-shot classification with view-learnable contrastive learning | X Luo*, Y Chen*, L Wen, L Pan, Z Xu | 2021 IEEE International Conference on Multimedia and Expo (ICME), 1-6 | 31 | 2021 |
| DETA: Denoised task adaptation for few-shot learning | J Zhang, L Gao, X Luo, H Shen, J Song | ICCV 2023 | 17 | 2023 |
| Alleviating the sample selection bias in few-shot learning by removing projection to the centroid | J Xu, X Luo, X Pan, Y Li, W Pei, Z Xu | NeurIPS 2022 | 12 | 2022 |
| Lumina-T2X: Transforming Text into Any Modality, Resolution, and Duration via Flow-based Large Diffusion Transformers | P Gao*, L Zhuo*, D Liu*, R Du*, X Luo*, L Qiu*, Y Zhang, C Lin, R Huang, ... | arXiv preprint arXiv:2405.05945 | 10* | 2024 |
| Exploring category-correlated feature for few-shot image classification | J Xu, X Pan, X Luo, W Pei, Z Xu | arXiv preprint arXiv:2112.07224 | 6 | 2021 |
| Concatenated tensor networks for deep multi-task learning | M Wang, Z Su, X Luo, Y Pan, S Zheng, Z Xu | Neural Information Processing: 27th International Conference, ICONIP 2020 | 5 | 2020 |
| 3DAxiesPrompts: Unleashing the 3D spatial task capabilities of GPT-4V | D Liu, X Dong, R Zhang, X Luo, P Gao, X Huang, Y Gong, Z Wang | arXiv preprint arXiv:2312.09738 | 4 | 2023 |
| CoIN: A Benchmark of Continual Instruction tuNing for Multimodel Large Language Model | C Chen, J Zhu, X Luo, H Shen, L Gao, J Song | arXiv preprint arXiv:2403.08350 | 2 | 2024 |
| Less is More: On the Feature Redundancy of Pretrained Models When Transferring to Few-shot Tasks | X Luo, D Zou, L Gao, Z Xu, J Song | arXiv preprint arXiv:2310.03843 | 1 | 2023 |
| Lumina-Next: Making Lumina-T2X Stronger and Faster with Next-DiT | L Zhuo, R Du, H Xiao, Y Li, D Liu, R Huang, W Liu, L Zhao, FY Wang, ... | arXiv preprint arXiv:2406.18583 | | 2024 |