LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention. R Zhang, J Han, C Liu, P Gao, A Zhou, X Hu, S Yan, P Lu, H Li, Y Qiao. arXiv preprint arXiv:2303.16199, 2023. Cited by 428.
Prompt, Generate, then Cache: Cascade of Foundation Models Makes Strong Few-shot Learners. R Zhang, X Hu, B Li, S Huang, H Deng, Y Qiao, P Gao, H Li. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. Cited by 103.