A Survey of Resource-Efficient LLM and Multimodal Foundation Models

M Xu, W Yin, D Cai, R Yi, D Xu, Q Wang, B Wu… - arXiv preprint arXiv …, 2024 - arxiv.org
Large foundation models, including large language models (LLMs), vision transformers (ViTs), diffusion models, and LLM-based multimodal models, are revolutionizing the entire machine …

Attention Prompt Tuning: Parameter-efficient Adaptation of Pre-trained Models for Spatiotemporal Modeling

WGC Bandara, VM Patel - arXiv preprint arXiv:2403.06978, 2024 - arxiv.org
In this paper, we introduce Attention Prompt Tuning (APT), a computationally efficient variant of prompt tuning for video-based applications such as action recognition. Prompt tuning …
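
The snippet above introduces APT as a lighter-weight variant of prompt tuning. For orientation only, a minimal sketch of plain visual prompt tuning (not APT itself) is shown below; the backbone, embedding dimension, and prompt count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PromptTunedBackbone(nn.Module):
    """Generic visual prompt tuning sketch (not the paper's APT method):
    a frozen transformer backbone receives extra learnable prompt tokens
    prepended to its input token sequence; only the prompts are trained."""

    def __init__(self, backbone: nn.Module, embed_dim: int, num_prompts: int = 10):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():   # freeze all pre-trained weights
            p.requires_grad = False
        # the only trainable parameters: a small bank of prompt embeddings
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, embed_dim) * 0.02)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, embed_dim) patch/tube embeddings
        prompts = self.prompts.expand(tokens.size(0), -1, -1)
        return self.backbone(torch.cat([prompts, tokens], dim=1))
```

Only `self.prompts` receives gradients, so the optimizer tracks a handful of parameters instead of the full backbone; how APT further reduces the computational cost for video inputs is detailed in the paper itself.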

Towards Multi-modal Transformers in Federated Learning

G Sun, M Mendieta, A Dutta, X Li, C Chen - arXiv preprint arXiv …, 2024 - arxiv.org
Multi-modal transformers mark significant progress in different domains, but siloed high-quality data hinders their further improvement. To remedy this, federated learning (FL) has …

Learn What You Need in Personalized Federated Learning

K Lv, R Ye, X Huang, J Yang, S Chen - arXiv preprint arXiv:2401.08327, 2024 - arxiv.org
Personalized federated learning aims to address data heterogeneity across local clients in
federated learning. However, current methods blindly incorporate either full model …
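
The snippet contrasts personalizing the full model with personalizing only selected parts of it. A generic sketch of that idea, splitting a client's parameters into a globally shared part and a locally kept part, follows; the name-based selection rule and the `head` key are illustrative assumptions, not the paper's criterion.

```python
from typing import Dict, Tuple
import torch

def split_state_dict(state: Dict[str, torch.Tensor],
                     personal_keys: Tuple[str, ...] = ("head",)):
    """Split a client model's parameters into a globally shared part and a
    locally kept (personalized) part. The rule used here, keeping any tensor
    whose name contains one of `personal_keys` local, is purely illustrative."""
    shared, personal = {}, {}
    for name, tensor in state.items():
        (personal if any(k in name for k in personal_keys) else shared)[name] = tensor
    return shared, personal

def merge_for_next_round(global_shared: Dict[str, torch.Tensor],
                         local_personal: Dict[str, torch.Tensor]):
    """Rebuild the client model: take the server-aggregated shared weights and
    re-attach the client's own personalized weights."""
    merged = dict(global_shared)
    merged.update(local_personal)
    return merged
```

In such a scheme, each round the client uploads only `shared`, receives the aggregated version back, and calls `merge_for_next_round` before resuming local training.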

Personalized Federated Aggregation Algorithm based on Local Attention Mechanism

Y Zeng, Y Yang, T Yao, WW He - 2023 IEEE 14th International …, 2023 - ieeexplore.ieee.org
Federated learning is a distributed learning approach that balances data privacy and
collaborative learning. To address the impact of non-IID (non-Independently and Identically …
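
As background for the aggregation step this entry revisits, a plain FedAvg-style weighted average over client models is sketched below; the paper replaces such fixed, dataset-size weights with weights derived from a local attention mechanism, which is not reproduced here.

```python
from typing import Dict, List
import torch

def weighted_average(client_states: List[Dict[str, torch.Tensor]],
                     client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    """FedAvg-style server aggregation: average each parameter tensor across
    clients, weighted by local dataset size. A personalized scheme would
    compute per-client weights instead of one shared set."""
    total = float(sum(client_sizes))
    aggregated = {}
    for name in client_states[0]:
        aggregated[name] = sum(
            (n / total) * state[name] for state, n in zip(client_states, client_sizes)
        )
    return aggregated
```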
