IncreLoRA: Incremental parameter allocation method for parameter-efficient fine-tuning

F Zhang, L Li, J Chen, Z Jiang, B Wang… - arXiv preprint arXiv:2308.12043, 2023 - arxiv.org
With the increasing size of pre-trained language models (PLMs), fine-tuning all the parameters in the model is inefficient, especially when there are many downstream tasks, which incurs significant training and storage costs. Many parameter-efficient fine-tuning (PEFT) approaches have been proposed, among which Low-Rank Adaptation (LoRA) is a representative approach that injects trainable rank decomposition matrices into every target module. Yet LoRA ignores the varying importance of parameters across different modules. To address this problem, many works have been proposed to prune the parameters of LoRA. However, under limited training conditions, the upper bound of the rank of the pruned parameter matrices is still constrained by the preset values. We therefore propose IncreLoRA, an incremental parameter allocation method that adaptively adds trainable parameters during training based on the importance scores of each module. Unlike pruning-based methods, this approach is not limited by the initial number of trainable parameters, and each parameter matrix has a higher rank upper bound for the same training overhead. We conduct extensive experiments on GLUE to demonstrate the effectiveness of IncreLoRA. The results show that our method achieves higher parameter efficiency, especially in low-resource settings, where it significantly outperforms the baselines. Our code is publicly available.
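To make the core idea concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of a LoRA-adapted linear layer whose rank can be grown during training, plus a simple gradient-based importance proxy for deciding which modules receive additional rank. The names GrowableLoRALinear, grow_rank, and importance_score, as well as the specific score, are assumptions for illustration; the paper defines its own allocation criterion.

import torch
import torch.nn as nn

class GrowableLoRALinear(nn.Module):
    # Frozen pre-trained linear layer plus a low-rank update delta_W = B @ A,
    # where A has shape (r, in_features) and B has shape (out_features, r).
    def __init__(self, base: nn.Linear, init_rank: int = 1, scaling: float = 1.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # pre-trained weights stay frozen
        in_f, out_f = base.in_features, base.out_features
        self.scaling = scaling
        self.A = nn.Parameter(torch.randn(init_rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, init_rank))  # zero init: no change at start

    def forward(self, x):
        # base output plus the low-rank correction
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

    @torch.no_grad()
    def grow_rank(self, extra: int = 1):
        # Append `extra` rank-1 components; new B columns start at zero so the
        # layer's output is unchanged at the moment of growth.
        in_f, out_f = self.base.in_features, self.base.out_features
        new_A = torch.randn(extra, in_f, device=self.A.device) * 0.01
        new_B = torch.zeros(out_f, extra, device=self.B.device)
        self.A = nn.Parameter(torch.cat([self.A.data, new_A], dim=0))
        self.B = nn.Parameter(torch.cat([self.B.data, new_B], dim=1))

def importance_score(layer: GrowableLoRALinear) -> float:
    # Illustrative sensitivity proxy: average |parameter * gradient| over the
    # LoRA factors, computed after a backward pass.
    score = 0.0
    for p in (layer.A, layer.B):
        if p.grad is not None:
            score += (p.data * p.grad).abs().mean().item()
    return score

In a training loop, one would periodically rank all adapted modules by importance_score and call grow_rank on the top-scoring ones, up to a total parameter budget; newly created parameters would also need to be registered with the optimizer, a detail omitted from this sketch.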