Merging Vision Transformers from Different Tasks and Domains

P Ye, C Huang, M Shen, T Chen, Y Huang, Y Zhang, W Ouyang
arXiv preprint arXiv:2312.16240, 2023
This work aims to merge multiple Vision Transformers (ViTs) trained on different tasks (i.e., datasets with different object categories) or domains (i.e., datasets with the same categories but different environments) into one unified model that retains good performance on each task or domain. Previous model merging work focuses on either CNNs or NLP models, leaving ViT merging unexplored. To fill this gap, we first find that existing model merging methods cannot handle the merging of whole ViT models well and still leave room for improvement. To enable merging of the whole ViT, we propose a simple-but-effective gating network that can both merge all kinds of layers (e.g., Embedding, Norm, Attention, and MLP) and select the suitable classifier. Specifically, the gating network is trained on unlabeled datasets from all the tasks (domains) and predicts the probability that an input belongs to each task (domain); these probabilities guide how the models are merged during inference. To further boost the performance of the merged model, especially as the merging task becomes harder, we design a novel metric of model weight similarity and use it to realize controllable and combined weight merging. Comprehensive experiments on a range of newly established benchmarks validate the superiority of the proposed ViT merging framework across different tasks and domains. Our method can even merge more than 10 ViT models from different vision tasks with a negligible effect on the performance of each task.
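The two ingredients the abstract describes (a task-probability gate and a weight-similarity-controlled merge) lend themselves to a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the paper's implementation: the names (TaskGate, weight_similarity, merge_state_dicts) are invented for illustration, and cosine similarity over flattened weights is an assumed stand-in, since the abstract does not specify the paper's exact similarity metric or gate architecture.

```python
# Hypothetical sketch of gating-based ViT merging as described in the abstract.
# All names and the cosine-similarity metric are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TaskGate(nn.Module):
    """Predicts the probability that an input belongs to each of the K source
    tasks/domains; trainable on unlabeled data from all tasks (e.g., using
    task identity as the pseudo-label)."""

    def __init__(self, embed_dim: int, num_tasks: int):
        super().__init__()
        self.head = nn.Linear(embed_dim, num_tasks)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.head(feats).softmax(dim=-1)  # (batch, K)


def weight_similarity(w_a: torch.Tensor, w_b: torch.Tensor) -> float:
    """Cosine similarity between flattened weights -- one plausible stand-in
    for the paper's model-weight similarity metric."""
    return F.cosine_similarity(
        w_a.flatten().float(), w_b.flatten().float(), dim=0
    ).item()


@torch.no_grad()
def merge_state_dicts(state_dicts, task_probs, sim_threshold=0.5):
    """Combine the K source ViTs' parameters using the gate's per-task
    probabilities (averaged over the batch). Layers whose weights are
    dissimilar across models fall back to the most probable task's weights,
    a crude form of 'controllable' merging."""
    coeffs = task_probs.mean(dim=0)          # shape (K,)
    top_task = int(coeffs.argmax())
    merged = {}
    for name in state_dicts[0]:
        tensors = [sd[name] for sd in state_dicts]
        min_sim = min(
            (weight_similarity(tensors[0], t) for t in tensors[1:]),
            default=1.0,
        )
        if min_sim < sim_threshold:
            # Too dissimilar to blend safely: keep the likeliest task's weights.
            merged[name] = tensors[top_task].clone()
        else:
            # Similar enough: convex combination weighted by the gate.
            merged[name] = sum(c * t for c, t in zip(coeffs, tensors))
    return merged
```

Per the abstract, the gate's output would also serve to select the appropriate task-specific classifier head at inference time; the similarity threshold here is one simple way to make the degree of merging controllable.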