Generalized logit adjustment: Calibrating fine-tuned models by removing label bias in foundation models

B Zhu, K Tang, Q Sun, H Zhang - Advances in Neural …, 2024 - proceedings.neurips.cc
Foundation models like CLIP allow zero-shot transfer on various tasks without additional
training data. Yet, the zero-shot performance is less competitive than a fully supervised one …
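A minimal sketch of the logit-adjustment idea named in the title above, assuming a class-prior estimate is available; the function name, variable names, and temperature tau are hypothetical, and the paper's generalized variant targets the label bias inherited from the foundation model's pretraining data rather than a known prior:

import torch

def adjust_logits(logits, class_prior, tau=1.0):
    # logits: (batch, num_classes) zero-shot scores, e.g. from CLIP.
    # class_prior: (num_classes,) estimated label distribution (sums to 1).
    # Subtracting the scaled log-prior removes the bias toward
    # frequent classes before taking the argmax.
    return logits - tau * torch.log(class_prior)

# Usage (inputs hypothetical):
# predictions = adjust_logits(logits, prior).argmax(dim=-1)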

Enhancing CLIP with CLIP: Exploring pseudolabeling for limited-label prompt tuning

C Menghini, A Delworth, S Bach - Advances in Neural …, 2023 - proceedings.neurips.cc
Fine-tuning vision-language models (VLMs) like CLIP to downstream tasks is often
necessary to optimize their performance. However, a major obstacle is the limited availability …

Align your prompts: Test-time prompting with distribution alignment for zero-shot generalization

J Abdul Samadh, MH Gani, N Hussein… - Advances in …, 2024 - proceedings.neurips.cc
The promising zero-shot generalization of vision-language models such as CLIP has led to
their adoption using prompt learning for numerous downstream tasks. Previous works have …

Zero-Shot Robustification of Zero-Shot Models

D Adila, C Shin, L Cai, F Sala - The Twelfth International …, 2024 - openreview.net
Zero-shot inference is a powerful paradigm that enables the use of large pretrained models
for downstream classification tasks without further training. However, these models are …

A universal discriminator for zero-shot generalization

H Xu, Z Lin, J Zhou, Y Zheng, Z Yang - arXiv preprint arXiv:2211.08099, 2022 - arxiv.org
Generative modeling has been the dominant approach for large-scale pretraining and zero-shot
generalization. In this work, we challenge this convention by showing that …

CLIPood: Generalizing CLIP to out-of-distributions

Y Shu, X Guo, J Wu, X Wang… - … on Machine Learning, 2023 - proceedings.mlr.press
Out-of-distribution (OOD) generalization, where the model needs to handle
distribution shifts from training, is a major challenge of machine learning. Contrastive …

Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time

M Wortsman, G Ilharco, SY Gadre… - International …, 2022 - proceedings.mlr.press
The conventional recipe for maximizing model accuracy is to (1) train multiple models with
various hyperparameters and (2) pick the individual model which performs best on a held …
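A minimal sketch of the "uniform soup" the title describes: average the weights of several checkpoints fine-tuned from the same initialization. The checkpoint paths are hypothetical, the code assumes all parameters are floating point, and the paper also proposes a greedy variant that keeps a checkpoint only if held-out accuracy improves:

import torch

def uniform_soup(state_dicts):
    # Average a list of state dicts with identical keys and shapes.
    return {
        key: torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }

# Usage (paths hypothetical):
# soup = uniform_soup([torch.load(p) for p in ("ft_a.pt", "ft_b.pt")])
# model.load_state_dict(soup)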

Zero-shot logit adjustment

D Chen, Y Shen, H Zhang, PHS Torr - arXiv preprint arXiv:2204.11822, 2022 - arxiv.org
Semantic-descriptor-based Generalized Zero-Shot Learning (GZSL) poses challenges in
recognizing novel classes in the test phase. The development of generative models enables …