Exploring Vision-Language Models for Imbalanced Learning

Y Wang, Z Yu, J Wang, Q Heng, H Chen, W Ye… - International Journal of …, 2024 - Springer
Vision-language models (VLMs) that use contrastive language-image pre-training have
shown promising zero-shot classification performance. However, their performance on …
