A data-based perspective on transfer learning

S Jain, H Salman, A Khaddaj, E Wong… - Proceedings of the …, 2023 - openaccess.thecvf.com
It is commonly believed that more pre-training data leads to better transfer learning
performance. However, recent evidence suggests that removing data from the source …

Crowdsourcing in computer vision

A Kovashka, O Russakovsky, L Fei-Fei… - … and Trends® in …, 2016 - nowpublishers.com
Computer vision systems require large amounts of manually annotated data to properly
learn challenging visual concepts. Crowdsourcing platforms offer an inexpensive method to …

Jointly optimizing 3d model fitting and fine-grained classification

YL Lin, VI Morariu, W Hsu, LS Davis - … 6-12, 2014, Proceedings, Part IV 13, 2014 - Springer
3D object modeling and fine-grained classification are often treated as separate
tasks. We propose to optimize 3D model fitting and fine-grained classification jointly …

Fine-grained image classification by exploring bipartite-graph labels

F Zhou, Y Lin - Proceedings of the IEEE conference on computer …, 2016 - cv-foundation.org
Given a food image, can a fine-grained object recognition engine tell "which restaurant,
which dish" the food belongs to? Such ultra-fine-grained image recognition is the key for …

When does bias transfer in transfer learning?

H Salman, S Jain, A Ilyas, L Engstrom, E Wong… - arXiv preprint arXiv …, 2022 - arxiv.org
Using transfer learning to adapt a pre-trained "source model" to a downstream "target task"
can dramatically increase performance with seemingly no downside. In this work, we …

Semi-supervised contrastive learning with similarity co-calibration

Y Zhang, X Zhang, J Li, RC Qiu, H Xu… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Semi-supervised learning is an effective way to leverage massive unlabeled data. In
this paper, we propose a novel training strategy, termed Semi-supervised Contrastive …

Boosting knowledge distillation via intra-class logit distribution smoothing

C Li, G Cheng, J Han - … on Circuits and Systems for Video …, 2023 - ieeexplore.ieee.org
Prior work has established a close link between knowledge distillation (KD) and label smoothing
(LS), showing that both impose regularization on model training. In this paper, we delve …

When to learn what: Model-adaptive data augmentation curriculum

C Hou, J Zhang, T Zhou - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Data augmentation (DA) is widely used to improve the generalization of neural networks by
enforcing invariance and symmetry under pre-defined transformations applied to the input …

DR-Tune: Improving Fine-tuning of Pretrained Visual Models by Distribution Regularization with Semantic Calibration

N Zhou, J Chen, D Huang - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Visual models pretrained on large-scale benchmarks encode general knowledge and
prove effective in building more powerful representations for downstream tasks. Most …

A survey of efficient fine-tuning methods for Vision-Language Models—Prompt and Adapter

J Xing, J Liu, J Wang, L Sun, X Chen, X Gu… - Computers & Graphics, 2024 - Elsevier
Vision-Language Models (VLMs) are a popular research area at the intersection of
computer vision and natural language processing (NLP). With the emergence of transformer …