The current modus operandi in adapting pre-trained models involves updating all the backbone parameters, i.e., full fine-tuning. This paper introduces Visual Prompt Tuning (VPT) …
Existing fine-tuning methods either tune all parameters of the pre-trained model (full fine-tuning), which is not efficient, or only tune the last linear layer (linear probing), which suffers …
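The two snippets above contrast full fine-tuning and linear probing with prompt tuning, where only a few learnable tokens are prepended to the input sequence of a frozen backbone. A minimal sketch of that idea, assuming a ViT-style encoder over patch embeddings (all names and dimensions here are illustrative, not taken from the papers):

```python
import numpy as np

rng = np.random.default_rng(0)
embed_dim, num_patches, num_prompts = 8, 4, 2

# Frozen, pre-trained backbone: a stand-in for the transformer encoder,
# whose weights are never updated during adaptation.
W_frozen = rng.standard_normal((embed_dim, embed_dim))

def frozen_backbone(tokens):
    # tokens: (seq_len, embed_dim) -> (seq_len, embed_dim)
    return np.tanh(tokens @ W_frozen)

# The only new trainable parameters: learnable prompt tokens prepended
# to the patch sequence (plus a task head, omitted here).
prompts = rng.standard_normal((num_prompts, embed_dim)) * 0.01

patches = rng.standard_normal((num_patches, embed_dim))
tokens = np.concatenate([prompts, patches], axis=0)
features = frozen_backbone(tokens)

print(tokens.shape)    # (6, 8)
print(features.shape)  # (6, 8)
```

The parameter savings follow directly: here only `num_prompts * embed_dim` values (plus the head) would receive gradients, versus every entry of the backbone under full fine-tuning.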
The hidden costs of artificial intelligence, from natural resources and labor to privacy and freedom. What happens when artificial intelligence saturates political life and depletes the …
In this provocative, consequential book, Couldry and Mejias theorize the dynamics of change in contemporary capitalism as grounded in a new form of data colonialism. They …
Existing image classification datasets used in computer vision tend to have a uniform distribution of images across object categories. In contrast, the natural world is heavily …
T Gebru, J Krause, Y Wang, D Chen… - Proceedings of the …, 2017 - National Acad Sciences
The United States spends more than $250 million each year on the American Community Survey (ACS), a labor-intensive door-to-door study that measures statistics relating to race …
S Li, CH Liu, Q Lin, Q Wen, L Su… - IEEE transactions on …, 2020 - ieeexplore.ieee.org
Deep domain adaptation methods have achieved appealing performance by learning transferable representations from a well-labeled source domain to a different but related …
Visual Parameter-Efficient Fine-Tuning (PEFT) has become a powerful alternative to full fine-tuning for adapting pre-trained vision models to downstream tasks, which only …
As the size of transformer-based models continues to grow, fine-tuning these large-scale pretrained vision models for new tasks has become increasingly parameter-intensive …