AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …
Recent works in self-supervised learning have advanced the state of the art by relying on the contrastive learning paradigm, which learns representations by pushing positive pairs, or …
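As a concrete illustration of the paradigm this snippet describes, here is a minimal NumPy sketch of an InfoNCE-style contrastive loss, assuming L2-normalized embeddings and a temperature hyperparameter; the function name and the toy "augmented view" data are illustrative, not taken from any of the papers listed here.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss: each anchor's positive is the
    same-index row of `positives`; all other rows act as negatives.

    anchors, positives: (N, D) arrays of L2-normalized embeddings.
    """
    # Cosine similarities between every anchor and every candidate.
    logits = anchors @ positives.T / temperature            # (N, N)
    logits -= logits.max(axis=1, keepdims=True)             # numerical stability
    # Row-wise log-softmax; the diagonal holds the positive pair.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Minimizing this pushes each positive pair together and negatives apart.
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
# Positives are noisy copies of the anchors (stand-ins for two augmented views).
p = z + 0.05 * rng.normal(size=z.shape)
p /= np.linalg.norm(p, axis=1, keepdims=True)
print(info_nce_loss(z, p))
```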
We consider unsupervised domain adaptation (UDA), where labeled data from a source domain (e.g., photos) and unlabeled data from a target domain (e.g., sketches) are used to …
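A minimal sketch of how such a UDA objective is often assembled (one common recipe, not necessarily the method of the paper above): supervised cross-entropy on the labeled source batch plus an unsupervised entropy penalty on the unlabeled target batch, written here for a linear classifier W. All names and the `lam` trade-off parameter are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def uda_objective(W, xs, ys, xt, lam=0.1):
    """Generic UDA objective sketch: cross-entropy on the labeled source
    batch (xs, ys) plus an entropy penalty on the unlabeled target batch
    xt, which encourages confident predictions on the target domain."""
    ps = softmax(xs @ W)                                    # source predictions
    src_ce = -np.mean(np.log(ps[np.arange(len(ys)), ys] + 1e-12))
    pt = softmax(xt @ W)                                    # target predictions (no labels)
    tgt_ent = -np.mean(np.sum(pt * np.log(pt + 1e-12), axis=1))
    return src_ce + lam * tgt_ent
```

Entropy minimization is only one of several target-side regularizers used in practice; adversarial feature alignment and pseudo-labeling fill the same slot in the objective.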
Z Zhang, W Chen, H Cheng, Z Li… - Advances in Neural …, 2022 - proceedings.neurips.cc
We investigate a practical domain adaptation task, called source-free unsupervised domain adaptation (SFUDA), where the source-pretrained model is adapted to the target domain without access …
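One family of SFUDA objectives, sketched below, is information maximization in the spirit of methods like SHOT (an assumption for illustration, not necessarily the method of the paper above): adapt using target data alone by making each target prediction confident while keeping the batch-level class marginal diverse.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def source_free_adaptation_loss(logits_t):
    """Information-maximization sketch for SFUDA: no source data or
    labels are touched, only logits on a target batch.

    Minimizing this makes each prediction low-entropy (confident) while
    the negative-marginal-entropy term keeps predictions from collapsing
    onto a single class."""
    p = softmax(logits_t)
    ent = -np.mean(np.sum(p * np.log(p + 1e-12), axis=1))   # per-sample entropy
    marginal = p.mean(axis=0)                               # batch class marginal
    div = np.sum(marginal * np.log(marginal + 1e-12))       # negative marginal entropy
    return ent + div
```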
Machine learning models often perform poorly on subgroups that are underrepresented in the training data. Yet little is understood about the variation in mechanisms that cause …
Self-training and contrastive learning have emerged as leading techniques for incorporating unlabeled data, both under distribution shift (unsupervised domain adaptation) and when it …
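To make the self-training half of this snippet concrete, here is a sketch of one round under the usual confidence-thresholding recipe; `model_predict` and `threshold` are assumed interfaces and parameters for illustration, not from the paper above.

```python
import numpy as np

def pseudo_label_round(model_predict, x_unlabeled, threshold=0.9):
    """One round of self-training: predict on unlabeled data, keep only
    high-confidence predictions as pseudo-labels, and return them to be
    mixed into the next supervised fit.

    model_predict: assumed to return class probabilities of shape (N, C).
    """
    probs = model_predict(x_unlabeled)
    conf = probs.max(axis=1)
    keep = conf >= threshold                  # confidence filter
    return x_unlabeled[keep], probs[keep].argmax(axis=1)
```

In the distribution-shift setting, the unlabeled pool is the target domain, so each round gradually pulls the decision boundary toward target data the model is already confident on.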
Y Jiang, V Veitch - Advances in Neural Information …, 2022 - proceedings.neurips.cc
Real-world classification problems must contend with domain shift, the (potential) mismatch between the domain where a model is deployed and the domain(s) where the training data …
Contrastive learning is a highly effective method for learning representations from unlabeled data. Recent works show that contrastive representations can transfer across domains …
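The standard way such cross-domain transfer is measured is a linear probe: freeze the (contrastively trained) encoder, fit a linear head on labeled source features, and score it on the target domain. Below is a sketch using a closed-form ridge classifier; all names are illustrative.

```python
import numpy as np

def linear_probe_transfer(f_src, y_src, f_tgt, y_tgt, l2=1e-3):
    """Linear-probe test of transfer: f_src/f_tgt are features from one
    frozen encoder on source and target data; a ridge classifier fit on
    source features is scored by accuracy on the target domain."""
    classes = np.unique(y_src)
    Y = (y_src[:, None] == classes[None, :]).astype(float)  # one-hot targets
    d = f_src.shape[1]
    # Closed-form ridge regression onto the one-hot labels.
    W = np.linalg.solve(f_src.T @ f_src + l2 * np.eye(d), f_src.T @ Y)
    return np.mean(classes[(f_tgt @ W).argmax(axis=1)] == y_tgt)
```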
The increasing reliance on ML models in high-stakes tasks has raised major concerns about fairness violations. Although there has been a surge of work that improves algorithmic …