Generative adversarial networks (GANs) have recently become a prominent research topic; however, they have been studied since 2014, and a large number of algorithms have been …
J Cha, S Chun, K Lee, HC Cho… - Advances in Neural …, 2021 - proceedings.neurips.cc
Abstract: Domain generalization (DG) methods aim to achieve generalizability to an unseen target domain by using only training data from the source domains. Although a variety of DG …
Transfer learning aims at improving the performance of target learners on target domains by transferring the knowledge contained in different but related source domains. In this way, the …
Conventional domain adaptation (DA) techniques aim to improve domain transferability by learning domain-invariant representations, while concurrently preserving the task …
The standard objective in machine learning is to train a single model for all users. However, in many learning scenarios, such as cloud computing and federated learning, it is possible …
We study how neural networks trained by gradient descent extrapolate, i.e., what they learn outside the support of the training distribution. Previous works report mixed empirical results …
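The extrapolation behavior this snippet refers to is easy to reproduce: a small ReLU network fit on a quadratic inside [-1, 1] cannot track the quadratic's growth outside that interval, since a ReLU net is piecewise linear. The following is a minimal numpy sketch (seeds, layer sizes, and the learning rate are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: y = x^2 on [-1, 1] only.
x = np.linspace(-1.0, 1.0, 64)[:, None]
y = x ** 2

# One-hidden-layer ReLU net with hand-rolled full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)      # ReLU hidden layer
    return h, h @ W2 + b2

_, pred0 = forward(x)
loss0 = np.mean((pred0 - y) ** 2)         # loss at initialization

lr = 0.05
for _ in range(3000):
    h, pred = forward(x)
    g = 2.0 * (pred - y) / len(x)         # dL/dpred for mean squared error
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = (g @ W2.T) * (h > 0)             # backprop through the ReLU
    gW1 = x.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(x)
in_range_mse = np.mean((pred - y) ** 2)

# Query points far outside the training support: the net's output grows
# at most linearly here, while the target grows quadratically.
xs = np.array([[3.0], [4.0], [5.0]])
_, out = forward(xs)
out_range_mse = np.mean((out - xs ** 2) ** 2)

print("in-range MSE:", in_range_mse)
print("out-of-range MSE:", out_range_mse)
```

The gap between the in-range and out-of-range errors is the point: good interpolation tells us little about behavior outside the support of the training distribution.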
H Wang, S Ge, Z Lipton… - Advances in Neural …, 2019 - proceedings.neurips.cc
Despite their renowned in-domain predictive power, convolutional neural networks are known to rely more on high-frequency patterns that humans deem superficial than on low …
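The high- versus low-frequency distinction in this snippet can be made concrete with a Fourier-domain split: keep coefficients inside a radius of the centered spectrum as the "low-frequency" component and the rest as "high-frequency". This is a minimal numpy sketch of such a decomposition (the radius and the toy image are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def frequency_split(img, radius):
    """Split a grayscale image into low- and high-frequency parts
    using a radial mask in the centered 2-D Fourier domain."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low_mask = dist <= radius
    low = np.fft.ifft2(np.fft.ifftshift(f * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(f * ~low_mask)).real
    return low, high

# Toy "image": a smooth horizontal gradient (low-frequency content)
# plus a fine checkerboard texture (high-frequency content).
h = w = 32
smooth = np.linspace(0.0, 1.0, w)[None, :] * np.ones((h, 1))
checker = 0.2 * (np.indices((h, w)).sum(axis=0) % 2)
img = smooth + checker

low, high = frequency_split(img, radius=4)

# The two masks partition the spectrum, so the components sum back
# to the original image up to floating-point error.
print(np.allclose(low + high, img))  # True
```

Feeding only `low` or only `high` to a classifier is the kind of probe used to ask which frequency band a network's predictions actually rely on.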
Convolutional neural network-based approaches have achieved remarkable progress in semantic segmentation. However, these approaches heavily rely on annotated data which …
X Peng, Q Bai, X Xia, Z Huang… - Proceedings of the …, 2019 - openaccess.thecvf.com
Conventional unsupervised domain adaptation (UDA) assumes that training data are sampled from a single domain. This neglects the more practical scenario where training data …