Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks. However, in many applications, it is prohibitively …
Machine learning systems generally assume that the training and testing distributions are the same. In practice, however, this assumption rarely holds, so a key requirement is to develop models that can generalize to unseen …
Modern deep neural networks suffer from performance degradation when evaluated on testing data under different distributions from training data. Domain generalization aims at …
Classic machine learning methods are built on the i.i.d. assumption that training and testing data are independent and identically distributed. However, in real scenarios, the i.i.d. …
Q Liu, C Chen, J Qin, Q Dou… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Federated learning allows distributed medical institutions to collaboratively learn a shared prediction model with privacy protection. However, at clinical deployment, the models trained in …
The goal of domain generalization algorithms is to predict well on distributions different from those seen during training. While a myriad of domain generalization algorithms exist …
Though convolutional neural networks (CNNs) have demonstrated remarkable ability in learning discriminative features, they often generalize poorly to unseen domains. Domain …
The field of meta-learning, or learning-to-learn, has seen a dramatic rise in interest in recent years. Contrary to conventional approaches to AI where tasks are solved from scratch using …
Y Iwasawa, Y Matsuo - Advances in Neural Information …, 2021 - proceedings.neurips.cc
This paper presents a new algorithm for domain generalization (DG), test-time template adjuster (T3A), aiming to robustify a model to unknown distribution shift. Unlike …