The two fields of machine learning and graphical causality arose and developed separately. However, there is now cross-pollination and increasing interest in both fields to …
We study how robust current ImageNet models are to distribution shifts arising from natural variations in datasets. Most research on robustness focuses on synthetic image …
We present a simple self-training method that achieves 88.4% top-1 accuracy on ImageNet, which is 2.0% better than the state-of-the-art model that requires 3.5B weakly labeled …
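The method described above is a teacher-student self-training scheme. Below is a minimal sketch of the generic pseudo-labeling loop such methods build on, using a toy scikit-learn setup rather than the paper's ImageNet pipeline; the model choice, data split, number of rounds, and confidence threshold are illustrative assumptions.

```python
# Minimal self-training (pseudo-labeling) sketch on toy data, not the paper's setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data: a small labeled set and a larger "unlabeled" set (labels hidden).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_lab, y_lab = X[:200], y[:200]
X_unlab = X[200:]

teacher = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

for _ in range(3):  # a few self-training rounds (assumption)
    probs = teacher.predict_proba(X_unlab)
    conf = probs.max(axis=1)
    keep = conf > 0.9                      # confidence threshold (assumption)
    pseudo_y = probs.argmax(axis=1)[keep]

    # Student trains on labeled data plus confidently pseudo-labeled data.
    X_train = np.vstack([X_lab, X_unlab[keep]])
    y_train = np.concatenate([y_lab, pseudo_y])
    student = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    teacher = student                      # student becomes the next teacher
```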
Data augmentation is a simple yet effective way to improve the robustness of deep neural networks (DNNs). Diversity and hardness are two complementary dimensions of data …
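A rough sketch of how those two dimensions can be exercised in an augmentation pipeline: sampling a random severity controls hardness, while choosing among several transforms provides diversity. The specific operations and parameter ranges here are assumptions for illustration, not the scheme the abstract proposes.

```python
# Sketch: diverse augmentations (random op choice) at sampled severities.
import numpy as np
from PIL import Image
from torchvision import transforms

def random_severity_aug():
    s = np.random.uniform(0.1, 1.0)        # sampled severity controls "hardness"
    return transforms.RandomChoice([       # random op choice provides "diversity"
        transforms.ColorJitter(brightness=s, contrast=s),
        transforms.RandomRotation(degrees=30 * s),
        transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0 * s)),
    ])

# Toy RGB image standing in for a real training example.
img = Image.fromarray(np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8))
augmented = random_severity_aug()(img)
```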
We evaluate a wide range of ImageNet models with five trained human labelers. In our year-long experiment, trained humans first annotated 40,000 images from the ImageNet and …
We extend semi-supervised learning to the problem of domain adaptation to learn significantly higher-accuracy models that train on one data distribution and test on a different …
Deploying machine learning systems in the real world requires both high accuracy on clean data and robustness to naturally occurring corruptions. While architectural advances have …
Although machine learning models typically experience a drop in performance on out-of-distribution data, accuracies on in- versus out-of-distribution data are widely observed to …
Traditional normalization techniques (e.g., Batch Normalization and Instance Normalization) generally and simplistically assume that training and test data follow the same distribution …
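A small sketch of the assumption in question and one common workaround, recomputing Batch Normalization statistics from the test batch itself instead of using training-time running statistics; the adapt_bn helper below is hypothetical and is not an API from the abstract's method.

```python
# Sketch: standard BatchNorm inference vs. recalibrating stats on the test batch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
x_test = torch.randn(32, 3, 32, 32)   # stand-in for a (possibly shifted) test batch

# Standard inference: BatchNorm normalizes with statistics accumulated on training data.
model.eval()
out_standard = model(x_test)

# Test-time recalibration: switch BatchNorm layers to current-batch statistics so
# normalization reflects the test distribution rather than the training one.
def adapt_bn(m):                        # hypothetical helper (assumption)
    for layer in m.modules():
        if isinstance(layer, nn.BatchNorm2d):
            layer.train()                       # use batch mean/var in the forward pass
            layer.track_running_stats = False   # do not update the stored running stats

adapt_bn(model)
out_adapted = model(x_test)
```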