Given a language model (LM), maximum probability is a poor decoding objective for open-ended generation, because it produces short and repetitive text. On the other hand …
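As a toy illustration of that claim (not code from the cited work): with even a tiny hand-built bigram "language model", always taking the highest-probability next token falls into a repetitive loop, while sampling from the model's distribution does not. The model and token names below are invented for the sketch.

```python
import random

# Toy bigram "LM": maps current token -> distribution over next tokens.
# Purely illustrative; not any model from the papers above.
lm = {
    "<s>":  {"the": 0.6, "a": 0.4},
    "a":    {"dog": 0.6, "cat": 0.4},
    "the":  {"cat": 0.5, "dog": 0.3, "the": 0.2},
    "cat":  {"sat": 0.6, "the": 0.4},
    "dog":  {"ran": 0.6, "the": 0.4},
    "sat":  {"the": 0.7, "down": 0.3},
    "ran":  {"the": 0.7, "away": 0.3},
    "down": {"the": 1.0},
    "away": {"the": 1.0},
}

def greedy_decode(lm, start="<s>", steps=8):
    """Maximum-probability decoding: always pick the argmax next token."""
    tok, out = start, []
    for _ in range(steps):
        tok = max(lm[tok], key=lm[tok].get)
        out.append(tok)
    return out

def sample_decode(lm, start="<s>", steps=8, seed=0):
    """Ancestral sampling: draw the next token from the model distribution."""
    rng = random.Random(seed)
    tok, out = start, []
    for _ in range(steps):
        cands = list(lm[tok])
        tok = rng.choices(cands, weights=[lm[tok][c] for c in cands])[0]
        out.append(tok)
    return out

print(greedy_decode(lm))  # cycles: the cat sat the cat sat ...
print(sample_decode(lm))
```

Greedy decoding enters the deterministic cycle "the cat sat" immediately; sampling explores the rest of the vocabulary, which is the intuition behind sampling-based decoding for open-ended generation.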
E. Creager, J. H. Jacobsen, et al. - … Conference on Machine …, 2021 - proceedings.mlr.press
Learning models that gracefully handle distribution shifts is central to research on domain generalization, robust optimization, and fairness. A promising formulation is domain …
We study the problem of learning classifiers that perform well across (known or unknown) groups of data. After observing that common worst-group-accuracy datasets suffer from …
Deep neural networks (DNNs) have had extraordinary successes in classifying photographic images of objects and are often described as the best models of biological …
Much recent work in NLP has documented dataset artifacts, bias, and spurious correlations between input features and output labels. However, how to tell which features have …
The trustworthiness of machine learning has emerged as a critical topic in the field, encompassing various applications and research areas such as robustness, security …
State-of-the-art stereo matching networks trained only on synthetic data often fail to generalize to more challenging real data domains. In this paper, we attempt to unfold an …
Neural networks trained with ERM (empirical risk minimization) sometimes learn unintended decision rules, in particular when their training data is biased, i.e., when training labels are …
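A minimal sketch of that failure mode, under assumptions of my own (the data-generating setup and feature names are hypothetical, not from the paper): each example has a noisy "core" feature that truly determines the label and a near-noiseless "spurious" feature that merely correlates with the label in the biased training set. Plain ERM, here logistic regression fit by gradient descent on the average loss, puts substantial weight on the spurious feature.

```python
import math
import random

random.seed(0)

def make_example(bias=0.95):
    # Hypothetical biased training distribution:
    # core feature = label sign plus Gaussian noise;
    # spurious feature agrees with the label with probability `bias`.
    y = random.choice([0, 1])
    core = (1.0 if y == 1 else -1.0) + random.gauss(0, 1.0)
    spur = (1.0 if y == 1 else -1.0) * (1 if random.random() < bias else -1)
    return (core, spur), y

train = [make_example() for _ in range(2000)]

# Plain ERM: minimize the average logistic loss over the training set.
w = [0.0, 0.0]  # [core weight, spurious weight]
lr = 0.1
for _ in range(200):
    grad = [0.0, 0.0]
    for (x1, x2), y in train:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2)))
        for i, x in enumerate((x1, x2)):
            grad[i] += (p - y) * x
    w = [wi - lr * gi / len(train) for wi, gi in zip(w, grad)]

print(f"core weight = {w[0]:.2f}, spurious weight = {w[1]:.2f}")
```

Because the spurious feature is almost noiseless on the biased training set, the ERM solution leans on it heavily, so accuracy collapses on any group of examples where the correlation flips; this is the motivation for group-robust training objectives.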
We consider the problem of training a classification model with group annotated training data. Recent work has established that, if there is distribution shift across different groups …