Traditional machine learning paradigms are based on the assumption that both training and test data follow the same statistical pattern, which is mathematically referred to as …
J Liu, T Wang, P Cui… - Advances in Neural …, 2024 - proceedings.neurips.cc
Different distribution shifts require different algorithmic and operational interventions. Methodological research must be grounded in the specific shifts it addresses. Although …
A major challenge to out-of-distribution generalization is reliance on spurious features: patterns that are predictive of the class label in the training data distribution, but not causally …
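The spurious-feature failure described in the snippet above can be sketched with a toy example. Everything here is illustrative and not taken from the cited paper: a "spurious" feature co-occurs with the label at training time, a hypothetical model latches onto it, and accuracy collapses when the correlation flips at test time.

```python
import random

random.seed(0)

def make_data(n, spurious_corr):
    # the core feature truly determines the label; the spurious feature
    # merely co-occurs with the label at rate `spurious_corr`
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        core = label  # perfectly predictive, in and out of distribution
        spurious = label if random.random() < spurious_corr else 1 - label
        data.append(((core, spurious), label))
    return data

def spurious_classifier(x):
    # a model that latched onto the spurious feature only
    return x[1]

def accuracy(clf, data):
    return sum(clf(x) == y for x, y in data) / len(data)

train = make_data(1000, 0.95)  # spurious feature aligned at train time
test = make_data(1000, 0.05)   # correlation flips under distribution shift

train_acc = accuracy(spurious_classifier, train)  # high, roughly 0.95
test_acc = accuracy(spurious_classifier, test)    # collapses, roughly 0.05
```

A classifier reading the core feature (`x[0]`) instead would score perfectly on both splits, which is the sense in which the spurious pattern is predictive but not causal.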
Despite increasing numbers of regulatory approvals, deep learning-based computational pathology systems often overlook the impact of demographic factors on performance …
While deep learning models have shown remarkable performance in various tasks, they are susceptible to learning non-generalizable _spurious features_ rather than the core features …
H Yu, J Liu, X Zhang, J Wu, P Cui - arXiv preprint arXiv:2403.01874, 2024 - arxiv.org
Machine learning models, while increasingly advanced, rely heavily on the IID assumption, which is often violated in practice due to inevitable distribution shifts. This renders them …
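The IID violation described above can be made concrete with a minimal stdlib-only sketch (all distributions and numbers are hypothetical): a threshold classifier is fit on two 1-D Gaussian classes, then evaluated both on matched data and under a covariate shift that moves both classes.

```python
import random

random.seed(1)

def sample(n, mean0, mean1):
    # two classes drawn as 1-D Gaussians with unit variance
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        x = random.gauss(mean1 if y else mean0, 1.0)
        data.append((x, y))
    return data

def fit_threshold(data):
    # decision boundary at the midpoint of the two class means
    m0 = sum(x for x, y in data if y == 0) / sum(1 for _, y in data if y == 0)
    m1 = sum(x for x, y in data if y == 1) / sum(1 for _, y in data if y == 1)
    return (m0 + m1) / 2

def accuracy(t, data):
    return sum((x > t) == bool(y) for x, y in data) / len(data)

train = sample(2000, 0.0, 2.0)
t = fit_threshold(train)           # near 1.0

iid_test = sample(2000, 0.0, 2.0)  # same distribution as training
shifted = sample(2000, 3.0, 5.0)   # both classes shifted by +3

iid_acc = accuracy(t, iid_test)  # near the Bayes rate, about 0.84
ood_acc = accuracy(t, shifted)   # near chance: class 0 now sits past t
```

The learned boundary is optimal for the training distribution but meaningless after the shift, which is the basic sense in which an IID-trained model fails out of distribution.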
Machine learning models often fail to generalize well under distributional shifts. Understanding and overcoming these failures has given rise to the research field of Out-of …
Machine learning models often perform poorly under subpopulation shifts in the data distribution. Developing methods that allow machine learning models to better …
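The subpopulation-shift problem mentioned above is often summarized as high average accuracy hiding poor worst-group accuracy. A toy sketch, with entirely hypothetical groups and numbers: a shortcut feature agrees with the label in a majority group A but is inverted in a minority group B, so a model relying on it looks good on average while failing on B.

```python
import random

random.seed(2)

# hypothetical population: ~90% group A, ~10% group B;
# the shortcut feature matches the label in A and is inverted in B
data = []
for _ in range(1000):
    group = "A" if random.random() < 0.9 else "B"
    label = random.randint(0, 1)
    feature = label if group == "A" else 1 - label
    data.append((feature, label, group))

def predict(feature):
    # a model that relies entirely on the shortcut feature
    return feature

overall_acc = sum(predict(f) == y for f, y, _ in data) / len(data)  # ~0.9
group_b = [(f, y) for f, y, g in data if g == "B"]
worst_group_acc = sum(predict(f) == y for f, y in group_b) / len(group_b)  # 0.0
```

Methods aimed at subpopulation shift typically optimize or report `worst_group_acc` rather than `overall_acc` for exactly this reason.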
Machine learning algorithms learned from data with skewed distributions usually suffer from poor generalization, especially when minority classes matter as much as, or even more than …
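The skewed-distribution failure in the snippet above has a textbook degenerate case, shown here with made-up numbers: on a 95/5 label split, a "classifier" that always predicts the majority class reports 95% accuracy while recalling none of the minority class.

```python
from collections import Counter

# hypothetical skewed dataset: 950 majority-class labels, 50 minority
labels = [0] * 950 + [1] * 50

# a degenerate classifier that always outputs the most common class
majority_class = Counter(labels).most_common(1)[0][0]
preds = [majority_class] * len(labels)

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)  # 0.95
minority_recall = (
    sum(p == 1 for p, y in zip(preds, labels) if y == 1)
    / sum(y == 1 for y in labels)
)  # 0.0
```

This is why work on imbalanced learning evaluates with per-class recall, balanced accuracy, or F1 rather than raw accuracy.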