Causal machine learning for healthcare and precision medicine

P Sanchez, JP Voisey, T Xia… - Royal Society …, 2022 - royalsocietypublishing.org
Causal machine learning (CML) has experienced increasing popularity in healthcare.
Beyond the inherent capabilities of adding domain knowledge into learning systems, CML …

Fine-tuning can distort pretrained features and underperform out-of-distribution

A Kumar, A Raghunathan, R Jones, T Ma… - arXiv preprint arXiv …, 2022 - arxiv.org
When transferring a pretrained model to a downstream task, two popular methods are full
fine-tuning (updating all the model parameters) and linear probing (updating only the last …
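The two regimes this paper contrasts can be illustrated with a short PyTorch sketch (a minimal sketch, not the authors' code; the ResNet-50 backbone and the 10-class downstream task are assumptions for illustration): linear probing freezes the pretrained backbone and trains only a new head, while full fine-tuning leaves every parameter trainable.

```python
# Minimal sketch (not from the paper): full fine-tuning vs. linear probing
# on a pretrained torchvision backbone. Requires torchvision >= 0.13 for the weights enum.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 10  # assumed downstream task size

def build_model(mode: str) -> nn.Module:
    """mode is 'finetune' (update all parameters) or 'probe' (last layer only)."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new task head
    if mode == "probe":
        for name, param in model.named_parameters():
            # Freeze every pretrained parameter; only the new head stays trainable.
            param.requires_grad = name.startswith("fc.")
    return model

model = build_model("probe")
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
```

Switching `mode` to "finetune" keeps all parameters trainable; note that in "probe" mode the BatchNorm running statistics would still update unless the backbone is kept in eval mode.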

Towards out-of-distribution generalization: A survey

J Liu, Z Shen, Y He, X Zhang, R Xu, H Yu… - arXiv preprint arXiv …, 2021 - arxiv.org
Traditional machine learning paradigms are based on the assumption that both training and
test data follow the same statistical pattern, which is mathematically referred to as …
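The assumption the snippet alludes to is commonly formalized as the training and test distributions coinciding; a standard way to write it (not taken from the survey itself) is:

```latex
% P_tr and P_te denote the joint distributions of (X, Y) at training and test time.
\[
\underbrace{P_{\mathrm{tr}}(X, Y) = P_{\mathrm{te}}(X, Y)}_{\text{in-distribution setting}}
\qquad\text{vs.}\qquad
\underbrace{P_{\mathrm{tr}}(X, Y) \neq P_{\mathrm{te}}(X, Y)}_{\text{out-of-distribution setting}}
\]
```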

Generalizing to unseen domains: A survey on domain generalization

J Wang, C Lan, C Liu, Y Ouyang, T Qin… - IEEE transactions on …, 2022 - ieeexplore.ieee.org
Machine learning systems generally assume that the training and testing distributions are
the same. When this assumption fails, a key requirement is to develop models that can generalize to unseen …

Towards trustworthy and aligned machine learning: A data-centric survey with causality perspectives

H Liu, M Chaudhary, H Wang - arXiv preprint arXiv:2307.16851, 2023 - arxiv.org
The trustworthiness of machine learning has emerged as a critical topic in the field,
encompassing various applications and research areas such as robustness, security …

Discovering invariant rationales for graph neural networks

YX Wu, X Wang, A Zhang, X He, TS Chua - arXiv preprint arXiv …, 2022 - arxiv.org
Intrinsic interpretability of graph neural networks (GNNs) aims to find a small subset of the input
graph's features, a rationale, which guides the model prediction. Unfortunately, the leading …

Fishr: Invariant gradient variances for out-of-distribution generalization

A Rame, C Dancette, M Cord - International Conference on …, 2022 - proceedings.mlr.press
Learning robust models that generalize well under changes in the data distribution is critical
for real-world applications. To this end, there has been a growing surge of interest in learning …
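The title already names the mechanism: the variance of per-sample gradients is encouraged to match across training domains. A heavily simplified sketch of that idea (not the authors' implementation, which differs in several details), assuming a linear classification head so per-sample gradients have a closed form:

```python
# Minimal sketch: penalize differences between per-domain variances of per-sample
# gradients, computed in closed form for a linear head under cross-entropy.
import torch
import torch.nn.functional as F

def per_sample_head_grads(features, labels, head):
    """Per-sample gradient of the CE loss w.r.t. the linear head's weight.
    For a linear head, d loss_i / d W = (softmax(z_i) - onehot(y_i)) outer features_i."""
    logits = features @ head.weight.T + head.bias            # (n, num_classes)
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(labels, logits.shape[1]).float()
    err = probs - onehot                                      # (n, num_classes)
    # Outer product per sample, flattened to one gradient vector per example.
    return torch.einsum("nc,nd->ncd", err, features).flatten(1)

def fishr_style_penalty(domain_batches, featurizer, head):
    """domain_batches: list of (x, y) pairs, one entry per training domain."""
    variances = []
    for x, y in domain_batches:
        grads = per_sample_head_grads(featurizer(x), y, head)
        variances.append(grads.var(dim=0))                    # gradient variance per coordinate
    mean_var = torch.stack(variances).mean(dim=0)
    # Match every domain's gradient variance to the across-domain mean.
    return sum(((v - mean_var) ** 2).mean() for v in variances) / len(variances)
```

The returned penalty would be added to the usual empirical risk with a tunable weight.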

Causal inference in natural language processing: Estimation, prediction, interpretation and beyond

A Feder, KA Keith, E Manzoor, R Pryzant… - Transactions of the …, 2022 - direct.mit.edu
A fundamental goal of scientific research is to learn about causal relationships. However,
despite its critical role in the life and social sciences, causality has not had the same …

Improving out-of-distribution robustness via selective augmentation

H Yao, Y Wang, S Li, L Zhang… - International …, 2022 - proceedings.mlr.press
Machine learning algorithms typically assume that training and test examples are
drawn from the same distribution. However, distribution shift is a common problem in real …
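The "selective" part of the augmentation can be illustrated with a small, hedged sketch (not the authors' code): within a batch, mixup-style interpolation is applied only to pairs that share a label but come from different domains, one natural instantiation of selective interpolation.

```python
# Minimal sketch of selective, intra-label mixup across domains (illustrative only).
import numpy as np
import torch

def intra_label_mixup(x, y, d, alpha=2.0):
    """x: inputs, y: labels, d: domain ids; all tensors over one batch."""
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(len(x))
    # Keep only the randomly paired examples with matching labels and differing domains.
    keep = (y == y[perm]) & (d != d[perm])
    x2 = x[perm]
    x_mix = torch.where(keep.view(-1, *[1] * (x.dim() - 1)),
                        lam * x + (1 - lam) * x2, x)
    return x_mix, y  # labels unchanged because mixed pairs share a label
```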

Invariance principle meets information bottleneck for out-of-distribution generalization

K Ahuja, E Caballero, D Zhang… - Advances in …, 2021 - proceedings.neurips.cc
The invariance principle from causality is at the heart of notable approaches such as
invariant risk minimization (IRM) that seek to address out-of-distribution (OOD) …
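A rough sketch of how the two ingredients in the title can be combined (hedged; the paper's actual objective and analysis differ): an IRMv1-style penalty enforces the invariance principle across environments, while a simple variance penalty on the learned representation stands in for the information bottleneck.

```python
# Minimal sketch, not the paper's implementation.
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    """Squared gradient of the risk w.r.t. a dummy scale on the logits (IRMv1 penalty)."""
    scale = torch.ones(1, requires_grad=True, device=logits.device)
    loss = F.cross_entropy(logits * scale, y)
    grad = torch.autograd.grad(loss, scale, create_graph=True)[0]
    return (grad ** 2).sum()

def ib_irm_style_objective(env_batches, featurizer, classifier,
                           irm_weight=1.0, ib_weight=0.1):
    """env_batches: list of (x, y) pairs, one per training environment."""
    risk, invariance, bottleneck = 0.0, 0.0, 0.0
    for x, y in env_batches:
        z = featurizer(x)
        logits = classifier(z)
        risk += F.cross_entropy(logits, y)
        invariance += irm_penalty(logits, y)
        bottleneck += z.var(dim=0).mean()  # crude surrogate for limiting feature information
    n = len(env_batches)
    return (risk + irm_weight * invariance + ib_weight * bottleneck) / n
```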