Algorithmic fairness in artificial intelligence for medicine and healthcare

RJ Chen, JJ Wang, DFK Williamson, TY Chen… - Nature biomedical …, 2023 - nature.com
In healthcare, the development and deployment of insufficiently fair systems of artificial
intelligence (AI) can undermine the delivery of equitable care. Assessments of AI models …

Last layer re-training is sufficient for robustness to spurious correlations

P Kirichenko, P Izmailov, AG Wilson - arXiv preprint arXiv:2204.02937, 2022 - arxiv.org
Neural network classifiers can largely rely on simple spurious features, such as
backgrounds, to make predictions. However, even in these cases, we show that they still …
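
The core idea here can be sketched very compactly: freeze the pretrained feature extractor and refit only the final linear layer on a group-balanced held-out set. The features, group labels, and balancing scheme below are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative stand-in for embeddings from a frozen, pretrained backbone.
rng = np.random.default_rng(0)
n, d = 2000, 64
feats = rng.normal(size=(n, d))          # held-out-set "embeddings"
labels = rng.integers(0, 2, size=n)      # class labels
groups = rng.integers(0, 4, size=n)      # (class, spurious-attribute) group ids

# Group-balance the reweighting set: subsample every group to the same size.
min_count = min((groups == g).sum() for g in np.unique(groups))
idx = np.concatenate([
    rng.choice(np.flatnonzero(groups == g), size=min_count, replace=False)
    for g in np.unique(groups)
])

# Retrain only the last (linear) layer on the balanced subset.
clf = LogisticRegression(max_iter=1000, C=1.0)
clf.fit(feats[idx], labels[idx])
print("balanced-subset accuracy:", clf.score(feats[idx], labels[idx]))
```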

Toward Operationalizing Pipeline-aware ML Fairness: A Research Agenda for Developing Practical Guidelines and Tools

E Black, R Naidu, R Ghani, K Rodolfa, D Ho… - Proceedings of the 3rd …, 2023 - dl.acm.org
While algorithmic fairness is a thriving area of research, in practice, mitigating issues of bias
often gets reduced to enforcing an arbitrarily chosen fairness metric, either by enforcing …
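
The point about an "arbitrarily chosen fairness metric" is easier to see with a concrete one. Below is a minimal sketch computing a demographic parity gap from predictions and a binary sensitive attribute; the variable names and toy data are assumptions for illustration only.

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()
    rate_b = y_pred[sensitive == 1].mean()
    return abs(rate_a - rate_b)

# Toy example: binary predictions and a binary sensitive attribute.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
attr  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(preds, attr))  # 0.75 vs 0.25 -> gap of 0.5
```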

Towards last-layer retraining for group robustness with fewer annotations

T LaBonte, V Muthukumar… - Advances in Neural …, 2024 - proceedings.neurips.cc
Empirical risk minimization (ERM) of neural networks is prone to over-reliance on spurious
correlations and poor generalization on minority groups. The recent deep feature …
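
Group robustness in this line of work is usually reported as worst-group accuracy, i.e. the minimum accuracy over (class, attribute) groups. A short sketch of that metric, with arrays of predictions, labels, and group ids assumed for illustration:

```python
import numpy as np

def worst_group_accuracy(y_true, y_pred, groups):
    """Minimum per-group accuracy, the usual group-robustness metric."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        accs[int(g)] = float((y_true[mask] == y_pred[mask]).mean())
    return min(accs.values()), accs

y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 0, 1, 0, 1, 1, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(worst_group_accuracy(y_true, y_pred, groups))
```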

Fair infinitesimal jackknife: Mitigating the influence of biased training data points without refitting

P Sattigeri, S Ghosh, I Padhi… - Advances in Neural …, 2022 - proceedings.neurips.cc
In consequential decision-making applications, mitigating unwanted biases in machine
learning models that yield systematic disadvantage to members of groups delineated by …
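
The infinitesimal jackknife approximates how a quantity of interest would change if a training point's weight were perturbed, without refitting the model. The rough sketch below uses the standard influence-function form for L2-regularized logistic regression; the model, regularization, and the choice of validation loss as the target metric are assumptions, not the paper's fairness-specific estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(float)
X_val = rng.normal(size=(50, d))
y_val = (X_val[:, 0] > 0).astype(float)

lam = 1.0  # L2 strength; sklearn's C corresponds to 1/lam under its objective scaling
clf = LogisticRegression(C=1.0 / lam, fit_intercept=False, max_iter=1000).fit(X, y)
w = clf.coef_.ravel()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

p = sigmoid(X @ w)
# Hessian of the summed training loss plus the L2 term, at the fitted weights.
H = X.T @ (X * (p * (1 - p))[:, None]) + lam * np.eye(d)
# Per-training-point gradients of the loss term.
grads = (p - y)[:, None] * X
# Gradient of the average validation loss.
p_val = sigmoid(X_val @ w)
g_val = ((p_val - y_val)[:, None] * X_val).mean(axis=0)
# Influence scores: removing point i changes the validation loss by roughly
# influence[i] / n, so the most negative scores flag the most harmful points.
influence = grads @ np.linalg.solve(H, g_val)
print("removal most reduces validation loss for:", np.argsort(influence)[:5])
```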

Calibrating multi-modal representations: A pursuit of group robustness without annotations

C You, Y Min, W Dai, JS Sekhon… - 2024 IEEE/CVF …, 2024 - ieeexplore.ieee.org
Fine-tuning pre-trained vision-language models, like CLIP, has yielded success on diverse
downstream tasks. However, several pain points persist for this paradigm: (i) directly tuning …
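
A common, lighter-weight alternative to directly tuning the full vision-language model is to freeze the encoder and train only a small head on its embeddings. The PyTorch sketch below shows that generic frozen-backbone setup; the encoder is a placeholder rather than CLIP itself, and this is not the paper's calibration procedure.

```python
import torch
import torch.nn as nn

# Placeholder for a frozen pretrained encoder (e.g. a vision-language image tower).
encoder = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))
for p in encoder.parameters():
    p.requires_grad = False   # keep the backbone frozen

head = nn.Linear(128, 10)     # the only trainable part
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy batch standing in for image embeddings and labels.
x = torch.randn(32, 512)
y = torch.randint(0, 10, (32,))

for step in range(100):
    with torch.no_grad():
        feats = encoder(x)    # frozen features
    loss = loss_fn(head(feats), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```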

Simplicity bias in 1-hidden layer neural networks

D Morwani, J Batra, P Jain… - Advances in Neural …, 2024 - proceedings.neurips.cc
Recent works have demonstrated that neural networks exhibit extreme *simplicity bias* (SB).
That is, they learn *only the simplest* features to solve a task at hand, even in the presence …
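
A toy way to see this kind of bias: give a 1-hidden-layer network two redundant cues, one linearly separable and one requiring an XOR rule, then test on inputs where only the XOR cue remains informative. The construction below is an illustrative assumption, not the paper's formal setting.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_data(n, simple_cue_informative=True):
    y = rng.integers(0, 2, size=n)
    # Complex cue: two sign bits whose XOR encodes the label.
    a = rng.integers(0, 2, size=n)
    b = a ^ y
    xor_feats = np.stack([2 * a - 1, 2 * b - 1], axis=1).astype(float)
    # Simple cue: one coordinate whose sign matches the label (or is random noise).
    if simple_cue_informative:
        simple = (2 * y - 1).astype(float)
    else:
        simple = (2 * rng.integers(0, 2, size=n) - 1).astype(float)
    X = np.column_stack([simple, xor_feats]) + 0.1 * rng.normal(size=(n, 3))
    return X, y

X_train, y_train = make_data(4000, simple_cue_informative=True)   # both cues predict the label
X_test, y_test = make_data(1000, simple_cue_informative=False)    # only the XOR cue does

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print("train acc:", net.score(X_train, y_train))
# Often near chance if the network leaned on the simple cue during training.
print("acc when only the XOR cue is informative:", net.score(X_test, y_test))
```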

Scalable infomin learning

Y Chen, Y Li, A Weller - Advances in Neural Information …, 2022 - proceedings.neurips.cc
The task of infomin learning aims to learn a representation with high utility while being
uninformative about a specified target, with the latter achieved by minimising the mutual …
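
A common way to approximate the "uninformative about a specified target" objective is an adversarial proxy: an auxiliary predictor tries to recover the target from the representation, and the encoder is trained to defeat it. The sketch below uses that generic adversarial proxy, not the paper's scalable estimator; all sizes and names are assumptions.

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 16))  # representation
task_head = nn.Linear(16, 2)   # utility: predict the task label
adversary = nn.Linear(16, 2)   # proxy for information about the protected target

opt_main = torch.optim.Adam(list(enc.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
lam = 1.0   # strength of the infomin penalty

x = torch.randn(128, 20)
y = torch.randint(0, 2, (128,))   # task label
t = torch.randint(0, 2, (128,))   # target the representation should ignore

for step in range(200):
    # 1) Train the adversary to predict t from the current (detached) representation.
    z = enc(x).detach()
    opt_adv.zero_grad()
    ce(adversary(z), t).backward()
    opt_adv.step()

    # 2) Train encoder + task head: accurate on y, uninformative for the adversary on t.
    z = enc(x)
    loss = ce(task_head(z), y) - lam * ce(adversary(z), t)
    opt_main.zero_grad()
    loss.backward()
    opt_main.step()
```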

Project and probe: Sample-efficient domain adaptation by interpolating orthogonal features

AS Chen, Y Lee, A Setlur, S Levine, C Finn - arXiv preprint arXiv …, 2023 - arxiv.org
Transfer learning with a small amount of target data is an effective and common approach to
adapting a pre-trained model to distribution shifts. In some situations, target data labels may …
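
A rough sketch of the project-then-probe idea: take a small set of mutually orthogonal linear directions over the pretrained features, project the few labeled target examples onto them, and fit a lightweight probe on the projected coordinates. Here orthogonality comes from a plain QR factorization of random directions, whereas the paper constructs predictive directions, so treat this purely as an illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, k = 128, 8                      # feature dimension, number of projected directions

# Stand-in for embeddings of a small labeled target set from a frozen backbone.
X_target = rng.normal(size=(64, d))
y_target = rng.integers(0, 2, size=64)

# Mutually orthogonal directions: the Q factor of a random matrix (illustrative only).
Q, _ = np.linalg.qr(rng.normal(size=(d, k)))

# Project the few target examples onto the k directions and fit a linear probe.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_target @ Q, y_target)
print("probe accuracy on the small target set:", probe.score(X_target @ Q, y_target))
```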

Exploring Orthogonality in Open World Object Detection

Z Sun, J Li, Y Mu - … of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
Open world object detection aims to identify objects of unseen categories and incrementally
recognize them once their annotations are provided. In distinction to the traditional paradigm …
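
Orthogonality between class representations is often encouraged with a simple regularizer that pushes the Gram matrix of normalized class prototypes toward the identity. The sketch below shows only that generic penalty, not the detection pipeline or this paper's specific formulation.

```python
import torch
import torch.nn.functional as F

def orthogonality_penalty(prototypes):
    """Penalize off-diagonal cosine similarity between L2-normalized class prototypes."""
    w = F.normalize(prototypes, dim=1)   # (num_classes, dim)
    gram = w @ w.t()                     # pairwise cosine similarities
    eye = torch.eye(gram.size(0), device=gram.device)
    return ((gram - eye) ** 2).sum()

# Toy usage: 10 class prototypes of dimension 64, optimized to be mutually orthogonal.
protos = torch.randn(10, 64, requires_grad=True)
opt = torch.optim.SGD([protos], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    loss = orthogonality_penalty(protos)
    loss.backward()
    opt.step()
print(float(orthogonality_penalty(protos)))  # should decrease toward 0
```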