Fair transfer learning with missing protected attributes

A Coston, KN Ramamurthy, D Wei… - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 2019 - dl.acm.org
Risk assessment is a growing application of machine learning models. When used in high-stakes settings, especially ones regulated by anti-discrimination laws or governed by societal norms for fairness, it is important to ensure that learned models do not propagate and scale any biases that may exist in the training data. In this paper, we add a challenge beyond fairness: unsupervised domain adaptation to covariate shift between a source and a target distribution. Motivated by the real-world problem of risk assessment in new markets for health insurance in the United States and for mobile money-based loans in East Africa, we give a precise formulation of the problem of machine learning under covariate shift with score parity. Our formulation focuses on situations in which protected attributes are unavailable in either the source or the target domain. We propose two new weighting methods: prevalence-constrained covariate shift (PCCS), which does not require protected attributes in the target domain, and target-fair covariate shift (TFCS), which does not require protected attributes in the source domain. We empirically demonstrate their efficacy in two applications.
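The weighting methods above build on the standard importance-weighting treatment of covariate shift. As background, the sketch below shows the common density-ratio trick: train a probabilistic classifier to distinguish source from target samples, then convert its probabilities into per-example weights w(x) ≈ p_target(x)/p_source(x). This is a generic baseline, not the paper's PCCS or TFCS methods (which add fairness-related constraints); all function names here are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def covariate_shift_weights(X_source, X_target):
    """Estimate importance weights w(x) ~ p_target(x) / p_source(x)
    with a domain classifier (the standard density-ratio trick)."""
    X = np.vstack([X_source, X_target])
    # Domain label: 0 = source, 1 = target.
    d = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
    clf = LogisticRegression().fit(X, d)
    # P(target | x) / P(source | x) on the source examples...
    p_target = clf.predict_proba(X_source)[:, 1]
    odds = p_target / (1.0 - p_target)
    # ...rescaled by the sample-size ratio to undo the class prior.
    return odds * (len(X_source) / len(X_target))

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(500, 2))  # source distribution
Xt = rng.normal(0.5, 1.0, size=(500, 2))  # target: mean shifted by +0.5
w = covariate_shift_weights(Xs, Xt)
# Source points that look more like target points receive larger weights,
# so a model trained on the reweighted source data targets the new market.
```

A fairness-aware variant in the spirit of the paper would additionally constrain these weights, e.g. so that the weighted prevalence of each protected group matches a known target, which is what distinguishes PCCS/TFCS from this plain baseline.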