Exploring the Impact of Lay User Feedback for Improving AI Fairness

E Taka, Y Nakao, R Sonoda, T Yokota, L Luo… - arXiv preprint arXiv …, 2023 - arxiv.org
arXiv preprint arXiv:2312.08064, 2023 - arxiv.org
Fairness in AI is a growing concern for high-stakes decision making. Engaging stakeholders, especially lay users, in fair AI development is promising yet overlooked. Recent efforts explore enabling lay users to provide AI fairness-related feedback, but there is still a lack of understanding of how to integrate users' feedback into an AI model and the impacts of doing so. To bridge this gap, we collected feedback from 58 lay users on the fairness of an XGBoost model trained on the Home Credit dataset, and conducted offline experiments to investigate the effects of retraining models on accuracy, and individual and group fairness. Our work contributes baseline results of integrating user fairness feedback in XGBoost, and a dataset and code framework to bootstrap research in engaging stakeholders in AI fairness. Our discussion highlights the challenges of employing user feedback in AI fairness and points the way to a future application area of interactive machine learning.
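The abstract evaluates retrained models on accuracy and on group fairness. As an illustration of the kind of group-fairness metric typically used in such experiments, the sketch below computes accuracy and the demographic parity difference (the gap in positive-prediction rates between groups) on a toy, hypothetical set of binary predictions; the data and function names are illustrative assumptions, not the paper's actual pipeline or results.

```python
# Minimal sketch: accuracy and a common group-fairness metric
# (demographic parity difference) on hypothetical binary predictions.
# None of these numbers come from the paper itself.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_difference(y_pred, groups):
    """Gap between the highest and lowest positive-prediction
    rate across the groups present in `groups`."""
    rates = []
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates.append(sum(preds) / len(preds))
    return max(rates) - min(rates)

# Hypothetical labels, model outputs, and a binary sensitive attribute.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 1, 1, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(accuracy(y_true, y_pred))                      # 0.625
print(demographic_parity_difference(y_pred, group))  # 0.25
```

A demographic parity difference of 0 means both groups receive positive predictions at the same rate; retraining on user feedback would aim to reduce this gap without a large drop in accuracy.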