Authors
Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, Ruslan Salakhutdinov
Publication date
2021/7/1
Conference
International Conference on Machine Learning
Pages
6565-6576
Publisher
PMLR
Description
As machine learning methods are deployed in real-world settings such as healthcare, legal systems, and social science, it is crucial to recognize how they shape social biases and stereotypes in these sensitive decision-making processes. Among such real-world deployments are large-scale pretrained language models (LMs) that can be potentially dangerous in manifesting undesirable representational biases: harmful biases resulting from stereotyping that propagate negative generalizations involving gender, race, religion, and other social constructs. As a step towards improving the fairness of LMs, we carefully define several sources of representational biases before proposing new benchmarks and metrics to measure them. With these tools, we propose steps towards mitigating social biases during text generation. Our empirical results and human evaluation demonstrate effectiveness in mitigating bias while retaining crucial contextual information for high-fidelity text generation, thereby pushing forward the performance-fairness Pareto frontier.
Scholar articles
PP Liang, C Wu, LP Morency, R Salakhutdinov - International Conference on Machine Learning, 2021
PP Liang, C Wu, LP Morency, R Salakhutdinov - Preprint posted online June 2021
PP Liang, C Wu, LP Morency, R Salakhutdinov - arXiv preprint arXiv:2106.13219