" I wouldn't say offensive but...": Disability-Centered Perspectives on Large Language Models

…, S Dev, A Taylor, D Wang, E Denton, R Brewer - Proceedings of the …, 2023 - dl.acm.org
Large language models (LLMs) trained on real-world data can inadvertently reflect harmful
societal biases, particularly toward historically marginalized communities. While previous
work has primarily focused on harms related to age and race, emerging research has shown
that biases toward disabled communities exist. This study extends prior work exploring the
existence of harms by identifying categories of LLM-perpetuated harms toward the disability
community. We conducted 19 focus groups, during which 56 participants with disabilities …

V. Gadiraju, S. Kane, Sunipa Dev, Alex Taylor, Ding Wang, Emily Denton, and Robin Brewer. 2023. "I wouldn't say offensive but...": Disability-centered perspectives on large language models. In Proceedings of the 2023 ACM Conference on Fairness …


example.edu/paper.pdf