Placing a human in the loop may help abate the risks of deploying AI systems in safety-critical settings (e.g., a clinician working with a medical AI system). However, mitigating risks …
Recently, learning with soft labels has been shown to achieve better performance than learning with hard labels in terms of model generalization, calibration, and robustness …
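To make the soft-label idea concrete, here is a minimal sketch (not drawn from any of the listed papers; the toy data, model, and smoothing values are illustrative) contrasting a standard hard-label cross-entropy loss with a cross-entropy computed against a full per-example probability distribution, e.g. averaged annotator votes:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy 3-class classification with a linear model (illustrative only).
torch.manual_seed(0)
x = torch.randn(8, 16)            # 8 examples, 16 features
hard = torch.randint(0, 3, (8,))  # hard labels: one class index per example
# Soft labels: a probability distribution per example (e.g. annotator votes);
# here we simply smooth the hard labels so each row sums to 1.
soft = F.one_hot(hard, num_classes=3).float() * 0.7 + 0.1

model = nn.Linear(16, 3)
logits = model(x)

# Hard-label cross-entropy: standard classification loss.
hard_loss = F.cross_entropy(logits, hard)

# Soft-label cross-entropy: expected negative log-likelihood under the
# annotator distribution rather than a single "correct" class.
log_probs = F.log_softmax(logits, dim=1)
soft_loss = -(soft * log_probs).sum(dim=1).mean()

print(hard_loss.item(), soft_loss.item())
```

The two losses coincide when the soft distribution is a one-hot vector; the soft variant additionally penalizes overconfidence on examples where annotators disagree, which is one proposed mechanism behind the reported calibration gains.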
When groups of people are tasked with making a judgment, the issue of uncertainty often arises. Existing methods to reduce uncertainty typically focus on iteratively improving …
Supervised learning typically focuses on learning transferable representations from training examples annotated by humans. While rich annotations (like soft labels) carry more …
Contemporary vision benchmarks predominantly consider tasks on which humans can achieve near-perfect performance. However, humans are frequently presented with visual …
With the rise of increasingly powerful and user-facing NLP systems, there is growing interest in assessing whether they have a good representation of uncertainty by evaluating the …
In this paper, we address the concept of "alignment" in large language models (LLMs) through the lens of post-structuralist socio-political theory, specifically examining its parallels …
Individual human decision-makers may benefit from different forms of support to improve decision outcomes. However, a key question is which form of support will lead to accurate …
Human-annotated data plays a critical role in the fairness of AI systems, including those that deal with life-altering decisions or moderating human-created web/social media content …