Human factors in model interpretability: Industry practices, challenges, and needs

SR Hong, J Hullman, E Bertini - Proceedings of the ACM on Human …, 2020 - dl.acm.org
As the use of machine learning (ML) models in product development and data-driven
decision-making processes became pervasive in many domains, people's focus on building …

Designing a direct feedback loop between humans and Convolutional Neural Networks through local explanations

TS Sun, Y Gao, S Khaladkar, S Liu, L Zhao… - Proceedings of the …, 2023 - dl.acm.org
The local explanation provides heatmaps on images to explain how Convolutional Neural
Networks (CNNs) derive their output. Due to its visual straightforwardness, the method has …

Closing the Knowledge Gap in Designing Data Annotation Interfaces for AI-powered Disaster Management Analytic Systems

Z Ara, H Salemi, SR Hong, Y Senarath… - Proceedings of the 29th …, 2024 - dl.acm.org
Data annotation interfaces predominantly leverage ground truth labels to guide annotators
toward accurate responses. With the growing adoption of Artificial Intelligence (AI) in domain …

ShadowMagic: Designing Human-AI Collaborative Support for Comic Professionals' Shadowing

A Ganguly, C Yan, JJY Chung, TS Sun… - Proceedings of the 37th …, 2024 - dl.acm.org
Shadowing allows artists to convey realistic volume and emotion of characters in comic
colorization. While AI technologies have the potential to improve professionals' shadowing …

3DPFIX: Improving Remote Novices' 3D Printing Troubleshooting through Human-AI Collaboration Design

N Kwon, TS Sun, Y Gao, L Zhao, X Wang… - Proceedings of the …, 2024 - dl.acm.org
The widespread availability of consumer-grade 3D printers and online learning resources enables novices
to self-train in remote settings. While troubleshooting plays an essential part in 3D printing …

Scalable Oversight by Accounting for Unreliable Feedback

S Singhal, C Laidlaw, A Dragan - … Models of Human Feedback for AI …, 2024 - openreview.net
Reward functions learned from human feedback serve as the training objective for RLHF,
the current state-of-the-art approach for aligning large language models to our values; …

Towards Evaluating Exploratory Model Building Process with AutoML Systems

SR Hong, S Castelo, V D'Orazio, C Benthune… - arXiv preprint arXiv …, 2020 - arxiv.org
The use of Automated Machine Learning (AutoML) systems is highly open-ended and
exploratory. While rigorously evaluating how end-users interact with AutoML is crucial …

An ecosystem of applications for modeling political violence

A Bessa, S Castelo, R Rampin, A Santos… - Proceedings of the …, 2021 - dl.acm.org
Conflict researchers face many challenges, including (1) how to model conflicts, (2) how to
measure them, (3) how to manage their spatio-temporal character, and (4) how to handle a …

Achieving AI Alignment with Unreliable Supervision

S Singhal, C Laidlaw, A Dragan - 2024 - digicoll.lib.berkeley.edu
AI system designers are tasked with creating technology that caters to their users, but all of
us are nuanced individuals with equally nuanced goals that we would like to achieve. With …