Complex machine learning models for NLP are often brittle, making different predictions for input instances that are extremely similar semantically. To automatically detect this behavior …
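The brittleness described in this snippet can be checked mechanically: run the model on pairs of semantically equivalent inputs and flag any pair where the prediction flips. A minimal sketch, using a hypothetical keyword-based stand-in for a real NLP classifier (none of these names come from the cited work):

```python
# Hypothetical illustration (not the cited paper's method): flag a classifier
# as brittle when its predictions differ across semantically equivalent inputs.

def predictions_consistent(predict, paraphrase_pairs):
    """Return the paraphrase pairs on which the model's prediction flips."""
    return [(a, b) for a, b in paraphrase_pairs if predict(a) != predict(b)]

# Toy stand-in for a real NLP model: classifies by a single keyword.
def toy_predict(text):
    return "positive" if "good" in text.lower() else "negative"

pairs = [
    ("The film was good.", "The movie was good."),     # prediction agrees
    ("A good experience.", "A pleasant experience."),  # prediction flips
]
flips = predictions_consistent(toy_predict, pairs)
# flips holds the second pair: the keyword model is brittle to the synonym.
```

Any pair returned by `predictions_consistent` is evidence of exactly the semantic inconsistency the snippet describes.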
Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing …
L Sanneman, JA Shah - International Journal of Human–Computer …, 2022 - Taylor & Francis
Recent advances in artificial intelligence (AI) have drawn attention to the need for AI systems to be understandable to human users. The explainable AI (XAI) literature aims to enhance …
As machine learning systems move from computer-science laboratories into the open world, their accountability becomes a high priority problem. Accountability requires deep …
A rich line of research attempts to make deep neural networks more transparent by generating human-interpretable 'explanations' of their decision process, especially for …
L Sanneman, JA Shah - … Transparent Autonomous Agents and Multi-Agent …, 2020 - Springer
Q Zhang, W Wang, SC Zhu - Proceedings of the AAAI conference on …, 2018 - ojs.aaai.org
Given a pre-trained CNN without any testing samples, this paper proposes a simple yet effective method to diagnose feature representations of the CNN. We aim to discover …
S Daftry, S Zeng, JA Bagnell… - 2016 IEEE/RSJ …, 2016 - ieeexplore.ieee.org
As robots aspire for long-term autonomous operations in complex dynamic environments, the ability to reliably take mission-critical decisions in ambiguous situations becomes critical …
Counterfactual (CF) explanations have been employed as one of the modes of explainability in explainable artificial intelligence (XAI)—both to increase the transparency of AI systems …
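A counterfactual explanation answers "what minimal change to the input would have flipped the decision?". The sketch below is illustrative only and is not drawn from the cited work: a brute-force search over a hypothetical loan-approval rule for the smallest income increase that turns a rejection into an approval.

```python
# Illustrative sketch only (not from the cited work): brute-force search for
# a minimal counterfactual -- the smallest single-feature change that flips
# a toy loan-approval decision from reject to approve.

def approve(income, debt):
    """Hypothetical decision rule: approve when net margin reaches 50."""
    return income - debt >= 50

def counterfactual(income, debt, step=1, max_steps=200):
    """Smallest income increase (in units of `step`) that yields approval."""
    for k in range(1, max_steps + 1):
        if approve(income + k * step, debt):
            return {"income": income + k * step, "debt": debt}
    return None  # no counterfactual found within the search budget

cf = counterfactual(income=40, debt=10)  # 40 - 10 < 50, so rejected today
# cf reports the nearest approved input: income raised just enough to qualify.
```

The returned dictionary is the explanation shown to the user ("you would have been approved with income 60"), which is what makes CF explanations attractive for transparency.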