On interpretability of artificial neural networks: A survey

FL Fan, J Xiong, M Li, G Wang - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Deep learning as performed by artificial deep neural networks (DNNs) has achieved great
successes recently in many important areas that deal with text, images, videos, graphs, and …

Semantically equivalent adversarial rules for debugging NLP models

MT Ribeiro, S Singh, C Guestrin - … of the 56th Annual Meeting of …, 2018 - aclanthology.org
Complex machine learning models for NLP are often brittle, making different predictions for
input instances that are extremely similar semantically. To automatically detect this behavior …

" Why should i trust you?" Explaining the predictions of any classifier

MT Ribeiro, S Singh, C Guestrin - Proceedings of the 22nd ACM …, 2016 - dl.acm.org
Despite widespread adoption, machine learning models remain mostly black boxes.
Understanding the reasons behind predictions is, however, quite important in assessing …

The situation awareness framework for explainable AI (SAFE-AI) and human factors considerations for XAI systems

L Sanneman, JA Shah - International Journal of Human–Computer …, 2022 - Taylor & Francis
Recent advances in artificial intelligence (AI) have drawn attention to the need for AI systems
to be understandable to human users. The explainable AI (XAI) literature aims to enhance …

Towards accountable AI: Hybrid human-machine analyses for characterizing system failure

B Nushi, E Kamar, E Horvitz - Proceedings of the AAAI Conference on …, 2018 - ojs.aaai.org
As machine learning systems move from computer-science laboratories into the open world,
their accountability becomes a high priority problem. Accountability requires deep …

Do explanations make VQA models more predictable to a human?

A Chandrasekaran, V Prabhu, D Yadav… - arXiv preprint arXiv …, 2018 - arxiv.org
A rich line of research attempts to make deep neural networks more transparent by
generating human-interpretable 'explanations' of their decision process, especially for …

A situation awareness-based framework for design and evaluation of explainable AI

L Sanneman, JA Shah - … Transparent Autonomous Agents and Multi-Agent …, 2020 - Springer
Recent advances in artificial intelligence (AI) have drawn attention to the need for AI systems
to be understandable to human users. The explainable AI (XAI) literature aims to enhance …

Examining CNN representations with respect to dataset bias

Q Zhang, W Wang, SC Zhu - Proceedings of the AAAI conference on …, 2018 - ojs.aaai.org
Given a pre-trained CNN without any testing samples, this paper proposes a simple yet
effective method to diagnose feature representations of the CNN. We aim to discover …

Introspective perception: Learning to predict failures in vision systems

S Daftry, S Zeng, JA Bagnell… - 2016 IEEE/RSJ …, 2016 - ieeexplore.ieee.org
As robots aspire for long-term autonomous operations in complex dynamic environments,
the ability to reliably take mission-critical decisions in ambiguous situations becomes critical …

Can counterfactual explanations of AI systems' predictions skew lay users' causal intuitions about the world? If so, can we correct for that?

M Tešić, U Hahn - Patterns, 2022 - cell.com
Counterfactual (CF) explanations have been employed as one of the modes of explainability
in explainable artificial intelligence (AI)—both to increase the transparency of AI systems …