Interpretation of neural networks is fragile

A Ghorbani, A Abid, J Zou - Proceedings of the AAAI Conference on Artificial Intelligence, 2019 - aaai.org
Abstract
In order for machine learning to be trusted in many applications, it is critical to be able to reliably explain why the machine learning algorithm makes certain predictions. For this reason, a variety of methods have been developed recently to interpret neural network predictions by providing, for example, feature importance maps. For both scientific robustness and security reasons, it is important to know to what extent the interpretations can be altered by small systematic perturbations to the input data, which might be generated by adversaries or by measurement biases. In this paper, we demonstrate how to generate adversarial perturbations that produce perceptually indistinguishable inputs that are assigned the same predicted label, yet have very different interpretations. We systematically characterize the robustness of interpretations generated by several widely used feature importance interpretation methods (feature importance maps, integrated gradients, and DeepLIFT) on ImageNet and CIFAR-10. In all cases, our experiments show that systematic perturbations can lead to dramatically different interpretations without changing the label. We extend these results to show that interpretations based on exemplars (e.g., influence functions) are similarly susceptible to adversarial attack. Our analysis of the geometry of the Hessian matrix gives insight into why robustness is a general challenge for current interpretation approaches.
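
The abstract describes the attack only at a high level, but the core idea can be sketched: perturb the input within a small L_inf ball to move saliency mass away from the originally most important pixels, accepting only steps that leave the predicted label unchanged. Below is a minimal PyTorch sketch of one such "top-k" style attack on simple-gradient saliency. All names and defaults (topk_attack, eps, lr, steps, k) are illustrative assumptions for this sketch, not the authors' released implementation, and the model is assumed to be a classifier taking a (1, C, H, W) image tensor with values in [0, 1].

```python
import torch

def saliency(model, x, label):
    # Simple-gradient feature importance: |d logit_label / d x|,
    # summed over channels and flattened to one score per pixel.
    # create_graph=True so the map itself can be differentiated.
    logit = model(x)[0, label]
    grad, = torch.autograd.grad(logit, x, create_graph=True)
    return grad.abs().sum(dim=1).flatten()

def topk_attack(model, x0, eps=8 / 255, lr=1 / 255, steps=100, k=1000):
    # Illustrative top-k interpretation attack (a sketch, not the
    # paper's code): push saliency mass off the pixels that were
    # originally most important, staying inside an L_inf ball of
    # radius eps and keeping the predicted label fixed.
    with torch.no_grad():
        label = model(x0).argmax(dim=1).item()
    x_ref = x0.clone().requires_grad_(True)
    topk = saliency(model, x_ref, label).detach().topk(k).indices

    delta = torch.zeros_like(x0, requires_grad=True)
    for _ in range(steps):
        x_adv = (x0 + delta).clamp(0.0, 1.0)
        # Loss = saliency mass on the originally top-k pixels;
        # differentiating it w.r.t. delta needs second-order gradients.
        loss = saliency(model, x_adv, label)[topk].sum()
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            candidate = (delta - lr * grad.sign()).clamp(-eps, eps)
            x_cand = (x0 + candidate).clamp(0.0, 1.0)
            # Accept the step only if the prediction is unchanged.
            if model(x_cand).argmax(dim=1).item() == label:
                delta.copy_(candidate)
    return (x0 + delta).clamp(0.0, 1.0).detach()
```

The label check is what makes the resulting pair of images an interpretation attack rather than an ordinary adversarial example: the classifier's output is preserved while the feature importance map is redistributed. Note the inner second-order gradient, which connects this construction to the paper's Hessian-based analysis of why gradient interpretations are fragile.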