Authors
Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, Martin Vechev
Publication date
2018/5/20
Conference
IEEE Symposium on Security and Privacy (SP) 2018
Publisher
IEEE
Description
We present AI², the first sound and scalable analyzer for deep neural networks. Based on overapproximation, AI² can automatically prove safety properties (e.g., robustness) of realistic neural networks (e.g., convolutional neural networks). The key insight behind AI² is to phrase reasoning about safety and robustness of neural networks in terms of classic abstract interpretation, enabling us to leverage decades of advances in that area. Concretely, we introduce abstract transformers that capture the behavior of fully connected and convolutional neural network layers with rectified linear unit activations (ReLU), as well as max pooling layers. This allows us to handle real-world neural networks, which are often built out of those types of layers. We present a complete implementation of AI² together with an extensive evaluation on 20 neural networks. Our results demonstrate that: (i) AI² is precise enough to prove …
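To make the idea of sound overapproximation concrete, below is a minimal sketch using the interval (box) abstract domain, one of the simplest domains; AI² itself relies on more precise domains such as zonotopes, and the network weights, layer sizes, and the `affine_transform`/`relu_transform` helper names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Minimal sketch (interval/box domain): propagate sound lower/upper bounds
# through an affine layer and a ReLU, then check a toy robustness property.

def affine_transform(lo, hi, W, b):
    """Soundly propagate the box [lo, hi] through x -> W @ x + b."""
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius   # worst-case spread of each output
    return new_center - new_radius, new_center + new_radius

def relu_transform(lo, hi):
    """ReLU is monotone, so applying it to both bounds stays sound."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy robustness query: every input in an L-infinity ball of radius eps
# around x0 should keep class 0 as the top class (hypothetical network).
x0  = np.array([0.5, -0.2])
eps = 0.05
lo, hi = x0 - eps, x0 + eps

W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.1, -0.3])
W2, b2 = np.array([[2.0, 0.0], [0.0, 1.0]]), np.zeros(2)

lo, hi = relu_transform(*affine_transform(lo, hi, W1, b1))
lo, hi = affine_transform(lo, hi, W2, b2)

# Certified if the worst-case score of class 0 beats the best case of class 1.
print("robustness certified:", lo[0] > hi[1])
```

Because every transformer overapproximates the layer's true behavior, a "certified" answer is a proof of robustness; a negative answer may simply reflect imprecision of the chosen domain.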
Total citations
Citations per year: 2018: 36, 2019: 116, 2020: 168, 2021: 170, 2022: 190, 2023: 214, 2024: 98
Scholar articles
T Gehr, M Mirman, D Drachsler-Cohen, P Tsankov… - 2018 IEEE Symposium on Security and Privacy (SP), 2018