Authors
Yuval Shapira, Eran Avneri, Dana Drachsler-Cohen
Publication date
2023/4/6
Journal
Proceedings of the ACM on Programming Languages
Volume
7
Issue
OOPSLA1
Pages
434-461
Publisher
ACM
Description
While successful, neural networks have been shown to be vulnerable to adversarial example attacks. In L0 adversarial attacks, also known as few-pixel attacks, the attacker picks t pixels from the image and arbitrarily perturbs them. To understand the robustness level of a network to these attacks, it is required to check the robustness of the network to perturbations of every set of t pixels. Since the number of sets is exponentially large, existing robustness verifiers, which can reason about a single set of pixels at a time, are impractical for L0 robustness verification. We introduce Calzone, an L0 robustness verifier for neural networks. To the best of our knowledge, Calzone is the first to provide a sound and complete analysis for L0 adversarial attacks. Calzone builds on the following observation: if a classifier is robust to any perturbation of a set of k pixels, for k>t, then it is robust to any perturbation of its subsets of size t …
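The observation above is what makes L0 verification tractable: instead of checking all t-pixel sets, it suffices to verify a much smaller family of k-pixel sets whose size-t subsets jointly cover every t-subset of the image, i.e., a covering design. Below is a minimal sketch of this covering idea in Python, assuming toy sizes; the greedy construction and the helper name greedy_covering_design are illustrative only and are not Calzone's actual algorithm, which the truncated abstract does not specify.

# Illustration (not Calzone's implementation) of the covering observation:
# if every k-set in the design is verified robust, then every t-subset is
# covered, since each t-subset lies inside some k-set of the design.
from itertools import combinations

def greedy_covering_design(n, k, t):
    """Greedily pick k-sized subsets of range(n) until every
    t-sized subset is contained in at least one chosen k-set."""
    uncovered = {frozenset(c) for c in combinations(range(n), t)}
    design = []
    while uncovered:
        # Pick the k-set covering the most still-uncovered t-subsets.
        best = max(combinations(range(n), k),
                   key=lambda s: sum(1 for c in combinations(s, t)
                                     if frozenset(c) in uncovered))
        design.append(set(best))
        uncovered -= {frozenset(c) for c in combinations(best, t)}
    return design

n, k, t = 8, 4, 2  # toy sizes; real images have thousands of pixels
design = greedy_covering_design(n, k, t)
all_t_subsets = list(combinations(range(n), t))
print(f"k-sets to verify: {len(design)} vs. all t-subsets: {len(all_t_subsets)}")
# Sanity check: every t-subset is inside some k-set of the design.
assert all(any(set(c) <= s for s in design) for c in all_t_subsets)

Even at these toy sizes the design contains far fewer sets than there are t-subsets (each k-set covers C(k, t) of them), which is the source of the savings; the trade-off is that each k-set query is a harder verification problem than a t-set query.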