| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| Towards evaluating the robustness of neural networks | N Carlini, D Wagner | 2017 IEEE Symposium on Security and Privacy (SP), 39-57 | 9393 | 2017 |
| Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples | A Athalye, N Carlini, D Wagner | ICML 2018 | 3398 | 2018 |
| MixMatch: A holistic approach to semi-supervised learning | D Berthelot, N Carlini, I Goodfellow, N Papernot, A Oliver, CA Raffel | Advances in Neural Information Processing Systems, 5050-5060 | 3300 | 2019 |
| FixMatch: Simplifying semi-supervised learning with consistency and confidence | K Sohn, D Berthelot, CL Li, Z Zhang, N Carlini, ED Cubuk, A Kurakin, ... | arXiv preprint arXiv:2001.07685 | 3268 | 2020 |
| Adversarial examples are not easily detected: Bypassing ten detection methods | N Carlini, D Wagner | Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security … | 1977 | 2017 |
| Extracting training data from large language models | N Carlini, F Tramer, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ... | 30th USENIX Security Symposium (USENIX Security 21), 2633-2650 | 1346 | 2021 |
| Audio adversarial examples: Targeted attacks on speech-to-text | N Carlini, D Wagner | 2018 IEEE Security and Privacy Workshops (SPW), 1-7 | 1258 | 2018 |
| The Secret Sharer: Evaluating and testing unintended memorization in neural networks | N Carlini, C Liu, J Kos, Ú Erlingsson, D Song | | 1239* | 2019 |
| ReMixMatch: Semi-supervised learning with distribution alignment and augmentation anchoring | D Berthelot, N Carlini, ED Cubuk, A Kurakin, K Sohn, H Zhang, C Raffel | arXiv preprint arXiv:1911.09785 | 1111 | 2019 |
| On evaluating adversarial robustness | N Carlini, A Athalye, N Papernot, W Brendel, J Rauber, D Tsipras, ... | arXiv preprint arXiv:1902.06705 | 952 | 2019 |
| On adaptive attacks to adversarial example defenses | F Tramer, N Carlini, W Brendel, A Madry | Advances in Neural Information Processing Systems 33, 1633-1645 | 850 | 2020 |
| Hidden Voice Commands | N Carlini, P Mishra, T Vaidya, Y Zhang, M Sherr, C Shields, D Wagner, ... | USENIX Security Symposium, 513-530 | 762 | 2016 |
| cleverhans v2.0.0: an adversarial machine learning library | N Papernot, N Carlini, I Goodfellow, R Feinman, F Faghri, A Matyasko, ... | arXiv preprint arXiv:1610.00768 | 717* | 2016 |
| Control-flow bending: On the effectiveness of control-flow integrity | N Carlini, A Barresi, M Payer, D Wagner, TR Gross | 24th USENIX Security Symposium (USENIX Security 15), 161-176 | 558 | 2015 |
| Provably minimally-distorted adversarial examples | N Carlini, G Katz, C Barrett, DL Dill | arXiv preprint arXiv:1709.10207 | 544* | 2017 |
| Measuring robustness to natural distribution shifts in image classification | R Taori, A Dave, V Shankar, N Carlini, B Recht, L Schmidt | arXiv preprint arXiv:2007.00644 | 507 | 2020 |
| ROP is still dangerous: Breaking modern defenses | N Carlini, D Wagner | 23rd USENIX Security Symposium (USENIX Security 14), 385-399 | 506 | 2014 |
| Universal and transferable adversarial attacks on aligned language models | A Zou, Z Wang, N Carlini, M Nasr, JZ Kolter, M Fredrikson | arXiv preprint arXiv:2307.15043 | 460 | 2023 |
| Imperceptible, robust, and targeted adversarial examples for automatic speech recognition | Y Qin, N Carlini, G Cottrell, I Goodfellow, C Raffel | International Conference on Machine Learning, 5231-5240 | 456 | 2019 |
| Membership inference attacks from first principles | N Carlini, S Chien, M Nasr, S Song, A Terzis, F Tramer | 2022 IEEE Symposium on Security and Privacy (SP), 1897-1914 | 441 | 2022 |