First three years of the international verification of neural networks competition (VNN-COMP)

C Brix, MN Müller, S Bak, TT Johnson, C Liu - International Journal on …, 2023 - Springer
This paper presents a summary and meta-analysis of the first three iterations of the annual
International Verification of Neural Networks Competition (VNN-COMP), held in 2020, 2021 …

SoK: Certified robustness for deep neural networks

L Li, T Xie, B Li - 2023 IEEE symposium on security and privacy …, 2023 - ieeexplore.ieee.org
Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on
a wide range of tasks. However, recent studies have shown that DNNs are vulnerable to …

Generative ai and process systems engineering: The next frontier

B Decardi-Nelson, AS Alshehri, A Ajagekar… - Computers & Chemical …, 2024 - Elsevier
This review article explores how emerging generative artificial intelligence (GenAI) models,
such as large language models (LLMs), can enhance solution methodologies within process …

Certified training: Small boxes are all you need

MN Müller, F Eckert, M Fischer, M Vechev - arXiv preprint arXiv …, 2022 - arxiv.org
To obtain deterministic guarantees of adversarial robustness, specialized training methods
are used. We propose SABR, a novel certified training method, based on the key …

Efficiently computing local Lipschitz constants of neural networks via bound propagation

Z Shi, Y Wang, H Zhang, JZ Kolter… - Advances in Neural …, 2022 - proceedings.neurips.cc
Lipschitz constants are connected to many properties of neural networks, such as
robustness, fairness, and generalization. Existing methods for computing Lipschitz constants …

The third international verification of neural networks competition (VNN-COMP 2022): Summary and results

MN Müller, C Brix, S Bak, C Liu, TT Johnson - arXiv preprint arXiv …, 2022 - arxiv.org
This report summarizes the 3rd International Verification of Neural Networks Competition
(VNN-COMP 2022), held as a part of the 5th Workshop on Formal Methods for ML-Enabled …

Connecting certified and adversarial training

Y Mao, M Müller, M Fischer… - Advances in Neural …, 2024 - proceedings.neurips.cc
Training certifiably robust neural networks remains a notoriously hard problem. While
adversarial training optimizes under-approximations of the worst-case loss, which leads to …

The fourth international verification of neural networks competition (VNN-COMP 2023): Summary and results

C Brix, S Bak, C Liu, TT Johnson - arXiv preprint arXiv:2312.16760, 2023 - arxiv.org
This report summarizes the 4th International Verification of Neural Networks Competition
(VNN-COMP 2023), held as a part of the 6th Workshop on Formal Methods for ML-Enabled …

Exact verification of ReLU neural control barrier functions

H Zhang, J Wu, Y Vorobeychik… - Advances in Neural …, 2024 - proceedings.neurips.cc
Control Barrier Functions (CBFs) are a popular approach for safe control of nonlinear
systems. In CBF-based control, the desired safety properties of the system are …

Marabou 2.0: A versatile formal analyzer of neural networks

H Wu, O Isac, A Zeljić, T Tagomori, M Daggitt… - arXiv preprint arXiv …, 2024 - arxiv.org
arXiv:2401.14461 [cs.AI], 25 Jan 2024