SoK: Certified robustness for deep neural networks

L Li, T Xie, B Li - 2023 IEEE symposium on security and privacy …, 2023 - ieeexplore.ieee.org
Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on
a wide range of tasks. However, recent studies have shown that DNNs are vulnerable to …

Marabou 2.0: a versatile formal analyzer of neural networks

H Wu, O Isac, A Zeljić, T Tagomori, M Daggitt… - … on Computer Aided …, 2024 - Springer

Verification of image-based neural network controllers using generative models

SM Katz, AL Corso, CA Strong… - Journal of Aerospace …, 2022 - arc.aiaa.org
Although neural networks are effective tools for processing information from image-based
sensors to produce control actions, their complex nature limits their use in safety-critical …

Toward certified robustness against real-world distribution shifts

H Wu, T Tagomori, A Robey, F Yang… - … IEEE Conference on …, 2023 - ieeexplore.ieee.org
We consider the problem of certifying the robustness of deep neural networks against
real-world distribution shifts. To do so, we bridge the gap between hand-crafted specifications …

From robustness to explainability and back again

X Huang, J Marques-Silva - arXiv preprint arXiv:2306.03048, 2023 - arxiv.org
In contrast with ad-hoc methods for eXplainable Artificial Intelligence (XAI), formal
explainability offers important guarantees of rigor. However, formal explainability is hindered …

Latent space smoothing for individually fair representations

M Peychev, A Ruoss, M Balunović, M Baader… - … on Computer Vision, 2022 - Springer
Fair representation learning transforms user data into a representation that ensures fairness
and utility regardless of the downstream application. However, learning individually fair …

Mathematical algorithm design for deep learning under societal and judicial constraints: The algorithmic transparency requirement

H Boche, A Fono, G Kutyniok - arXiv preprint arXiv:2401.10310, 2024 - arxiv.org
Deep learning still has drawbacks in terms of trustworthiness, i.e., being a
comprehensible, fair, safe, and reliable method. To mitigate the potential risk of AI, clear …

Make sure you're unsure: A framework for verifying probabilistic specifications

L Berrada, S Dathathri, K Dvijotham… - Advances in …, 2021 - proceedings.neurips.cc
Most real world applications require dealing with stochasticity like sensor noise or predictive
uncertainty, where formal specifications of desired behavior are inherently probabilistic …

A mathematical framework for computability aspects of algorithmic transparency

H Boche, A Fono, G Kutyniok - 2024 IEEE International …, 2024 - ieeexplore.ieee.org
The lack of trustworthiness is a major downside of deep learning. To mitigate the associated
risks, clear obligations of deep learning models have been proposed via regulatory …

Precise and generalized robustness certification for neural networks

Y Yuan, S Wang, Z Su - 32nd USENIX Security Symposium (USENIX …, 2023 - usenix.org
The objective of neural network (NN) robustness certification is to determine if an NN changes
its predictions when mutations are made to its inputs. While most certification research …