Enforcing hard constraints with soft barriers: Safe reinforcement learning in unknown stochastic environments

Y Wang, SS Zhan, R Jiao, Z Wang… - International …, 2023 - proceedings.mlr.press
It is quite challenging to ensure the safety of reinforcement learning (RL) agents in an
unknown and stochastic environment under hard constraints that require the system state …

Verisig 2.0: Verification of neural network controllers using Taylor model preconditioning

R Ivanov, T Carpenter, J Weimer, R Alur… - … on Computer Aided …, 2021 - Springer
This paper presents Verisig 2.0, a verification tool for closed-loop systems with
neural network (NN) controllers. We focus on NNs with tanh/sigmoid activations and develop …

NNV 2.0: the neural network verification tool

DM Lopez, SW Choi, HD Tran, TT Johnson - International Conference on …, 2023 - Springer
This manuscript presents the updated version of the Neural Network Verification (NNV) tool.
NNV is a formal verification software tool for deep learning models and cyber-physical …

Reachability analysis of neural feedback loops

M Everett, G Habibi, C Sun, JP How - IEEE Access, 2021 - ieeexplore.ieee.org
Neural Networks (NNs) can provide major empirical performance improvements for closed-
loop systems, but they also introduce challenges in formally analyzing those systems' safety …

Open- and closed-loop neural network verification using polynomial zonotopes

N Kochdumper, C Schilling, M Althoff, S Bak - NASA Formal Methods …, 2023 - Springer
We present a novel approach to efficiently compute tight non-convex enclosures of the
image through neural networks with ReLU, sigmoid, or hyperbolic tangent activation …

Verification of neural-network control systems by integrating Taylor models and zonotopes

C Schilling, M Forets, S Guadalupe - … of the AAAI Conference on Artificial …, 2022 - ojs.aaai.org
We study the verification problem for closed-loop dynamical systems with neural-network
controllers (NNCS). This problem is commonly reduced to computing the set of reachable …

POLAR: A polynomial arithmetic framework for verifying neural-network controlled systems

C Huang, J Fan, X Chen, W Li, Q Zhu - International Symposium on …, 2022 - Springer
We present POLAR (The source code can be found at https://github.com/ChaoHuang2018/POLAR_Tool. The full version of this paper can be found at https://arxiv …

Trainify: A CEGAR-Driven Training and Verification Framework for Safe Deep Reinforcement Learning

P Jin, J Tian, D Zhi, X Wen, M Zhang - International Conference on …, 2022 - Springer
Deep Reinforcement Learning (DRL) has demonstrated its strength in developing
intelligent systems. These systems shall be formally guaranteed to be trustworthy when …

Reachability analysis of neural network control systems

C Zhang, W Ruan, P Xu - Proceedings of the AAAI Conference on …, 2023 - ojs.aaai.org
Neural network controllers (NNCs) have shown great promise in autonomous and cyber-
physical systems. Despite the various verification approaches for neural networks, the safety …

Verifying controllers with vision-based perception using safe approximate abstractions

C Hsieh, Y Li, D Sun, K Joshi… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Fully formal verification of perception models is likely to remain challenging in the
foreseeable future, and yet these models are being integrated into safety-critical control …