Physics-informed neural networks (PINNs) are notoriously hard to train. Recently, second-order methods based on natural gradients and Gauss-Newton approximations have shown …
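For orientation, a minimal sketch of the update family these methods build on, assuming a least-squares PINN loss L(θ) = ½‖r(θ)‖² with residual vector r and Jacobian J_k at the current iterate (generic notation, not this paper's):

    \theta_{k+1} = \theta_k - \eta \left( J_k^\top J_k + \lambda I \right)^{-1} J_k^\top r(\theta_k)

Natural-gradient variants replace J_k^\top J_k with a Gramian or Fisher-type matrix; the damping term λI guards against rank deficiency.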
R Balestriero, Y LeCun - ICASSP 2023 - 2023 IEEE International …, 2023 - ieeexplore.ieee.org
Deep Neural Networks (DNNs) outshine alternative function approximators in many settings thanks to their modularity in composing any desired differentiable operator. The formed …
H Sun, Y Shi, J Wang, HD Tuan, HV Poor… - arXiv preprint arXiv …, 2022 - arxiv.org
The idea of embedding optimization problems into deep neural networks as optimization layers to encode constraints and inductive priors has taken hold in recent years. Most …
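To make the optimization-layer idea concrete, here is a minimal self-contained PyTorch sketch (a generic illustration, not this paper's method): a layer whose forward pass is the argmin of a strictly convex quadratic, so gradients flow through the solve itself.

    import torch

    class QuadraticArgminLayer(torch.nn.Module):
        """Forward pass returns argmin_y 0.5 * y^T Q y - x^T y = Q^{-1} x.
        torch.linalg.solve is differentiable, so the layer trains end to end."""
        def __init__(self, dim: int):
            super().__init__()
            # Parametrize Q = A A^T + eps*I so it stays positive definite.
            self.A = torch.nn.Parameter(torch.randn(dim, dim) / dim ** 0.5)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            Q = self.A @ self.A.T + 1e-3 * torch.eye(self.A.shape[0])
            return torch.linalg.solve(Q, x.unsqueeze(-1)).squeeze(-1)

    layer = QuadraticArgminLayer(4)
    y = layer(torch.randn(8, 4))   # batch of 8 inputs
    y.sum().backward()             # gradients reach layer.A through the argmin

Real optimization layers handle constrained problems where no closed form exists, typically by differentiating through the solver's optimality conditions.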
P Ablin, G Peyré - International Conference on Artificial …, 2022 - proceedings.mlr.press
We consider the problem of minimizing a function over the manifold of orthogonal matrices. The majority of algorithms for this problem compute a direction in the tangent space, and …
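A bare-bones NumPy sketch of the tangent-direction-plus-retraction pattern the snippet alludes to (one Riemannian gradient step on the orthogonal manifold; illustrative of the majority approach, whereas the paper itself proposes avoiding the retraction):

    import numpy as np

    def orthogonal_gd_step(X, egrad, lr=0.1):
        """X: orthogonal matrix; egrad: Euclidean gradient of the objective at X.
        Move along a tangent direction, then retract with QR to stay orthogonal."""
        G = egrad(X)
        Omega = (G @ X.T - X @ G.T) / 2     # skew-symmetric tangent generator
        Y = X - lr * Omega @ X              # descent step in the tangent direction
        Q, R = np.linalg.qr(Y)              # QR retraction back onto the manifold
        return Q * np.sign(np.diag(R))      # sign fix for a canonical Q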
In this work, we propose a concise neural operator architecture for operator learning. Drawing an analogy with a conventional fully connected neural network, we define the …
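For reference, the generic layer that most neural operator architectures instantiate (standard in the operator-learning literature; the paper's own definition is cut off above):

    v_{t+1}(x) = \sigma\!\left( W\, v_t(x) + \int_D \kappa(x, y)\, v_t(y)\, \mathrm{d}y \right)

i.e., a pointwise linear map plus a learned kernel integral, with architectures differing mainly in how the integral operator is parametrized.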
This paper presents RAYEN, a framework to impose hard convex constraints on the output or latent variable of a neural network. RAYEN guarantees that, for any input or any weights …
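The general flavor of such a hard-constraint mapping, sketched for a polytope {x : Ax <= b} with a known interior point x0 (a generic ray-scaling construction shown for illustration; RAYEN's actual layer is more general):

    import numpy as np

    def constrain(A, b, x0, v, eps=1e-9):
        """Map an unconstrained network output v to a point guaranteed to
        satisfy A x <= b, by scaling v along the ray from the strictly
        feasible anchor x0."""
        slack = b - A @ x0                  # > 0 since x0 is interior
        Av = A @ v
        # Largest t with A(x0 + t v) <= b, over rows where v moves outward.
        with np.errstate(divide="ignore"):
            t_max = np.min(np.where(Av > eps, slack / Av, np.inf))
        return x0 + min(1.0, t_max) * v

Because the output is feasible by construction for any input and any weights, no projection or penalty term is needed at training or inference time.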
J Tong, J Cai, T Serra - International Conference on the Integration of …, 2024 - Springer
Besides training, mathematical optimization is also used in deep learning to model and solve formulations over trained neural networks for purposes such as verification …
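The core modeling device in this line of work is the mixed-integer encoding of a trained ReLU neuron. The standard big-M formulation (a known formulation, stated here for context) for y = max(0, w^T x + b) with pre-activation bounds L <= w^T x + b <= U:

    y \ge w^\top x + b, \quad y \ge 0, \quad
    y \le w^\top x + b - L(1 - z), \quad y \le U z, \quad z \in \{0, 1\}

The binary z selects the active or inactive phase of the neuron; stacking one such block per neuron turns queries over the trained network into a MILP.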
In this paper, we introduce McTorch, a manifold optimization library for deep learning that extends PyTorch. It aims to lower the barrier for users wishing to use manifold constraints in …
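In spirit, what such a library automates is sketched below in plain PyTorch: keep a weight orthonormal by retracting after every optimizer step (a hand-rolled illustration, not McTorch's actual API, which wraps this into manifold-aware parameters and optimizers):

    import torch

    # Start from an orthonormal matrix and fit it to a target.
    W = torch.nn.Parameter(torch.linalg.qr(torch.randn(8, 8)).Q)
    target = torch.randn(8, 8)
    opt = torch.optim.SGD([W], lr=1e-2)

    for _ in range(200):
        opt.zero_grad()
        loss = ((W - target) ** 2).sum()
        loss.backward()
        opt.step()
        with torch.no_grad():               # retract back onto the manifold
            Q, R = torch.linalg.qr(W)
            W.copy_(Q * torch.sign(torch.diagonal(R)))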
Y Xu, S Sivaranjani - arXiv preprint arXiv:2404.04375, 2024 - arxiv.org
The Lipschitz constant plays a crucial role in certifying the robustness of neural networks to input perturbations and adversarial attacks, as well as the stability and safety of systems with …
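For context, the crude baseline that certified estimation methods improve on (a simple sketch, not this paper's approach): the product of layer spectral norms upper-bounds the Lipschitz constant of a feed-forward network with 1-Lipschitz activations.

    import torch

    def spectral_norm_product(model: torch.nn.Module) -> float:
        """Loose Lipschitz upper bound: multiply the spectral norms of all
        Linear layers (valid when activations such as ReLU are 1-Lipschitz)."""
        bound = 1.0
        for module in model.modules():
            if isinstance(module, torch.nn.Linear):
                bound *= torch.linalg.matrix_norm(module.weight, ord=2).item()
        return bound

    net = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.ReLU(),
                              torch.nn.Linear(16, 1))
    print(spectral_norm_product(net))

This bound ignores how activation patterns couple across layers, which is exactly the looseness that SDP- and polytope-based certification methods aim to close.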