Neural fields with hard constraints of arbitrary differential order

F Zhong, K Fogarty, P Hanji, T Wu, A Sztrajman… - arXiv preprint arXiv …, 2023 - arxiv.org
While deep learning techniques have become extremely popular for solving a broad range
of optimization problems, methods to enforce hard constraints during optimization …
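The simplest instance of a hard constraint enforced by construction — a classical recipe for zeroth-order (Dirichlet) boundary conditions, not necessarily this paper's more general method — multiplies the network output by a factor that vanishes on the boundary, so the constraint holds for any weights:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy one-hidden-layer MLP with random weights (illustration only).
W1, b1 = rng.normal(size=(16, 1)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def raw_net(x):
    h = np.tanh(W1 @ np.atleast_1d(x) + b1)
    return (W2 @ h + b2).item()

def u(x):
    # The factor x * (1 - x) vanishes at x = 0 and x = 1, so
    # u(0) = u(1) = 0 holds exactly for ANY network weights --
    # the boundary condition is hard, not penalized.
    return x * (1.0 - x) * raw_net(x)

print(u(0.0), u(1.0))  # both exactly zero
```

Because feasibility is built into the architecture, training can use an unconstrained optimizer; the open problem the paper addresses is extending such guarantees to constraints of arbitrary differential order.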

Kronecker-Factored Approximate Curvature for Physics-Informed Neural Networks

F Dangel, J Müller, M Zeinhofer - arXiv preprint arXiv:2405.15603, 2024 - arxiv.org
Physics-informed neural networks (PINNs) are infamous for being hard to train. Recently,
second-order methods based on natural gradient and Gauss-Newton methods have shown …

POLICE: Provably optimal linear constraint enforcement for deep neural networks

R Balestriero, Y LeCun - ICASSP 2023-2023 IEEE International …, 2023 - ieeexplore.ieee.org
Deep Neural Networks (DNNs) outshine alternative function approximators in many settings
thanks to their modularity in composing any desired differentiable operator. The formed …

Alternating differentiation for optimization layers

H Sun, Y Shi, J Wang, HD Tuan, HV Poor… - arXiv preprint arXiv …, 2022 - arxiv.org
The idea of embedding optimization problems into deep neural networks as optimization
layers to encode constraints and inductive priors has taken hold in recent years. Most …

Fast and accurate optimization on the orthogonal manifold without retraction

P Ablin, G Peyré - International Conference on Artificial …, 2022 - proceedings.mlr.press
We consider the problem of minimizing a function over the manifold of orthogonal matrices.
The majority of algorithms for this problem compute a direction in the tangent space, and …
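The standard tangent-direction-plus-retraction step that the snippet refers to (and that the paper's retraction-free method avoids) can be sketched as follows; the QR retraction, toy Procrustes objective, and step size are my assumptions for illustration:

```python
import numpy as np

def qr_retract(X):
    # Map an arbitrary matrix back to the orthogonal manifold via QR;
    # the sign fix makes the factorization (and hence the map) unique.
    Q, R = np.linalg.qr(X)
    return Q * np.sign(np.diag(R))

def riemannian_step(X, egrad, lr=0.1):
    # Project the Euclidean gradient onto the tangent space at X
    # (via the skew-symmetric part of G X^T) ...
    G = egrad(X)
    xi = (G @ X.T - X @ G.T) @ X
    # ... then retract the step back onto the manifold.
    return qr_retract(X - lr * xi)

# Toy problem: closest orthogonal matrix to A (orthogonal Procrustes).
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
egrad = lambda X: X - A  # gradient of 0.5 * ||X - A||_F^2

X = np.eye(3)
for _ in range(200):
    X = riemannian_step(X, egrad)
print(np.linalg.norm(X.T @ X - np.eye(3)))  # orthogonal up to float error
```

The QR factorization in every iteration is exactly the per-step cost that retraction-free methods trade away for cheaper updates that only converge to the manifold.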

MgNO: Efficient parameterization of linear operators via multigrid

J He, X Liu, J Xu - arXiv preprint arXiv:2310.19809, 2023 - arxiv.org
In this work, we propose a concise neural operator architecture for operator learning.
Drawing an analogy with a conventional fully connected neural network, we define the …

Rayen: Imposition of hard convex constraints on neural networks

J Tordesillas, JP How, M Hutter - arXiv preprint arXiv:2307.08336, 2023 - arxiv.org
This paper presents RAYEN, a framework to impose hard convex constraints on the output
or latent variable of a neural network. RAYEN guarantees that, for any input or any weights …
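One way to make such a guarantee concrete — a minimal sketch of the ray-scaling idea for a simple box constraint, with my own hypothetical names and an interior anchor point; RAYEN's actual construction handles general convex sets — is to shrink the raw output along a ray from a known interior point until it fits:

```python
import numpy as np

def box_ray_map(v, lo, hi):
    # Interior anchor point of the box [lo, hi] (here: its center).
    z0 = 0.5 * (lo + hi)
    # Largest alpha such that z0 + alpha * v stays inside the box,
    # computed coordinate-wise (alpha is +inf along zero coordinates).
    with np.errstate(divide="ignore"):
        upper = np.where(v > 0, (hi - z0) / v, np.inf)
        lower = np.where(v < 0, (lo - z0) / v, np.inf)
    alpha = min(upper.min(), lower.min())
    # Shrink the raw output only when it would leave the set, so the
    # result is feasible for ANY raw network output v.
    return z0 + min(1.0, alpha) * v

lo, hi = np.array([-1.0, 0.0]), np.array([1.0, 2.0])
print(box_ray_map(np.array([10.0, -5.0]), lo, hi))  # clipped to the boundary
print(box_ray_map(np.array([0.1, 0.2]), lo, hi))    # interior, left as z0 + v
```

The map is differentiable almost everywhere, so it can sit on top of a network's output layer and be trained end to end.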

Optimization over Trained Neural Networks: Taking a Relaxing Walk

J Tong, J Cai, T Serra - International Conference on the Integration of …, 2024 - Springer
Besides training, mathematical optimization is also used in deep learning to model and
solve formulations over trained neural networks for purposes such as verification …

McTorch, a manifold optimization library for deep learning

M Meghwanshi, P Jawanpuria, A Kunchukuttan… - arXiv preprint arXiv …, 2018 - arxiv.org
In this paper, we introduce McTorch, a manifold optimization library for deep learning that
extends PyTorch. It aims to lower the barrier for users wishing to use manifold constraints in …

Compositional Estimation of Lipschitz Constants for Deep Neural Networks

Y Xu, S Sivaranjani - arXiv preprint arXiv:2404.04375, 2024 - arxiv.org
The Lipschitz constant plays a crucial role in certifying the robustness of neural networks to
input perturbations and adversarial attacks, as well as the stability and safety of systems with …
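The baseline that compositional estimators refine is the naive product bound: since ReLU is 1-Lipschitz, the product of the layers' spectral norms upper-bounds the network's Lipschitz constant (a loose but always-valid certificate; the toy network below is my own illustration, not the paper's method):

```python
import numpy as np

def lipschitz_upper_bound(weights):
    # For f = W_L o relu o ... o relu o W_1 with 1-Lipschitz ReLU,
    # Lip(f) <= prod_i ||W_i||_2 (largest singular value per layer).
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

def relu_net(x, weights):
    h = np.asarray(x, dtype=float)
    for W in weights[:-1]:
        h = np.maximum(W @ h, 0.0)
    return (weights[-1] @ h).item()

rng = np.random.default_rng(0)
Ws = [rng.normal(size=(8, 4)), rng.normal(size=(8, 8)), rng.normal(size=(1, 8))]
print(lipschitz_upper_bound(Ws))  # certified bound, typically loose
```

Compositional methods tighten this by exploiting how Lipschitz certificates of subsystems interconnect, rather than multiplying worst-case layer norms.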