This paper deals with distributed reinforcement learning problems with safety constraints. In particular, we consider a team of agents cooperating in a shared environment, where …
MF Sahin, A Alacaoglu, F Latorre… - Advances in Neural …, 2019 - proceedings.neurips.cc
We propose a practical inexact augmented Lagrangian method (iALM) for nonconvex problems with nonlinear constraints. We characterize the total computational complexity of …
We consider minimizing a nonconvex, smooth function $f$ on a Riemannian manifold $\mathcal{M}$. We show that a perturbed version of the gradient descent algorithm …
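The snippet above refers to perturbed gradient descent, which escapes saddle points by injecting random noise whenever the gradient is small. A minimal Euclidean sketch of that idea (the Riemannian version would also need a retraction; the toy objective, step sizes, and noise radius below are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

# Toy nonconvex objective with a strict saddle at (0, 0) and minima at (+-1, 0).
def f(z):
    x, y = z
    return (x**2 - 1.0)**2 + y**2

def grad_f(z):
    x, y = z
    return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

def perturbed_gd(z, step=0.05, tol=1e-3, radius=1e-2, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    z = np.asarray(z, dtype=float)
    for _ in range(iters):
        g = grad_f(z)
        if np.linalg.norm(g) <= tol:
            # Near a stationary point: perturb to escape a possible saddle.
            z = z + radius * rng.standard_normal(z.size)
        else:
            z = z - step * g  # ordinary gradient step elsewhere
    return z
```

Started exactly at the saddle `(0, 0)`, plain gradient descent never moves, while the perturbed variant drifts off the unstable direction and reaches one of the two minima.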
Q Lin, R Ma, Y Xu - Computational optimization and applications, 2022 - Springer
In this paper, an inexact proximal-point penalty method is studied for constrained optimization problems, where the objective function is non-convex, and the constraint …
J Zeng, W Yin, DX Zhou - Journal of Scientific Computing, 2022 - Springer
The augmented Lagrangian method (ALM) is one of the most useful methods for constrained optimization. Its convergence has been well established under convexity assumptions or …
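The ALM loop referred to above alternates an (often inexact) primal minimization of the augmented Lagrangian with a first-order multiplier update. A minimal sketch on a toy equality-constrained problem (the toy $f$, $c$, penalty, and step sizes are assumptions for illustration, not taken from the paper):

```python
import numpy as np

# Toy problem: min ||x||^2  subject to  c(x) = x1 + x2 - 1 = 0.
# Solution: x* = (0.5, 0.5) with multiplier lam* = -1.
def f_grad(x):
    return 2.0 * x

def c(x):
    return x[0] + x[1] - 1.0

def c_grad(x):
    return np.ones(2)

def alm(x=None, lam=0.0, rho=10.0, outer=30, inner=500, step=0.04):
    if x is None:
        x = np.zeros(2)
    for _ in range(outer):
        # Inexact primal solve: gradient descent on
        # L_rho(x, lam) = f(x) + lam*c(x) + (rho/2)*c(x)^2.
        for _ in range(inner):
            g = f_grad(x) + (lam + rho * c(x)) * c_grad(x)
            x = x - step * g
        # First-order (Hestenes-Powell) multiplier update.
        lam = lam + rho * c(x)
    return x, lam
```

On this quadratic toy the multiplier iterates contract toward $\lambda^* = -1$ at rate $1/(1+\rho)$ per outer iteration, so a fixed, moderate penalty already suffices.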
This paper proposes two efficient algorithms for computing approximate second-order stationary points (SOSPs) of problems with generic smooth non-convex objective functions …
M O'Neill, SJ Wright - IMA Journal of Numerical Analysis, 2021 - academic.oup.com
We describe an algorithm based on a logarithmic barrier function, Newton's method and linear conjugate gradients that seeks an approximate minimizer of a smooth function over …
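The barrier idea mentioned above replaces an inequality constraint with a logarithmic penalty and solves a sequence of unconstrained subproblems by Newton's method as the barrier weight shrinks. A one-dimensional sketch (the toy objective and the schedule are assumptions; the cited method solves the Newton systems with linear conjugate gradients, which a scalar example does not need):

```python
# Toy problem: min (x + 1)^2  subject to  x >= 0; solution x* = 0 (boundary).
# Barrier subproblem: min_x  (x + 1)^2 - mu * log(x)  over x > 0.
def barrier_newton(mu=1.0, x=1.0, outer=30, inner=50, shrink=0.5):
    for _ in range(outer):
        for _ in range(inner):
            g = 2.0 * (x + 1.0) - mu / x      # gradient of the barrier function
            h = 2.0 + mu / x**2               # Hessian; positive for x > 0
            step = g / h                       # Newton step
            # Damp the step so the iterate stays strictly feasible (x > 0).
            while x - step <= 0.0:
                step *= 0.5
            x -= step
        mu *= shrink                           # tighten the barrier
    return x
```

Each subproblem's minimizer is $x_\mu = (-1 + \sqrt{1 + 2\mu})/2 \approx \mu/2$, so as $\mu \to 0$ the iterates approach the constrained solution on the boundary from the interior.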
Q Li, D McKenzie, W Yin - … and Inference: A Journal of the IMA, 2023 - academic.oup.com
The standard simplex in $\mathbb{R}^n$, also known as the probability simplex, is the set of nonnegative vectors whose entries sum to 1. It frequently appears as a constraint in optimization …
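Euclidean projection onto the probability simplex can be computed exactly with the classic sort-and-threshold algorithm. A short sketch of that standard routine (illustrative background for the constraint the snippet describes, not necessarily the cited paper's approach):

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1}.

    Sort-based algorithm: find the largest support for which the
    shifted entries v_i - theta remain positive, then clip at zero.
    """
    v = np.asarray(v, dtype=float)
    n = v.size
    u = np.sort(v)[::-1]                 # entries in decreasing order
    css = np.cumsum(u)
    # Largest index rho with u_rho * (rho+1) > css_rho - 1 (0-based).
    rho = np.nonzero(u * np.arange(1, n + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)
```

For example, `project_to_simplex([2.0, 0.0])` returns `[1.0, 0.0]`: the threshold $\theta = 1$ shifts the first entry onto the simplex and clips the second at zero.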
Nonsmooth optimization problems arising in practice, whether in signal processing, statistical estimation, or modern machine learning, tend to exhibit beneficial smooth …