Efficient algorithms for smooth minimax optimization

KK Thekumparampil, P Jain… - Advances in Neural …, 2019 - proceedings.neurips.cc
This paper studies first-order methods for solving smooth minimax optimization problems
$\min_x \max_y g(x, y)$ where $g(\cdot,\cdot)$ is smooth and $g(x,\cdot)$ is concave for …
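
For orientation, a minimal sketch of plain (alternating) gradient descent-ascent for this problem class, with hypothetical callables grad_x and grad_y supplying the partial gradients of g; this is only the textbook baseline, not the accelerated method the paper proposes:

```python
import numpy as np

def gda(grad_x, grad_y, x0, y0, eta_x=0.05, eta_y=0.05, iters=1000):
    """Alternating gradient descent-ascent on g(x, y): descend in x,
    ascend in y (g(x, .) concave). Baseline only, not the paper's method."""
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    for _ in range(iters):
        x = x - eta_x * grad_x(x, y)   # minimization step in x
        y = y + eta_y * grad_y(x, y)   # maximization step in y
    return x, y

# Toy example: g(x, y) = 0.5*x^2 + x*y - 0.5*y^2, saddle point at (0, 0)
x, y = gda(lambda x, y: x + y, lambda x, y: x - y,
           np.array([1.0]), np.array([1.0]))
```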

Decentralized policy gradient descent ascent for safe multi-agent reinforcement learning

S Lu, K Zhang, T Chen, T Başar, L Horesh - Proceedings of the AAAI …, 2021 - ojs.aaai.org
This paper deals with distributed reinforcement learning problems with safety constraints. In
particular, we consider that a team of agents cooperate in a shared environment, where …
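
The underlying single-agent template is a primal-dual (Lagrangian) iteration for a reward objective under a cost constraint; a schematic sketch, with hypothetical callables grad_reward, grad_cost, and cost_value standing in for policy-gradient estimates (the paper's actual contribution, the decentralized multi-agent scheme, is not reproduced here):

```python
import numpy as np

def primal_dual_safe_rl(grad_reward, grad_cost, cost_value, theta0,
                        budget, eta_theta=1e-2, eta_lam=1e-2, iters=500):
    """Schematic primal-dual iteration for
        max_theta J_r(theta)  s.t.  J_c(theta) <= budget,
    via the Lagrangian L = J_r - lam * (J_c - budget): gradient ascent
    on the policy parameters, projected gradient step on the multiplier."""
    theta, lam = np.asarray(theta0, float), 0.0
    for _ in range(iters):
        theta = theta + eta_theta * (grad_reward(theta) - lam * grad_cost(theta))
        lam = max(0.0, lam + eta_lam * (cost_value(theta) - budget))  # keep lam >= 0
    return theta, lam
```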

An inexact augmented Lagrangian framework for nonconvex optimization with nonlinear constraints

MF Sahin, A Alacaoglu, F Latorre… - Advances in Neural …, 2019 - proceedings.neurips.cc
We propose a practical inexact augmented Lagrangian method (iALM) for nonconvex
problems with nonlinear constraints. We characterize the total computational complexity of …
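
A bare-bones rendering of the iALM template for equality constraints, assuming user-supplied smooth f and vector-valued c and using SciPy's BFGS as the inexact inner solver; the paper's specific tolerance and penalty schedules, which drive its complexity results, are not reproduced:

```python
import numpy as np
from scipy.optimize import minimize

def inexact_alm(f, c, x0, beta0=1.0, rho=2.0, outer=20, tol0=1e-1):
    """Sketch of an inexact augmented Lagrangian method for
        min f(x)  s.t.  c(x) = 0.
    Each subproblem is minimized only to a loose, gradually tightening
    tolerance; the penalty parameter beta grows geometrically."""
    x = np.asarray(x0, float)
    y, beta, tol = np.zeros_like(np.asarray(c(x), float)), beta0, tol0
    for _ in range(outer):
        aug = lambda z: f(z) + y @ c(z) + 0.5 * beta * np.sum(np.square(c(z)))
        x = minimize(aug, x, method="BFGS", options={"gtol": tol}).x  # inexact solve
        y = y + beta * np.asarray(c(x))   # first-order multiplier update
        beta *= rho                       # strengthen the penalty
        tol = max(tol / rho, 1e-8)        # tighten inner accuracy
    return x, y
```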

Escaping from saddle points on Riemannian manifolds

Y Sun, N Flammarion, M Fazel - Advances in Neural …, 2019 - proceedings.neurips.cc
We consider minimizing a nonconvex, smooth function $f$ on a Riemannian manifold
$\mathcal{M}$. We show that a perturbed version of the gradient descent algorithm …
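
As a concrete instance, a sketch of perturbed Riemannian gradient descent on the unit sphere (one choice of $\mathcal{M}$), with a hypothetical callable grad returning the Euclidean gradient; illustrative only, since the paper's analysis covers general manifolds:

```python
import numpy as np

def perturbed_rgd_sphere(grad, x0, eta=1e-2, g_tol=1e-3, radius=1e-2, iters=2000):
    """Perturbed Riemannian gradient descent on {x : ||x|| = 1}:
    project the Euclidean gradient onto the tangent space, step, retract
    by normalization, and inject a small random tangent perturbation when
    the Riemannian gradient is small (to escape saddle points)."""
    rng = np.random.default_rng(0)
    x = np.asarray(x0, float)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        g = grad(x)
        rg = g - (g @ x) * x               # Riemannian gradient (tangent projection)
        if np.linalg.norm(rg) < g_tol:     # near first-order stationarity:
            xi = rng.normal(size=x.shape)  # ... perturb within the tangent space
            xi -= (xi @ x) * x
            rg = rg + radius * xi / np.linalg.norm(xi)
        x = x - eta * rg
        x /= np.linalg.norm(x)             # retraction back to the sphere
    return x
```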

Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization

Q Lin, R Ma, Y Xu - Computational optimization and applications, 2022 - Springer
In this paper, an inexact proximal-point penalty method is studied for constrained
optimization problems, where the objective function is non-convex, and the constraint …
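
In simplified form, for an equality-constrained problem $\min_x f(x)$ s.t. $c(x) = 0$, each outer iteration of such a method solves a proximally regularized penalty subproblem and then raises the penalty; a schematic rendering (the symbols $\beta_k$, $\rho_k$, $\sigma$ are generic placeholders, not the paper's exact parameter rules):

```latex
x_{k+1} \approx \operatorname*{arg\,min}_{x}\;
  f(x) + \frac{\beta_k}{2}\,\lVert c(x)\rVert^2
       + \frac{\rho_k}{2}\,\lVert x - x_k\rVert^2,
\qquad
\beta_{k+1} = \sigma\,\beta_k \quad (\sigma > 1).
```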

Moreau envelope augmented Lagrangian method for nonconvex optimization with linear constraints

J Zeng, W Yin, DX Zhou - Journal of Scientific Computing, 2022 - Springer
The augmented Lagrangian method (ALM) is one of the most useful methods for constrained
optimization. Its convergence has been well established under convexity assumptions or …
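
The Moreau envelope that gives the method its name is the standard quadratic smoothing of a function $f$; for convex (or suitably weakly convex) $f$ it is differentiable, with a gradient given by the proximal map:

```latex
f_{\lambda}(x) \;=\; \min_{z}\; f(z) + \frac{1}{2\lambda}\,\lVert z - x\rVert^2,
\qquad
\nabla f_{\lambda}(x) \;=\; \frac{1}{\lambda}\bigl(x - \operatorname{prox}_{\lambda f}(x)\bigr).
```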

Finding second-order stationary points efficiently in smooth nonconvex linearly constrained optimization problems

S Lu, M Razaviyayn, B Yang… - Advances in Neural …, 2020 - proceedings.neurips.cc
This paper proposes two efficient algorithms for computing approximate second-order
stationary points (SOSPs) of problems with generic smooth non-convex objective functions …
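
For equality constraints $Ax = b$, one common way to formalize an approximate SOSP (stated here as a generic definition, not necessarily the paper's exact one) measures stationarity and curvature in the feasible subspace, with $Z$ an orthonormal basis for the null space of $A$:

```latex
\lVert Z^{\top}\nabla f(x)\rVert \le \epsilon_g,
\qquad
\lambda_{\min}\!\bigl(Z^{\top}\nabla^2 f(x)\,Z\bigr) \ge -\epsilon_H .
```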

A log-barrier Newton-CG method for bound constrained optimization with complexity guarantees

M O'Neill, SJ Wright - IMA Journal of Numerical Analysis, 2021 - academic.oup.com
We describe an algorithm based on a logarithmic barrier function, Newton's method and
linear conjugate gradients that seeks an approximate minimizer of a smooth function over …
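
A stripped-down version of the barrier idea, for lower bounds $x \ge 0$ only, using damped Newton steps with a feasibility backstep; grad_f and hess_f are hypothetical user-supplied callables and x0 must be strictly positive. The paper's actual algorithm replaces the exact Newton solve with linear conjugate gradients plus negative-curvature steps to obtain its complexity guarantees:

```python
import numpy as np

def barrier_newton(grad_f, hess_f, x0, mu0=1.0, shrink=0.1, rounds=6, inner=50):
    """Sketch of a log-barrier method for  min f(x)  s.t.  x >= 0:
    minimize f(x) - mu * sum(log x) by damped Newton steps, then shrink mu.
    Assumes the barrier Hessian stays invertible along the path."""
    x, mu = np.asarray(x0, float), mu0
    for _ in range(rounds):
        for _ in range(inner):
            g = grad_f(x) - mu / x                # gradient of the barrier
            if np.linalg.norm(g) < 1e-8:
                break
            H = hess_f(x) + np.diag(mu / x**2)    # Hessian of the barrier
            p = np.linalg.solve(H, -g)            # Newton direction
            t = 1.0
            while np.any(x + t * p <= 0):         # damp to stay strictly feasible
                t *= 0.5
            x = x + t * p
        mu *= shrink                              # tighten the barrier
    return x
```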

From the simplex to the sphere: faster constrained optimization using the Hadamard parametrization

Q Li, D McKenzie, W Yin - … and Inference: A Journal of the IMA, 2023 - academic.oup.com
The standard simplex in $\mathbb{R}^n$, also known as the probability simplex, is the set of nonnegative
vectors whose entries sum up to 1. It frequently appears as a constraint in optimization …
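
The parametrization itself is short enough to show: writing $x = w \circ w$ (entrywise square) with $\lVert w\rVert_2 = 1$ places $x$ on the probability simplex, so a simplex-constrained problem becomes a sphere-constrained one. A minimal sketch solved with plain Riemannian gradient descent; grad_f is a hypothetical callable, and this is not the paper's full algorithm or analysis:

```python
import numpy as np

def simplex_min_via_sphere(grad_f, n, eta=1e-2, iters=2000, seed=0):
    """Hadamard parametrization: minimize f(w * w) over the unit sphere
    in w, which is equivalent to minimizing f(x) over the simplex."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=n)
    w /= np.linalg.norm(w)
    for _ in range(iters):
        x = w * w                          # point on the simplex
        g = 2.0 * w * grad_f(x)            # chain rule: d/dw f(w * w)
        g -= (g @ w) * w                   # tangent-space projection
        w = w - eta * g
        w /= np.linalg.norm(w)             # retract to the sphere
    return w * w

# Example: project p onto the simplex, i.e. f(x) = 0.5 * ||x - p||^2
p = np.array([0.5, 0.3, -0.1, 0.4])
x = simplex_min_via_sphere(lambda z: z - p, n=4)
```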

Subgradient methods near active manifolds: saddle point avoidance, local convergence, and asymptotic normality

D Davis, D Drusvyatskiy… - arXiv preprint arXiv …, 2021 - optimization-online.org
Nonsmooth optimization problems arising in practice, whether in signal processing,
statistical estimation, or modern machine learning, tend to exhibit beneficial smooth …
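
The baseline iteration analyzed in this line of work is the classical subgradient method; a minimal sketch with diminishing steps (none of the paper's structural results about active manifolds are reflected here):

```python
import numpy as np

def subgradient_method(subgrad, x0, iters=5000, a=1.0):
    """Classical subgradient method x_{k+1} = x_k - alpha_k * g_k, with
    g_k in the subdifferential at x_k and steps alpha_k = a / sqrt(k+1)."""
    x = np.asarray(x0, float)
    for k in range(iters):
        x = x - (a / np.sqrt(k + 1)) * subgrad(x)
    return x

# Example: f(x) = ||x||_1; sign(x) is a valid subgradient everywhere
x = subgradient_method(np.sign, np.array([2.0, -1.5, 0.7]))
```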