Convergence to second-order stationarity for constrained non-convex optimization

M Nouiehed, JD Lee, M Razaviyayn - arXiv preprint arXiv:1810.02024, 2018 - arxiv.org
We consider the problem of finding an approximate second-order stationary point of a constrained non-convex optimization problem. We first show that, unlike the gradient descent method for unconstrained optimization, the vanilla projected gradient descent algorithm may converge to a strict saddle point even when there is only a single linear constraint. We then provide a hardness result by showing that checking $(\epsilon_g, \epsilon_H)$-second order stationarity is NP-hard even in the presence of linear constraints. Despite our hardness result, we identify instances of the problem for which checking second order stationarity can be done efficiently. For such instances, we propose a dynamic second order Frank--Wolfe algorithm which converges to $(\epsilon_g, \epsilon_H)$-second order stationary points in $\mathcal{O}(\max\{\epsilon_g^{-2}, \epsilon_H^{-3}\})$ iterations. The proposed algorithm can be used in general constrained non-convex optimization as long as the constrained quadratic sub-problem can be solved efficiently.
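
For orientation: in the unconstrained case, a point $x$ is $(\epsilon_g, \epsilon_H)$-second order stationary when $\|\nabla f(x)\| \le \epsilon_g$ and $\lambda_{\min}(\nabla^2 f(x)) \ge -\epsilon_H$; the constrained definition used in the paper restricts both conditions to feasible directions. The sketch below illustrates the flavor of one iteration of a second-order Frank--Wolfe scheme over a polytope $\{x : Ax \le b\}$: a linear-minimization step while the first-order gap is large, and the constrained quadratic sub-problem otherwise. This is a minimal Python/SciPy illustration, not the authors' implementation: the paper assumes the quadratic sub-problem is solved to global optimality, whereas the local solver used here is only a stand-in, and the fixed damped step size gamma is an arbitrary choice.

    import numpy as np
    from scipy.optimize import linprog, minimize, LinearConstraint

    def second_order_fw_step(x, grad_f, hess_f, A, b, eps_g=1e-4, gamma=0.1):
        # One illustrative iteration over the (assumed bounded) polytope {x : A x <= b}.
        g = grad_f(x)

        # First-order Frank-Wolfe sub-problem: minimize <g, s> over the polytope.
        lp = linprog(c=g, A_ub=A, b_ub=b, bounds=[(None, None)] * x.size)
        s = lp.x
        fw_gap = float(g @ (x - s))  # nonnegative first-order optimality gap

        if fw_gap > eps_g:
            # Enough first-order progress available: take a damped Frank-Wolfe step.
            return x + gamma * (s - x)

        # Second-order sub-problem: minimize g^T d + 0.5 d^T H d  s.t.  A (x + d) <= b.
        # NOTE: this QP is non-convex in general; the paper requires its global
        # solution, while trust-constr only returns a local one (a stand-in here).
        H = hess_f(x)
        cons = LinearConstraint(A, -np.inf, b - A @ x)
        res = minimize(lambda d: g @ d + 0.5 * d @ (H @ d),
                       np.zeros_like(x),
                       jac=lambda d: g + H @ d,
                       method="trust-constr",
                       constraints=[cons])
        return x + res.x

    # Toy usage: f(x) = 0.5 x^T diag(1, -1) x (a strict saddle at the origin)
    # over the box -1 <= x_i <= 1, written as A x <= b.
    A = np.vstack([np.eye(2), -np.eye(2)])
    b = np.ones(4)
    H0 = np.diag([1.0, -1.0])
    x_new = second_order_fw_step(np.array([0.5, 0.2]),
                                 grad_f=lambda x: H0 @ x,
                                 hess_f=lambda x: H0,
                                 A=A, b=b)

At the chosen starting point the first-order gap is large, so the sketch takes a plain Frank--Wolfe step; near a stationary point it would instead fall through to the quadratic sub-problem, which is where the second-order (negative-curvature) information enters.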