In many optimization problems arising from scientific, engineering and artificial intelligence applications, objective and constraint functions are available only as the output of a black …
T Lin, Z Zheng, M Jordan - Advances in Neural Information …, 2022 - proceedings.neurips.cc
Nonsmooth nonconvex optimization problems broadly emerge in machine learning and business decision making, whereas two core challenges impede the development of …
T Pang, HJT Suh, L Yang… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
The empirical success of reinforcement learning (RL) in contact-rich manipulation leaves much to be understood from a model-based perspective, where the key difficulties are often …
We provide the first non-asymptotic analysis for finding stationary points of nonsmooth, nonconvex functions. In particular, we study the class of Hadamard semi-differentiable …
We study the complexity of optimizing nonsmooth nonconvex Lipschitz functions by producing $(\delta,\epsilon)$-Goldstein stationary points. Several recent works have …
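For context on the notion named in the snippet above: a point $x$ is a $(\delta,\epsilon)$-Goldstein stationary point of a Lipschitz function $f$ when the Goldstein $\delta$-subdifferential, the convex hull of Clarke subgradients collected over a $\delta$-ball around $x$, contains an element of norm at most $\epsilon$. This is the standard definition used in this literature, written here with $\partial f$ for the Clarke subdifferential and $B_\delta(x)$ for the closed $\delta$-ball:
$$\partial_\delta f(x) := \mathrm{conv}\Big(\bigcup_{y \in B_\delta(x)} \partial f(y)\Big), \qquad \min_{g \in \partial_\delta f(x)} \|g\| \le \epsilon.$$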
The empirical success of derivative-free methods in reinforcement learning for planning through contact seems at odds with the perceived fragility of classical gradient-based …
Zhang et al. (ICML 2020) introduced a novel modification of Goldstein's classical subgradient method, with an efficiency guarantee of $O(\varepsilon^{-4})$ for minimizing …
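The guarantees quoted in these snippets trace back to Goldstein's conceptual subgradient scheme: if $g_k$ is a minimal-norm element of $\partial_\delta f(x_k)$ and $\|g_k\| > \epsilon$, then a step of length $\delta$ against $g_k$ decreases $f$ by at least $\delta\epsilon$. A sketch of the standard argument follows; the cited works differ mainly in how the minimal-norm element is approximated in an implementable way:
$$x_{k+1} = x_k - \delta\,\frac{g_k}{\|g_k\|}, \qquad f(x_{k+1}) \le f(x_k) - \delta\,\|g_k\| \le f(x_k) - \delta\epsilon,$$
so a $(\delta,\epsilon)$-Goldstein stationary point is reached after at most $\big(f(x_0) - \inf f\big)/(\delta\epsilon)$ iterations of the conceptual method.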
X Guo, D Keivan, G Dullerud… - Advances in Neural …, 2024 - proceedings.neurips.cc
The applications of direct policy search in reinforcement learning and continuous control have received increasing attention. In this work, we present novel theoretical results on the …
L Tian, K Zhou, AMC So - International Conference on …, 2022 - proceedings.mlr.press
We report a practical finite-time algorithmic scheme to compute approximately stationary points for nonconvex nonsmooth Lipschitz functions. In particular, we are interested in two …