Authors
Aryan Mokhtari, Hamed Hassani, Amin Karbasi
Publication date
2020
Journal
Journal of Machine Learning Research
Volume
21
Issue
105
Pages
1-49
Description
This paper considers stochastic optimization problems for a large class of objective functions, including convex and continuous submodular ones. Stochastic proximal gradient methods have been widely used to solve such problems; however, their applicability remains limited when the problem dimension is large and the projection onto a convex set is computationally costly. Instead, stochastic conditional gradient algorithms are proposed as an alternative solution which rely on (i) approximating gradients via a simple averaging technique requiring a single stochastic gradient evaluation per iteration, and (ii) solving a linear program to compute the descent/ascent direction. The gradient averaging technique reduces the noise of the gradient approximations as time progresses, and replacing the projection step of proximal methods with a linear program lowers the computational complexity of each iteration. We show that under convexity and smoothness assumptions, our proposed stochastic conditional gradient method converges to the optimal objective function value at a sublinear rate of O(1/t^{1/3}). Further, for a monotone and continuous DR-submodular function and subject to a general convex body constraint, we prove that our proposed method achieves a ((1 - 1/e)OPT - ε) guarantee (in expectation) with O(1/ε^3) stochastic gradient computations. This guarantee matches the known hardness results and closes the gap between deterministic and stochastic continuous submodular maximization. Additionally, we achieve a ((1/e)OPT - ε) guarantee after operating on O(1/ε^3) stochastic gradients for the case that the objective function is continuous DR-submodular but …
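The two ingredients named in the abstract (a running average of single stochastic gradients, and a linear minimization oracle in place of a projection) fit into a short loop. The sketch below is a minimal illustration, not the paper's exact algorithm: the step-size and averaging schedules (`rho`, `gamma`), the helper names `stoch_grad` and `lmo`, and the toy simplex problem are all assumptions for demonstration.

```python
import numpy as np

def stochastic_conditional_gradient(stoch_grad, lmo, x0, T):
    """Sketch of a stochastic conditional gradient (Frank-Wolfe) loop
    with gradient averaging.

    stoch_grad(x) -- unbiased stochastic gradient at x (one sample per iteration)
    lmo(d)        -- linear minimization oracle: argmin_{v in C} <v, d>
    """
    x = np.asarray(x0, dtype=float)
    d = np.zeros_like(x)                           # running gradient estimate
    for t in range(1, T + 1):
        rho = 1.0 / t ** (2.0 / 3.0)               # averaging weight (assumed schedule)
        d = (1.0 - rho) * d + rho * stoch_grad(x)  # single stochastic gradient per step
        v = lmo(d)                                 # linear program replaces the projection
        gamma = 2.0 / (t + 2.0)                    # Frank-Wolfe step size (illustrative)
        x = x + gamma * (v - x)
    return x

# Toy usage: minimize E[||x - b + noise||^2] over the probability simplex,
# whose LMO simply returns the vertex of the most negative gradient coordinate.
rng = np.random.default_rng(0)
b = np.array([0.1, 0.7, 0.2])
stoch_grad = lambda x: 2.0 * (x - b) + 0.1 * rng.standard_normal(x.shape)
lmo = lambda d: np.eye(d.size)[np.argmin(d)]
x_hat = stochastic_conditional_gradient(stoch_grad, lmo, np.ones(3) / 3, T=2000)
```

Because each iteration only averages one stochastic gradient and solves one linear program over the constraint set, no projection is ever computed, which is the source of the per-iteration savings the abstract describes.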
Total citations
[Per-year citation chart, 2019–2024]