Online portfolio selection: A survey

B Li, SCH Hoi - ACM Computing Surveys (CSUR), 2014 - dl.acm.org
Online portfolio selection is a fundamental problem in computational finance, which has
been extensively studied across several research communities, including finance, statistics …
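
As a concrete instance of the setting this survey covers, the sketch below runs the exponentiated-gradient portfolio update of Helmbold et al. (one family of algorithms such surveys discuss): the learner rebalances a simplex-constrained portfolio after each trading period and wealth compounds multiplicatively. The learning rate and data layout are assumptions for illustration.

    import numpy as np

    def eg_portfolio(price_relatives, eta=0.05):
        # price_relatives: (T, n) array; x_t[i] = price_i(t) / price_i(t-1)
        n = price_relatives.shape[1]
        b = np.full(n, 1.0 / n)              # start from the uniform portfolio
        wealth = 1.0
        for x in price_relatives:
            wealth *= b @ x                  # wealth compounds multiplicatively
            b *= np.exp(eta * x / (b @ x))   # tilt toward recently strong assets
            b /= b.sum()                     # renormalize onto the simplex
        return wealth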

A modern introduction to online learning

F Orabona - arXiv preprint arXiv:1912.13213, 2019 - arxiv.org
In this monograph, I introduce the basic concepts of Online Learning through a modern view
of Online Convex Optimization. Here, online learning refers to the framework of regret …
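
The regret framework referred to here compares the learner's cumulative loss to that of the best fixed decision in hindsight; online (projected) gradient descent is the canonical algorithm and achieves O(sqrt(T)) regret for convex Lipschitz losses. A minimal sketch, with the step-size schedule and L2-ball domain chosen for illustration:

    import numpy as np

    def online_gradient_descent(subgrad, dim, T=100, radius=1.0):
        # subgrad(t, x) returns a subgradient of the round-t loss at x
        x = np.zeros(dim)
        for t in range(1, T + 1):
            g = subgrad(t, x)
            x = x - g / np.sqrt(t)           # step size eta_t = 1/sqrt(t)
            norm = np.linalg.norm(x)
            if norm > radius:                # project back onto the L2 ball
                x *= radius / norm
        return x
    # regret_T = sum_t loss_t(x_t) - min_u sum_t loss_t(u) grows only as O(sqrt(T))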

Train faster, generalize better: Stability of stochastic gradient descent

M Hardt, B Recht, Y Singer - International Conference on Machine Learning, 2016 - proceedings.mlr.press
We show that parametric models trained by a stochastic gradient method (SGM) with few
iterations have vanishing generalization error. We prove our results by arguing that SGM is …
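
The stability argument can be probed empirically: run SGD with the same sample order on two datasets that differ in a single example and measure how far the parameters drift apart. A small self-contained sketch, with a least-squares loss and all constants assumed:

    import numpy as np

    def sgd(X, y, order, lr=0.01):
        w = np.zeros(X.shape[1])
        for i in order:                          # one pass, fixed sample order
            w -= lr * (X[i] @ w - y[i]) * X[i]
        return w

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = X @ rng.normal(size=5)
    X2, y2 = X.copy(), y.copy()
    y2[0] += 1.0                                 # neighboring dataset: one label changed
    order = rng.permutation(200)
    print(np.linalg.norm(sgd(X, y, order) - sgd(X2, y2, order)))
    # a small gap is uniform stability, which in turn bounds generalization error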

Introduction to online convex optimization

E Hazan - Foundations and Trends® in Optimization, 2016 - nowpublishers.com
This monograph portrays optimization as a process. In many practical applications the
environment is so complex that it is infeasible to lay out a comprehensive theoretical model …

CHOMP: Covariant Hamiltonian optimization for motion planning

M Zucker, N Ratliff, AD Dragan… - The International Journal of Robotics Research, 2013 - journals.sagepub.com
In this paper, we present CHOMP (covariant Hamiltonian optimization for motion planning),
a method for trajectory optimization invariant to reparametrization. CHOMP uses functional …
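
The functional-gradient update can be sketched for a discretized trajectory: precondition the Euclidean gradient by the inverse of a smoothness metric A, which is what makes the step covariant (invariant to how the trajectory is parametrized). Endpoint handling, the obstacle term, and the step size are simplified assumptions here:

    import numpy as np

    def chomp_step(xi, obstacle_grad, eta=10.0):
        # xi: (N, d) waypoints; obstacle_grad(xi): (N, d) obstacle-cost gradient
        N = xi.shape[0]
        K = np.eye(N) - np.eye(N, k=-1)   # finite-difference matrix
        A = K.T @ K                       # smoothness metric on trajectories
        g = obstacle_grad(xi) + A @ xi    # gradient of obstacle + smoothness cost
        # preconditioning by A^{-1} spreads the update smoothly along the
        # whole trajectory instead of perturbing individual waypoints
        return xi - np.linalg.solve(A, g) / eta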

A reduction of imitation learning and structured prediction to no-regret online learning

S Ross, G Gordon, D Bagnell - Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011 - proceedings.mlr.press
Sequential prediction problems such as imitation learning, where future observations
depend on previous predictions (actions), violate the common iid assumptions made in …
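
The reduction the paper proposes, DAgger, is easy to state as a loop: roll out the current policy, have the expert label the states actually visited, aggregate the data, and retrain. A pseudocode-level sketch in which expert, train, and env_rollout are placeholder callables:

    def dagger(expert, train, env_rollout, n_iters=10):
        dataset, policies = [], []
        policy = expert                       # first rollout uses the expert itself
        for _ in range(n_iters):
            states = env_rollout(policy)      # states induced by the current policy
            dataset += [(s, expert(s)) for s in states]   # expert relabels them
            policy = train(dataset)           # supervised step on aggregated data
            policies.append(policy)
        return policies    # in practice, return the policy best on validation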

Adaptive subgradient methods for online learning and stochastic optimization

J Duchi, E Hazan, Y Singer - Journal of machine learning research, 2011 - jmlr.org
We present a new family of subgradient methods that dynamically incorporate knowledge of
the geometry of the data observed in earlier iterations to perform more informative gradient …
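
In its diagonal form, the method keeps a running sum of squared gradients per coordinate and scales each step by its inverse square root, so frequently updated coordinates take smaller steps. A minimal sketch (lr, eps, and steps are assumptions):

    import numpy as np

    def adagrad(grad, x0, lr=0.1, eps=1e-8, steps=100):
        x = np.asarray(x0, dtype=float).copy()
        G = np.zeros_like(x)                     # accumulated squared gradients
        for _ in range(steps):
            g = grad(x)
            G += g * g
            x -= lr * g / (np.sqrt(G) + eps)     # per-coordinate step sizes
        return x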

Communication-efficient algorithms for statistical optimization

Y Zhang, MJ Wainwright… - Advances in Neural Information Processing Systems, 2012 - proceedings.neurips.cc
We study two communication-efficient algorithms for distributed statistical optimization on
large-scale data. The first algorithm is an averaging method that distributes the $N$ data …
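
The averaging method can be sketched in a few lines: each machine solves the empirical-risk problem on its own shard of the $N$ samples, and a single round of communication averages the local solutions. The ridge local solver below is an assumption for illustration:

    import numpy as np

    def ridge(X, y, lam=1e-3):
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

    def one_shot_average(shards):
        # shards: list of (X_i, y_i) pairs, one per machine
        return np.mean([ridge(X, y) for X, y in shards], axis=0)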

CHOMP: Gradient optimization techniques for efficient motion planning

N Ratliff, M Zucker, JA Bagnell… - 2009 IEEE International Conference on Robotics and Automation, 2009 - ieeexplore.ieee.org
Existing high-dimensional motion planning algorithms are simultaneously overpowered and
underpowered. In domains sparsely populated by obstacles, the heuristics used by …

Pegasos: Primal estimated sub-gradient solver for SVM

S Shalev-Shwartz, Y Singer, N Srebro - Proceedings of the 24th International Conference on Machine Learning, 2007 - dl.acm.org
We describe and analyze a simple and effective iterative algorithm for solving the
optimization problem cast by Support Vector Machines (SVM). Our method alternates …
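
The method alternates a stochastic subgradient step on the regularized hinge loss with an optional projection, using the step size eta_t = 1/(lambda * t). A minimal one-example-per-iteration sketch (constants assumed):

    import numpy as np

    def pegasos(X, y, lam=0.01, T=1000, seed=0):
        # X: (n, d) features; y: (n,) labels in {-1, +1}
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for t in range(1, T + 1):
            i = rng.integers(n)                  # sample one example
            eta = 1.0 / (lam * t)                # step size from the paper
            violated = y[i] * (X[i] @ w) < 1     # check margin at current w
            w *= 1.0 - eta * lam                 # shrinkage from the regularizer
            if violated:
                w += eta * y[i] * X[i]           # hinge subgradient step
            norm = np.linalg.norm(w)
            cap = 1.0 / np.sqrt(lam)
            if norm > cap:                       # optional projection step
                w *= cap / norm
        return w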