R Gu, C Niu, F Wu, G Chen, C Hu, C Lyu… - ACM Computing Surveys …, 2021 - dl.acm.org
In recent years, mobile devices have developed rapidly, with stronger computational capability and larger storage space. Some of the computation-intensive …
We study optimization algorithms for the finite-sum problems frequently arising in machine learning applications. First, we propose novel variants of stochastic gradient descent with a …
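The finite-sum setting referred to above is the objective f(w) = (1/n) Σᵢ fᵢ(w), and plain stochastic gradient descent steps along the gradient of one randomly chosen summand at a time. A minimal sketch on a scalar least-squares instance (the data and hyperparameters are illustrative, not from the snippet):

```python
import random

def sgd_finite_sum(xs, ys, lr=0.05, epochs=200, seed=0):
    """Plain SGD on the finite-sum objective
    f(w) = (1/n) * sum_i (w * x_i - y_i)**2, with scalar w for simplicity."""
    rng = random.Random(seed)
    w = 0.0
    idx = list(range(len(xs)))
    for _ in range(epochs):
        rng.shuffle(idx)  # visit the summands in random order each epoch
        for i in idx:
            grad_i = 2.0 * (w * xs[i] - ys[i]) * xs[i]  # gradient of one summand
            w -= lr * grad_i
    return w

# Noise-free data from y = 3x, so SGD should recover w close to 3.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]
w = sgd_finite_sum(xs, ys)
```

Variance-reduced variants (the kind of "novel variants" such papers propose) modify the per-summand gradient to shrink its variance, but the sampling loop has this same shape.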
In this paper, we propose a Distributed Accumulated Newton Conjugate gradiEnt (DANCE) method in which sample size is gradually increasing to quickly obtain a solution whose …
CY Hsia, WL Chiang, CJ Lin - Asian Conference on Machine …, 2018 - proceedings.mlr.press
The truncated Newton method is one of the most effective optimization methods for large-scale linear classification. The main computational task at each Newton iteration is to …
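The main per-iteration task in truncated Newton methods for linear classification is solving the Newton system approximately by conjugate gradient, using only Hessian-vector products (so the Hessian is never formed). A minimal sketch for L2-regularized logistic regression, with illustrative data and a fixed unit step in place of the line search used in practice:

```python
import numpy as np

def grad(w, X, y, lam):
    """Gradient of L2-regularized logistic loss (labels y in {0, 1})."""
    s = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (s - y) + lam * w

def hvp(w, v, X, y, lam):
    """Hessian-vector product H v = X^T D X v + lam * v,
    with D = diag(sigma * (1 - sigma)) -- no explicit Hessian needed."""
    s = 1.0 / (1.0 + np.exp(-X @ w))
    d = s * (1.0 - s)
    return X.T @ (d * (X @ v)) + lam * v

def truncated_newton(X, y, lam=1e-2, outer=20, cg_iters=10, tol=1e-8):
    w = np.zeros(X.shape[1])
    for _ in range(outer):
        g = grad(w, X, y, lam)
        if np.linalg.norm(g) < tol:
            break
        # Truncated CG: approximately solve H p = -g with few iterations.
        p = np.zeros_like(w)
        r = -g
        d = r.copy()
        rs = r @ r
        for _ in range(cg_iters):
            Hd = hvp(w, d, X, y, lam)
            alpha = rs / (d @ Hd)
            p += alpha * d
            r -= alpha * Hd
            rs_new = r @ r
            if rs_new < 1e-12:
                break
            d = r + (rs_new / rs) * d
            rs = rs_new
        w += p  # unit step; real solvers add a line search here
    return w

X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w = truncated_newton(X, y)
```

Capping `cg_iters` is what makes the method "truncated": the inner solve stops early, trading Newton-step accuracy for far fewer Hessian-vector products.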
We consider learning problems over training sets in which both the number of training examples and the dimension of the feature vectors are large. To solve these problems we …
Z Chen, L Luo, Z Zhang - Proceedings of the AAAI Conference on …, 2017 - ojs.aaai.org
Recently, there has been an increasing interest in designing distributed convex optimization algorithms under the setting where the data matrix is partitioned on features. Algorithms …
In this paper, we propose a distributed damped Newton method in which sample size is gradually increasing to quickly obtain a solution whose empirical loss is under satisfactory …
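The growing-sample-size idea in the two DANCE/damped-Newton snippets is that the solver first works on a small subsample, then doubles the sample and reuses the previous iterate as a warm start. A minimal single-machine sketch on logistic regression, with a fixed damping factor and illustrative synthetic data (not the papers' actual schedule or step rule):

```python
import numpy as np

def damped_newton_growing(X, y, lam=0.1, n0=4, step=0.5, inner=5):
    """Growing-sample-size damped Newton sketch: solve on a prefix of the
    data, double the prefix, and warm-start from the previous solution."""
    n, d = X.shape
    w = np.zeros(d)
    m = n0
    while True:
        Xs, ys = X[:m], y[:m]
        for _ in range(inner):
            s = 1.0 / (1.0 + np.exp(-Xs @ w))
            g = Xs.T @ (s - ys) / m + lam * w
            D = s * (1.0 - s)
            H = (Xs.T * D) @ Xs / m + lam * np.eye(d)
            w -= step * np.linalg.solve(H, g)  # damped step (step < 1)
        if m == n:
            break
        m = min(2 * m, n)  # double the sample size
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((32, 2))
y = (X @ np.array([1.0, -1.0]) > 0).astype(float)
w = damped_newton_growing(X, y)
```

The payoff is that the expensive Newton solves on small prefixes are cheap, and by the time the full sample is reached the warm start is already close to the full-data solution, so few full-size iterations are needed.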