A simple proof of the convergence of the SMO algorithm for linearly separable problems

J López, JR Dorronsoro - Artificial Neural Networks–ICANN 2009: 19th …, 2009 - Springer
We give a new proof of the convergence of the SMO algorithm for SVM training over linearly
separable problems that partly builds on the one by Mitchell et al. for the convergence of the …

An efficient active set method for SVM training without singular inner problems

C Sentelle, GC Anagnostopoulos… - … Joint Conference on …, 2009 - ieeexplore.ieee.org
Efficiently implemented active set methods have been successfully applied to support vector
machine (SVM) training. These active set methods offer higher precision and incremental …

A convergent hybrid decomposition algorithm model for SVM training

S Lucidi, L Palagi, A Risi… - IEEE Transactions on …, 2009 - ieeexplore.ieee.org
Training of support vector machines (SVMs) requires solving a linearly constrained convex
quadratic problem. In real applications, the number of training data may be very large and …

The multiple pairs SMO: A modified SMO algorithm for the acceleration of the SVM training

RA Hernandez, M Strum, WJ Chau… - … Joint Conference on …, 2009 - ieeexplore.ieee.org
The sequential minimal optimization (SMO) algorithm is known to be one of the most efficient
solutions for the support vector machine training phase. It solves a quadratic programming …

On the equivalence of the SMO and MDM algorithms for SVM training

J López, Á Barbero, JR Dorronsoro - … 15-19, 2008, Proceedings, Part I 19, 2008 - Springer
SVM training is usually discussed under two different algorithmic points of view. The first one
is provided by decomposition methods such as SMO and SVMLight while the second one …

A common framework for the convergence of the GSK, MDM and SMO algorithms

J López, JR Dorronsoro - International Conference on Artificial Neural …, 2010 - Springer
Building upon Gilbert's convergence proof of his algorithm to solve the Minimum Norm
Problem, we establish a framework where a much simplified version of his proof allows us to …

Simple clipping algorithms for reduced convex hull SVM training

J López, Á Barbero, JR Dorronsoro - International Workshop on Hybrid …, 2008 - Springer
It is well known that linear slack penalty SVM training is equivalent to solving the Nearest
Point Problem (NPP) over the so-called μ-Reduced Convex Hulls, that is, convex …

Nesterov acceleration for the SMO algorithm

A Torres-Barrán, JR Dorronsoro - … Barcelona, Spain, September 6-9, 2016 …, 2016 - Springer
We revise Nesterov's Accelerated Gradient (NAG) procedure for the SVM dual
problem and propose a strictly monotone version of NAG that is capable of accelerating the …

Rigorous proof of termination of SMO algorithm for support vector machines

N Takahashi, T Nishi - IEEE Transactions on Neural Networks, 2005 - ieeexplore.ieee.org
The sequential minimal optimization (SMO) algorithm is one of the simplest decomposition
methods for learning support vector machines (SVMs). Keerthi and Gilbert have recently …

A 4-vector MDM algorithm for support vector training

Á Barbero, J López, JR Dorronsoro - International Conference on Artificial …, 2008 - Springer
While SVM training usually tries to solve the dual of the standard SVM minimization problem,
alternative algorithms that solve the Nearest Point Problem (NPP) for the convex hulls of the …