Efficient large scale linear programming support vector machines

Research output: Chapter in Book/Report/Conference proceeding > Conference contribution > peer-review


Abstract

This paper presents a decomposition method for efficiently constructing ℓ1-norm Support Vector Machines (SVMs). The decomposition algorithm introduced in this paper possesses many desirable properties: it is provably convergent, scales well to large datasets, is easy to implement, and can be extended to handle support vector regression and other SVM variants. We demonstrate the efficiency of our algorithm by training on (dense) synthetic datasets of sizes up to 20 million points (in ℝ32). The results show our algorithm to be several orders of magnitude faster than a previously published method for the same task. We also present experimental results on real datasets; our method is not only very fast but also highly competitive with the leading SVM implementations.
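For context, the ℓ1-norm SVM referred to in the abstract is commonly posed as a linear program: minimize ||w||_1 + C·Σξ_i subject to y_i(w·x_i + b) ≥ 1 − ξ_i, ξ ≥ 0, with the ℓ1 penalty linearized via the split w = u − v, u, v ≥ 0. The sketch below solves this standard LP formulation with a generic solver on toy data; it illustrates the problem class only, not the decomposition algorithm the paper contributes, and the dataset and parameter C are made up for the example.

```python
import numpy as np
from scipy.optimize import linprog

# Toy linearly separable 2-D data (hypothetical example, not from the paper).
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -1.0], [-3.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
n, d = X.shape
C = 1.0  # slack penalty (illustrative choice)

# Variable layout: z = [u (d), v (d), b_pos, b_neg, xi (n)], all >= 0,
# with w = u - v and b = b_pos - b_neg, so ||w||_1 <= sum(u) + sum(v).
c = np.concatenate([np.ones(2 * d), np.zeros(2), C * np.ones(n)])

# Margin constraints y_i (w.x_i + b) + xi_i >= 1, written as A_ub @ z <= -1.
A_ub = np.hstack([
    -y[:, None] * X,   # coefficients of u
    y[:, None] * X,    # coefficients of v
    -y[:, None],       # coefficient of b_pos
    y[:, None],        # coefficient of b_neg
    -np.eye(n),        # coefficients of xi
])
b_ub = -np.ones(n)

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (2 * d + 2 + n))
w = res.x[:d] - res.x[d:2 * d]
b = res.x[2 * d] - res.x[2 * d + 1]
print("w =", w, "b =", b)
```

Because the objective is linear in u and v, the optimizer tends to drive many components of w to exactly zero, which is the feature-selection effect that motivates ℓ1-norm SVMs; the paper's contribution is solving such LPs at scale without handing the whole problem to a general-purpose solver as done here.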

Original language: English
Title of host publication: Machine Learning
Subtitle of host publication: ECML 2006 - 17th European Conference on Machine Learning, Proceedings
Publisher: Springer Verlag
Pages: 767-774
Number of pages: 8
ISBN (Print): 354045375X, 9783540453758
DOIs
State: Published - 2006
Externally published: Yes
Event: 17th European Conference on Machine Learning, ECML 2006 - Berlin, Germany
Duration: 18 Sep 2006 - 22 Sep 2006

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 4212 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 17th European Conference on Machine Learning, ECML 2006
Country/Territory: Germany
City: Berlin
Period: 18/09/06 - 22/09/06
