An implementation of training dual-nu support vector machines

dc.contributor.author: Chew, H.
dc.contributor.author: Lim, C.
dc.contributor.author: Bogner, R.
dc.contributor.editor: Qi, L.
dc.contributor.editor: Teo, K.
dc.contributor.editor: Yang, X.
dc.date.issued: 2005
dc.description: The original publication is available at www.springerlink.com
dc.description.abstract: The dual-ν Support Vector Machine (2ν-SVM) is an SVM extension that reduces the complexity of selecting the right value of the error parameter. However, the techniques used for solving the training problem of the original SVM cannot be applied directly to the 2ν-SVM. This chapter describes an iterative decomposition method for training this class of SVM. The training is divided into an initialisation process and an optimisation process, with both processes using similar iterative techniques. Implementation issues are discussed, such as caching, which reduces memory usage and avoids redundant kernel calculations.
dc.description.statementofresponsibility: Hong-Gunn Chew, Cheng-Chew Lim and Robert E. Bogner
dc.identifier.citation: Applied optimization - Optimization and control with applications, 2005 / Qi, L., Teo, K., Yang, X. (ed./s), vol.96, pp.157-182
dc.identifier.doi: 10.1007/0-387-24255-4_7
dc.identifier.isbn: 0387242546
dc.identifier.orcid: Chew, H. [0000-0001-6525-574X]
dc.identifier.orcid: Lim, C. [0000-0002-2463-9760]
dc.identifier.uri: http://hdl.handle.net/2440/30009
dc.language.iso: en
dc.publisher: Springer
dc.publisher.place: New York, USA
dc.relation.ispartofseries: Applied optimization ; 96
dc.source.uri: http://www.springerlink.com/content/l157014rr76j1j5p/
dc.title: An implementation of training dual-nu support vector machines
dc.type: Book chapter
pubs.publication-status: Published