An implementation of training dual-nu support vector machines
dc.contributor.author | Chew, H. | |
dc.contributor.author | Lim, C. | |
dc.contributor.author | Bogner, R. | |
dc.contributor.editor | Qi, L. | |
dc.contributor.editor | Teo, K. | |
dc.contributor.editor | Yang, X. | |
dc.date.issued | 2005 | |
dc.description | The original publication is available at www.springerlink.com | |
dc.description.abstract | The Dual-ν Support Vector Machine (2ν-SVM) is an SVM extension that reduces the complexity of selecting the right value of the error parameter. However, the techniques used to solve the training problem of the original SVM cannot be applied directly to the 2ν-SVM. This chapter describes an iterative decomposition method for training this class of SVM. The training is divided into an initialisation process and an optimisation process, with both processes using similar iterative techniques. Implementation issues are also discussed, such as caching, which reduces memory usage and avoids redundant kernel calculations. | |
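(For illustration only, not the authors' implementation: the abstract mentions caching to avoid redundant kernel calculations in the iterative decomposition training. The sketch below shows one common way such a cache can work, assuming a hypothetical KernelRowCache class, an RBF kernel, and an LRU eviction policy; the chapter's actual caching scheme may differ.)

    # Minimal sketch of kernel-row caching for a decomposition-style SVM trainer.
    # Rows of the kernel matrix are computed on demand and reused across
    # iterations, trading a bounded amount of memory for fewer kernel calls.
    from collections import OrderedDict
    import numpy as np

    class KernelRowCache:
        """LRU cache of kernel-matrix rows K[i, :] for a fixed training set."""

        def __init__(self, X, kernel, max_rows=256):
            self.X = X                      # training samples, shape (n, d)
            self.kernel = kernel            # kernel(x, X) -> one kernel row
            self.max_rows = max_rows        # memory budget, in cached rows
            self._rows = OrderedDict()      # sample index -> cached kernel row

        def row(self, i):
            if i in self._rows:
                self._rows.move_to_end(i)   # mark row i as recently used
                return self._rows[i]
            row = self.kernel(self.X[i], self.X)
            self._rows[i] = row
            if len(self._rows) > self.max_rows:
                self._rows.popitem(last=False)  # evict least recently used row
            return row

    def rbf_kernel(x, X, gamma=0.5):
        # Gaussian RBF kernel row: k(x, x_j) for every training sample x_j.
        return np.exp(-gamma * np.sum((X - x) ** 2, axis=1))

(In a decomposition method only the kernel rows for the current working-set indices are needed in each iteration, so a cache of this kind avoids recomputing rows for indices that re-enter the working set, at the cost of storing at most max_rows rows.)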
dc.description.statementofresponsibility | Hong-Gunn Chew, Cheng-Chew Lim and Robert E. Bogner | |
dc.identifier.citation | Applied optimization - Optimization and control with applications, 2005 / Qi, L., Teo, K., Yang, X. (ed./s), vol.96, pp.157-182 | |
dc.identifier.doi | 10.1007/0-387-24255-4_7 | |
dc.identifier.isbn | 0387242546 | |
dc.identifier.orcid | Chew, H. [0000-0001-6525-574X] | |
dc.identifier.orcid | Lim, C. [0000-0002-2463-9760] | |
dc.identifier.uri | http://hdl.handle.net/2440/30009 | |
dc.language.iso | en | |
dc.publisher | Springer | |
dc.publisher.place | New York, USA | |
dc.relation.ispartofseries | Applied optimization ; 96 | |
dc.source.uri | http://www.springerlink.com/content/l157014rr76j1j5p/ | |
dc.title | An implementation of training dual-nu support vector machines | |
dc.type | Book chapter | |
pubs.publication-status | Published | |