Title: A scalable stagewise approach to large-margin multiclass loss-based boosting
Authors: Sakrapee Paisitkriangkrai, Chunhua Shen, and Anton van den Hengel
Citation: IEEE Transactions on Neural Networks and Learning Systems, 2014; 25(5):1002-1013
Abstract: We present a scalable and effective model for training multiclass boosting on multiclass classification problems. A direct formulation of multiclass boosting, one that directly maximizes the multiclass margin, has been proposed previously; its major drawback is high computational complexity during training, which hampers its application to real-world problems. In this paper, we propose a simple and scalable stagewise multiclass boosting method that also directly maximizes the multiclass margin. Our approach offers two advantages: 1) it is simple and computationally efficient to train, speeding up training by more than two orders of magnitude without sacrificing classification accuracy; and 2) like traditional AdaBoost, it is less sensitive to the choice of parameters and empirically demonstrates excellent generalization performance. Experimental results on challenging multiclass machine learning and vision tasks demonstrate that the proposed approach substantially improves the convergence rate and accuracy of the final visual detector at no additional computational cost compared with existing multiclass boosting methods.
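The record itself carries no code, and the paper's margin-maximizing formulation is not reproduced here; as background, the general stagewise multiclass-boosting scheme the abstract refers to can be sketched with a SAMME-style booster (Zhu et al.'s multiclass AdaBoost) over weighted decision stumps. This is a generic illustration of stagewise training, not the authors' algorithm, and all names below are this sketch's own:

```python
import numpy as np

class WeightedStump:
    """Depth-1 weak learner: split one feature at one threshold and
    predict the weighted-majority class on each side."""
    def fit(self, X, y, w, n_classes):
        best_err = np.inf
        for f in range(X.shape[1]):
            for t in np.unique(X[:, f])[:-1]:       # candidate thresholds
                left = X[:, f] <= t
                # weighted-majority class on each side of the split
                cl = np.argmax(np.bincount(y[left], w[left], n_classes))
                cr = np.argmax(np.bincount(y[~left], w[~left], n_classes))
                pred = np.where(left, cl, cr)
                err = w[pred != y].sum()
                if err < best_err:
                    best_err = err
                    self.f, self.t, self.cl, self.cr = f, t, cl, cr
        return self

    def predict(self, X):
        return np.where(X[:, self.f] <= self.t, self.cl, self.cr)

def samme_fit(X, y, n_classes, n_rounds=20):
    """Stagewise training: each round fits one weak learner to the
    current sample weights, then upweights its mistakes."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(n_rounds):
        stump = WeightedStump().fit(X, y, w, n_classes)
        pred = stump.predict(X)
        err = w[pred != y].sum()
        if err >= 1 - 1.0 / n_classes:              # no better than chance
            break
        alpha = np.log((1 - max(err, 1e-10)) / max(err, 1e-10)) \
                + np.log(n_classes - 1)             # SAMME stage weight
        ensemble.append((alpha, stump))
        w *= np.exp(alpha * (pred != y))            # upweight mistakes
        w /= w.sum()
    return ensemble

def samme_predict(ensemble, X, n_classes):
    """Weighted vote over the stagewise ensemble."""
    scores = np.zeros((len(X), n_classes))
    for alpha, stump in ensemble:
        scores[np.arange(len(X)), stump.predict(X)] += alpha
    return scores.argmax(axis=1)

# Toy usage on three well-separated Gaussian clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(3 * c, 0.3, size=(30, 2)) for c in range(3)])
y = np.repeat(np.arange(3), 30)
ensemble = samme_fit(X, y, n_classes=3)
acc = (samme_predict(ensemble, X, 3) == y).mean()
```

Although each stump can emit only two of the three class labels, the stagewise weight updates force later stumps to cover the classes earlier ones ignored, so the weighted vote separates all three clusters. The paper's contribution, by contrast, is making this stagewise style of training directly maximize the multiclass margin at far lower cost than the earlier direct formulation.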
Keywords: Boosting; column generation; convex optimization; multiclass classification
Rights: © 2013 IEEE
Appears in Collections: Computer Science publications
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.