Title: Boosting Descriptors Condensed from Video Sequences for Place Recognition
Published in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2008: pp. 1-8
Conference: IEEE Conference on Computer Vision and Pattern Recognition (21st : 2008 : Anchorage, AK)
Authors: Tat-Jun Chin, Hanlin Goh and Joo-Hwee Lim
Abstract: We investigate the task of efficiently training classifiers to build a robust place recognition system. We advocate an approach which involves densely capturing the facades of buildings and landmarks with video recordings to greedily accumulate as much visual information as possible. Our contributions include (1) a preprocessing step to effectively exploit the temporal continuity intrinsic in the video sequences to dramatically increase training efficiency, (2) training sparse classifiers discriminatively on the resulting data using the AdaBoost principle for place recognition, and (3) methods to speed up recognition using scaled kd-trees and to perform geometric validation on the results. Compared to straightforwardly applying scene recognition methods, our method not only allows a much faster training phase but also yields more accurate classifiers. The sparsity of the classifiers also ensures good potential for recognition at high frame rates. We show extensive experimental results to validate our claims.
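The abstract combines two standard ingredients: discriminative AdaBoost classifiers over image descriptors, and a kd-tree for fast nearest-descriptor retrieval. The sketch below is purely illustrative and is not the authors' code: it uses synthetic descriptor data, scikit-learn's generic AdaBoost over decision stumps (not the paper's sparse boosting formulation), and an unscaled kd-tree, just to show how the two pieces fit together in a recognition pipeline.

```python
# Illustrative sketch only (assumptions: synthetic data, generic
# scikit-learn AdaBoost, plain kd-tree) -- not the paper's method.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neighbors import KDTree

rng = np.random.default_rng(0)

# Synthetic "descriptors condensed from video" for two places
# (128-D, SIFT-like), 200 training descriptors per place.
X = np.vstack([rng.normal(0.0, 1.0, (200, 128)),
               rng.normal(0.5, 1.0, (200, 128))])
y = np.repeat([0, 1], 200)  # place labels

# Boosted decision stumps: a discriminative classifier trained
# with the AdaBoost principle, as the abstract describes.
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)

# kd-tree over the training descriptors for fast candidate
# retrieval at recognition time.
tree = KDTree(X)
query = rng.normal(0.5, 1.0, (1, 128))     # descriptor from a test frame
dist, idx = tree.query(query, k=5)         # 5 nearest training descriptors
candidate_places = y[idx[0]]               # their place labels
pred = clf.predict(query)[0]               # boosted classifier's decision
```

In a real system the kd-tree would shortlist candidate places per frame and the boosted classifier (followed by geometric validation) would make the final decision; here both steps run on toy data only.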
|Appears in Collections:
Aurora harvest 5
Computer Science publications
Files in This Item:
There are no files associated with this item.