Boosting Descriptors Condensed from Video Sequences for Place Recognition

Date

2008

Authors

Chin, T.
Goh, H.
Lim, J.

Type

Conference paper

Citation

IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2008: pp.1-8

Statement of Responsibility

Tat-Jun Chin, Hanlin Goh and Joo-Hwee Lim

Conference Name

IEEE Conference on Computer Vision and Pattern Recognition (21st : 2008 : Anchorage, AK)

Abstract

We investigate the task of efficiently training classifiers to build a robust place recognition system. We advocate an approach that densely captures the facades of buildings and landmarks with video recordings, greedily accumulating as much visual information as possible. Our contributions include (1) a preprocessing step that exploits the temporal continuity intrinsic to the video sequences to dramatically increase training efficiency, (2) discriminative training of sparse classifiers on the resulting data using the AdaBoost principle for place recognition, and (3) methods to speed up recognition with scaled kd-trees and to perform geometric validation on the results. Compared to straightforwardly applying scene recognition methods, our method not only enables a much faster training phase but also yields more accurate classifiers. The sparsity of the classifiers also ensures good potential for recognition at high frame rates. We present extensive experimental results to validate our claims.
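The AdaBoost principle cited in the abstract can be sketched generically as follows. This is a minimal illustration using axis-aligned decision stumps as weak learners on toy feature vectors; the paper's actual weak learners, condensed video descriptors, and training details are not reproduced here, and all function names and parameters are illustrative assumptions:

```python
import numpy as np

def train_adaboost_stumps(X, y, n_rounds=10):
    """Discrete AdaBoost with decision-stump weak learners (generic sketch).

    X: (n_samples, n_features) feature matrix; y: labels in {-1, +1}.
    Returns a list of weak learners (feature_index, threshold, polarity, alpha).
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)  # uniform sample weights to start
    ensemble = []
    for _ in range(n_rounds):
        best, best_err = None, np.inf
        # Exhaustively pick the stump with lowest weighted error
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                    err = np.sum(w[pred != y])
                    if err < best_err:
                        best_err, best = err, (j, thr, pol)
        j, thr, pol = best
        best_err = max(best_err, 1e-10)  # avoid division by zero
        alpha = 0.5 * np.log((1.0 - best_err) / best_err)
        pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)  # upweight misclassified samples
        w /= w.sum()
        ensemble.append((j, thr, pol, alpha))
    return ensemble

def predict(ensemble, X):
    """Weighted vote of the weak learners."""
    score = np.zeros(len(X))
    for j, thr, pol, alpha in ensemble:
        score += alpha * np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
    return np.sign(score)
```

Sparsity in this setting comes from the fact that each round selects a single feature, so the final classifier depends on at most `n_rounds` features rather than the full descriptor.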
