|Title:||Using Densely Recorded Scenes for Place Recognition|
|Citation:||Proceedings of the 33rd IEEE International Conference on Acoustics, Speech and Signal Processing, Las Vegas, Nevada, USA, 2008, pp. 2101-2104|
|Conference Name:||IEEE International Conference on Acoustics, Speech and Signal Processing (33rd : 2008 : Las Vegas, Nevada)|
|Author:||Tat-Jun Chin, Hanlin Goh and Joo-Hwee Lim|
|Abstract:||We investigate the task of efficiently modeling a scene to build a robust place recognition system. We propose an approach which involves densely capturing a place with video recordings to greedily cover as many viewpoints of the place as possible. Our contribution is a framework to (1) effectively exploit the temporal continuity intrinsic in the video sequences to reduce the amount of data to process without losing the unique visual information which describes a place, and (2) train discriminative classifiers with the reduced data for place recognition. We show that our method is more efficient and effective than straightforwardly applying scene or object category recognition methods on the video frames.|
|Appears in Collections:||Computer Science publications|