Type: Conference paper
Title: Using Densely Recorded Scenes for Place Recognition
Author: Chin, T.J.
Goh, H.
Lim, J.H.
Citation: Proceedings of the 33rd IEEE International Conference on Acoustics, Speech and Signal Processing, Las Vegas, Nevada, USA, 2008: pp. 2101-2104
Publisher: IEEE
Publisher Place: Online
Issue Date: 2008
ISBN: 9781424414833
Conference Name: IEEE International Conference on Acoustics, Speech and Signal Processing (33rd : 2008 : Las Vegas, Nevada)
Statement of Responsibility: Tat-Jun Chin, Hanlin Goh and Joo-Hwee Lim
Abstract: We investigate the task of efficiently modeling a scene to build a robust place recognition system. We propose an approach which involves densely capturing a place with video recordings to greedily cover as many viewpoints of the place as possible. Our contribution is a framework to (1) effectively exploit the temporal continuity intrinsic in the video sequences to reduce the amount of data to process without losing the unique visual information which describes a place, and (2) train discriminative classifiers with the reduced data for place recognition. We show that our method is more efficient and effective than straightforwardly applying scene or object category recognition methods on the video frames.
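Note: The sketch below is only a minimal illustration of the general idea summarised in the abstract (reducing a densely recorded sequence by exploiting temporal continuity, then classifying places from the reduced set); it is not the authors' pipeline. The per-frame descriptors, the Euclidean distance threshold, and the nearest-centroid classifier are placeholder assumptions introduced here for illustration.

import numpy as np

def select_keyframes(frames, dist_threshold=0.3):
    """Exploit temporal continuity to reduce a densely recorded sequence:
    keep a frame only when its descriptor has drifted sufficiently far from
    the last kept keyframe. `frames` is an (n_frames, d) array of per-frame
    descriptors (the descriptor choice here is a placeholder assumption)."""
    kept = [0]
    for i in range(1, len(frames)):
        if np.linalg.norm(frames[i] - frames[kept[-1]]) > dist_threshold:
            kept.append(i)
    return frames[kept]

class NearestCentroidPlaceClassifier:
    """Toy stand-in for the discriminative classification stage:
    one centroid per place, queries assigned to the nearest centroid."""
    def fit(self, keyframes_per_place):
        self.labels = list(keyframes_per_place)
        self.centroids = np.stack(
            [keyframes_per_place[p].mean(axis=0) for p in self.labels])
        return self

    def predict(self, query_descriptor):
        dists = np.linalg.norm(self.centroids - query_descriptor, axis=1)
        return self.labels[int(np.argmin(dists))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for two densely recorded places: consecutive
    # frame descriptors change only slightly (temporal continuity).
    place_a = 1.0 + np.cumsum(rng.normal(0.0, 0.01, (200, 32)), axis=0)
    place_b = 3.0 + np.cumsum(rng.normal(0.0, 0.01, (200, 32)), axis=0)

    reduced = {"A": select_keyframes(place_a), "B": select_keyframes(place_b)}
    clf = NearestCentroidPlaceClassifier().fit(reduced)

    print("keyframes kept:", {k: len(v) for k, v in reduced.items()})
    print("query from place A classified as:", clf.predict(place_a[150]))

In a real system the descriptors would be computed from the video frames themselves, and the classification stage would use trained discriminative classifiers as described in the abstract.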
RMID: 0020093437
DOI: 10.1109/ICASSP.2008.4518056
Appears in Collections: Computer Science publications

Files in This Item:
There are no files associated with this item.
