|Title:||Growing semantically meaningful models for visual SLAM|
|Citation:||Proceedings of the 23rd IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2010 / pp.467-474|
|Series/Report no.:||IEEE Conference on Computer Vision and Pattern Recognition|
|Conference Name:||IEEE Conference on Computer Vision and Pattern Recognition (23rd : 2010 : San Francisco, CA)|
|Author(s):||Alex Flint, Christopher Mei, Ian Reid, and David Murray|
|Abstract:||Though modern Visual Simultaneous Localisation and Mapping (vSLAM) systems are capable of localising robustly and efficiently even in the case of a monocular camera, the maps produced are typically sparse point clouds that are difficult to interpret and of little use for higher-level reasoning tasks such as scene understanding or human-machine interaction. In this paper we begin to address this deficiency, presenting progress on expanding the competency of visual SLAM systems to build richer maps. Specifically, we concentrate on modelling indoor scenes using semantically meaningful surfaces and accompanying labels, such as “floor”, “wall”, and “ceiling” - an important step towards a representation that can support higher-level reasoning and planning. We leverage the Manhattan world assumption and show how to extract vanishing directions jointly across a video stream. We then propose a guided line detector that utilises known vanishing points to extract extremely subtle axis-aligned edges. We utilise recent advances in single-view structure recovery to build geometric scene models and demonstrate our system operating on-line.|
|Appears in Collections:||Computer Science publications|