Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/84202
Full metadata record
DC Field / Value
dc.contributor.author: Flint, A.
dc.contributor.author: Mei, C.
dc.contributor.author: Reid, I.
dc.contributor.author: Murray, D.
dc.date.issued: 2010
dc.identifier.citation: Proceedings of the 23rd IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2010, pp. 467-474
dc.identifier.isbn: 9781424469840
dc.identifier.issn: 1063-6919
dc.identifier.uri: http://hdl.handle.net/2440/84202
dc.description.abstract: Though modern Visual Simultaneous Localisation and Mapping (vSLAM) systems are capable of localising robustly and efficiently even in the case of a monocular camera, the maps produced are typically sparse point-clouds that are difficult to interpret and of little use for higher-level reasoning tasks such as scene understanding or human-machine interaction. In this paper we begin to address this deficiency, presenting progress on expanding the competency of visual SLAM systems to build richer maps. Specifically, we concentrate on modelling indoor scenes using semantically meaningful surfaces and accompanying labels, such as “floor”, “wall”, and “ceiling” - an important step towards a representation that can support higher-level reasoning and planning. We leverage the Manhattan world assumption and show how to extract vanishing directions jointly across a video stream. We then propose a guided line detector that utilises known vanishing points to extract extremely subtle axis-aligned edges. We utilise recent advances in single-view structure recovery to build geometric scene models and demonstrate our system operating on-line.
dc.description.statementofresponsibility: Alex Flint, Christopher Mei, Ian Reid, and David Murray
dc.language.iso: en
dc.publisher: IEEE
dc.relation.ispartofseries: IEEE Conference on Computer Vision and Pattern Recognition
dc.rights: ©2010 IEEE
dc.source.uri: http://dx.doi.org/10.1109/cvpr.2010.5540176
dc.title: Growing semantically meaningful models for visual SLAM
dc.type: Conference paper
dc.contributor.conference: IEEE Conference on Computer Vision and Pattern Recognition (23rd : 2010 : San Francisco, CA)
dc.identifier.doi: 10.1109/CVPR.2010.5540176
dc.publisher.place: USA
pubs.publication-status: Published
dc.identifier.orcid: Reid, I. [0000-0001-7790-6423]
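The guided line detection described in the abstract rests on a simple geometric fact: under the Manhattan world assumption, an edge aligned with one of the three dominant scene directions must point from its pixel towards the corresponding vanishing point in the image. The sketch below illustrates that alignment test only; it is not the authors' implementation, and the function name, threshold, and inputs are hypothetical.

```python
import numpy as np

def assign_vanishing_direction(p, grad, vps, cos_thresh=0.95):
    """Assign an edge pixel to the vanishing point its edge direction
    best aligns with, or return None if no alignment is strong enough.

    p    : (x, y) pixel position
    grad : (gx, gy) image gradient at p (the edge runs perpendicular to it)
    vps  : vanishing points in homogeneous image coordinates (x, y, w)
    """
    # The edge direction is perpendicular to the image gradient.
    edge_dir = np.array([-grad[1], grad[0]], dtype=float)
    edge_dir /= np.linalg.norm(edge_dir)

    best, best_score = None, cos_thresh
    for i, vp in enumerate(vps):
        if abs(vp[2]) > 1e-9:
            # Finite vanishing point: direction from the pixel towards it.
            d = np.asarray(vp[:2], dtype=float) / vp[2] - np.asarray(p, dtype=float)
        else:
            # Vanishing point at infinity: the direction itself.
            d = np.asarray(vp[:2], dtype=float)
        d /= np.linalg.norm(d)
        score = abs(edge_dir @ d)  # |cos| of the angle between the directions
        if score > best_score:
            best, best_score = i, score
    return best
```

In a full guided detector, pixels that pass this test would be grouped into axis-aligned segments per vanishing direction, which is what lets very subtle edges survive that an unguided line detector would discard.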
Appears in Collections:Aurora harvest
Computer Science publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.