Type: Conference paper
Title: Real-time joint semantic segmentation and depth estimation using asymmetric annotations
Author: Nekrasov, V.
Dharmasiri, T.
Spek, A.
Drummond, T.
Shen, C.
Reid, I.
Citation: Proceedings of the 2019 IEEE International Conference on Robotics and Automation (ICRA), 2019 / vol.2019-May, pp.7101-7107
Publisher: IEEE
Issue Date: 2019
Series/Report no.: IEEE International Conference on Robotics and Automation ICRA
ISBN: 153866027X
ISSN: 1050-4729
Conference Name: IEEE International Conference on Robotics and Automation (ICRA) (20 May 2019 - 24 May 2019 : Montreal, Canada)
Statement of Responsibility: Vladimir Nekrasov, Thanuja Dharmasiri, Andrew Spek, Tom Drummond, Chunhua Shen and Ian Reid
Abstract: Deploying deep learning models as sensory information extractors in robotics can be daunting, even on generic GPU cards. Here, we address three of the most prominent hurdles, namely, i) adapting a single model to perform multiple tasks at once (in this work, we consider depth estimation and semantic segmentation, crucial for acquiring geometric and semantic understanding of the scene), while ii) doing so in real time, and iii) using asymmetric datasets with uneven numbers of annotations per modality. To overcome the first two issues, we adapt a recently proposed real-time semantic segmentation network, making changes to further reduce the number of floating-point operations. To approach the third issue, we embrace a simple solution based on hard knowledge distillation, under the assumption of having access to a powerful `teacher' network. We showcase how our system can be easily extended to handle more tasks and more datasets, all at once, performing depth estimation and segmentation both indoors and outdoors with a single model. Quantitatively, we achieve results equivalent to (or better than) current state-of-the-art approaches, with one forward pass costing just 13 ms and 6.5 GFLOPs on 640×480 inputs. This efficiency allows us to directly incorporate the raw predictions of our network into the SemanticFusion framework [1] for dense 3D semantic reconstruction of the scene.
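The hard-knowledge-distillation step for asymmetric annotations described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: `teacher_depth` and `complete_targets` are hypothetical stand-ins showing how a missing modality (here, depth) is filled with a teacher network's pseudo ground-truth so the multi-task student always receives complete targets.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher_depth(image):
    """Hypothetical stand-in for a powerful teacher network's depth output."""
    # Any deterministic per-pixel prediction works for illustration.
    return image.mean(axis=-1, keepdims=True) * 10.0

def complete_targets(image, seg_gt, depth_gt):
    """Return (seg, depth) targets, distilling whichever modality is missing."""
    if depth_gt is None:
        # Asymmetric case: no depth annotation for this image, so the
        # teacher's prediction becomes a hard pseudo-label for the student.
        depth_gt = teacher_depth(image)
    return seg_gt, depth_gt

# A toy 4x4 RGB image with a segmentation mask but no depth annotation.
image = rng.random((4, 4, 3))
seg_gt = rng.integers(0, 3, size=(4, 4))
seg, depth = complete_targets(image, seg_gt, depth_gt=None)

# The student can now be trained on both tasks for this image.
assert depth.shape == (4, 4, 1)
```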
Rights: ©2019 IEEE
RMID: 0030100139
DOI: 10.1109/ICRA.2019.8794220
Appears in Collections:Computer Science publications

