|Title:||Depth based semantic scene completion with position importance aware loss|
|Citation:||IEEE Robotics and Automation Letters, 2020; 5(1):219-226|
|Author:||Jie Li, Yu Liu, Xia Yuan, Chunxia Zhao, Roland Siegwart, Ian Reid, Cesar Cadena|
|Abstract:||Semantic scene completion (SSC) refers to the task of inferring the 3D semantic segmentation of a scene while simultaneously completing the 3D shapes. We propose PALNet, a novel hybrid network for SSC based on a single depth image. PALNet uses a two-stream network to extract both 2D and 3D features from multiple stages, using fine-grained depth information to efficiently capture both the context and the geometric cues of the scene. Current methods for SSC treat all parts of the scene equally, devoting unnecessary attention to the interior of objects. To address this problem, we propose Position Aware Loss (PA-Loss), which accounts for position importance while training the network. Specifically, PA-Loss considers Local Geometric Anisotropy to determine the importance of different positions within the scene. This is beneficial for recovering key details such as the boundaries of objects and the corners of the scene. Comprehensive experiments on two benchmark datasets demonstrate the effectiveness of the proposed method and its superior performance. Code and demo are available; a video demo can be found here: https://youtu.be/j-LAMcMh0yg.|
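The abstract's central idea — weighting each voxel's loss contribution by its Local Geometric Anisotropy (LGA), so that object boundaries and scene corners count more than object interiors — can be sketched as a weighted voxel cross-entropy. This is a minimal illustration, not the paper's implementation: the exact LGA definition and weighting scheme are assumptions, with LGA approximated here as the number of 6-connected neighbours whose label differs from the centre voxel.

```python
import numpy as np

def local_geometric_anisotropy(labels):
    """Per-voxel count of 6-connected neighbours with a different semantic
    label (a simple proxy for LGA; the paper's exact definition may differ).
    `labels` is an integer array of shape (D, H, W)."""
    lga = np.zeros(labels.shape, dtype=np.int32)
    for axis in range(3):
        for shift in (-1, 1):
            rolled = np.roll(labels, shift, axis=axis)
            diff = (rolled != labels).astype(np.int32)
            # np.roll wraps around; zero out the border slice so voxels
            # on the volume edge are not compared with the opposite face
            sl = [slice(None)] * 3
            sl[axis] = 0 if shift == 1 else -1
            diff[tuple(sl)] = 0
            lga += diff
    return lga

def pa_weighted_ce(log_probs, labels, base=1.0, scale=0.5):
    """Cross-entropy where each voxel is weighted by base + scale * LGA,
    so high-anisotropy voxels (boundaries, corners) contribute more.
    `log_probs` has shape (C, D, H, W); `base` and `scale` are
    illustrative hyper-parameters, not values from the paper."""
    weights = base + scale * local_geometric_anisotropy(labels)
    d, h, w = labels.shape
    # gather the log-probability of the true class at every voxel
    nll = -log_probs[labels,
                     np.arange(d)[:, None, None],
                     np.arange(h)[None, :, None],
                     np.arange(w)[None, None, :]]
    return float((weights * nll).sum() / weights.sum())
```

With a uniform label volume the weights are all `base` and the loss reduces to plain cross-entropy, while a voxel fully surrounded by other classes receives the maximum weight — matching the abstract's claim that interiors are de-emphasised relative to boundaries.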
|Keywords:||Semantic scene understanding; deep learning in robotics and automation; RGB-D perception|
|Rights:||© 2019 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission|
|Appears in Collections:||Computer Science publications|