Please use this identifier to cite or link to this item:
Type: Conference paper
Title: Direct semi-dense SLAM for rolling shutter cameras
Author: Kim, J.
Cadena, C.
Reid, I.
Citation: Proceedings of the 2016 IEEE International Conference on Robotics and Automation, 2016 / vol.2016-June, pp.1308-1315
Publisher: IEEE
Issue Date: 2016
Series/Report no.: IEEE International Conference on Robotics and Automation ICRA
ISBN: 9781467380263
ISSN: 1050-4729
Conference Name: 2016 IEEE International Conference on Robotics and Automation (ICRA 2016) (16 May 2016 - 21 May 2016 : Stockholm, Sweden)
Statement of Responsibility: Jae-Hak Kim, Cesar Cadena and Ian Reid
Abstract: In this paper, we present a monocular direct and semi-dense SLAM (Simultaneous Localization And Mapping) system for rolling shutter cameras. In a rolling shutter camera, the pose is different for each row of each image, which yields poor pose and structure estimates when using a state-of-the-art semi-dense direct method designed for global shutter cameras. To address this issue in tracking, we model the smooth, continuous camera trajectory as a B-spline curve of degree k - 1 over poses in the Lie algebra se(3). We solve for the camera pose at each row-time by direct optimisation of photometric error as a function of the spline's control points. Likewise for mapping, we develop a generalised epipolar geometry for the rolling shutter case and solve for point depths using photometric error. Although each of these issues has been tackled previously, to the best of our knowledge ours is the first full solution to monocular, direct (feature-less) SLAM for rolling shutter cameras. We benchmark our method for pose accuracy and map accuracy against the state-of-the-art semi-dense SLAM system LSD-SLAM, demonstrating the improved efficacy of our approach on rolling shutter cameras using synthetic sequences with known ground truth and real sequences.
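The continuous-trajectory model described in the abstract can be illustrated with a cumulative cubic B-spline over SE(3) control poses: the pose at any row-time is the first control pose composed with exponentials of basis-weighted relative twists between consecutive control poses. The sketch below is illustrative only, not the authors' implementation; the function name `cumulative_bspline_pose` and the use of SciPy's generic matrix exp/log are assumptions for the example.

```python
import numpy as np
from scipy.linalg import expm, logm


def se3_log(T):
    """Matrix logarithm of a 4x4 SE(3) transform, returned as a 4x4 twist matrix.

    scipy.linalg.logm may return a tiny imaginary part numerically; keep the real part.
    """
    return np.real(logm(T))


def cumulative_bspline_pose(ctrl, u):
    """Interpolate a pose on a uniform cumulative cubic B-spline.

    ctrl : list of four consecutive 4x4 SE(3) control poses.
    u    : spline parameter in [0, 1) within this control-point segment
           (e.g. derived from the row-time of a rolling shutter image).

    Returns T(u) = T0 * exp(B1(u) W1) * exp(B2(u) W2) * exp(B3(u) W3),
    where Wj = log(T_{j-1}^{-1} T_j) is the relative twist between neighbours
    and Bj are the cumulative basis functions of the uniform cubic B-spline.
    """
    # Cumulative basis functions (the zeroth basis is identically 1).
    b1 = (5.0 + 3.0 * u - 3.0 * u**2 + u**3) / 6.0
    b2 = (1.0 + 3.0 * u + 3.0 * u**2 - 2.0 * u**3) / 6.0
    b3 = u**3 / 6.0

    T = ctrl[0].copy()
    for b, (Ta, Tb) in zip((b1, b2, b3), zip(ctrl[:-1], ctrl[1:])):
        twist = se3_log(np.linalg.inv(Ta) @ Tb)  # relative motion between neighbours
        T = T @ expm(b * twist)                  # blend it in with the cumulative weight
    return T
```

In a direct rolling shutter method of this kind, the photometric error at each pixel would be evaluated at the pose `T(u)` for that pixel's row-time, and the optimisation variables are the control poses themselves rather than a single per-frame pose.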
Keywords: Cameras, splines, mathematics, simultaneous localization and mapping, image segmentation, trajectory, geometry, robot vision systems
Description: We are extremely grateful to the Australian Research Council for funding this research through project DP130104413, the ARC Centre for Robotic Vision CE140100016, and through a Laureate Fellowship FL130100102 to IDR. We thank Ankur Handa for his POV-Ray code [7], and Computer Vision Group in Technische Universität München for their LSD-SLAM code and RGBD-benchmark dataset. We also highly appreciate all discussion with and comments from Tom Drummond, Alireza Khosravian, Yasir Latif, Zygmunt Szpak and others in the Australian Centre for Robotic Vision and Australian Centre for Visual Technologies.
Rights: © 2016 Crown
RMID: 0030051775
DOI: 10.1109/ICRA.2016.7487263
Grant ID:
Appears in Collections:Computer Science publications

Files in This Item:
File: RA_hdl_107954.pdf | Description: Restricted Access | Size: 3.66 MB | Format: Adobe PDF | View/Open

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.