End-to-end Multi-Instance Robotic Reaching from Monocular Vision
Date
2021
Authors
Zhuang, Z.
Yu, X.
Mahony, R.
Type
Conference paper
Citation
IEEE International Conference on Robotics and Automation, 2021, vol.2021-May, pp.12974-12980
Statement of Responsibility
Zheyu Zhuang, Xin Yu, Robert Mahony
Conference Name
IEEE International Conference on Robotics and Automation (ICRA) (30 May 2021 - 5 Jun 2021 : Xi'an, China)
Abstract
Multi-instance scenes are especially challenging for end-to-end visuomotor (image-to-control) learning algorithms. "Pipeline" visual servo control algorithms use separate detection, selection and servo stages, allowing the algorithm to focus on a single object instance during servo control. End-to-end systems have no separate detection and selection stages and must address the visual ambiguities introduced by an arbitrary number of visually identical or similar objects during servo control. However, end-to-end schemes avoid embedding errors from the detection and selection stages in the servo control behaviour, are more robust to dynamically changing scenes, and are algorithmically simpler. In this paper, we present a reactive, real-time, end-to-end visuomotor learning algorithm for multi-instance reaching. The proposed algorithm feeds a monocular RGB image and the manipulator’s joint angles into a lightweight fully-convolutional network (FCN) to generate control candidates. A key innovation of the proposed method is identifying the optimal control candidate by regressing a control-Lyapunov function (cLf) value. The multi-instance capability emerges naturally from the stability analysis associated with the cLf formulation. We demonstrate the proposed algorithm effectively reaching and grasping objects from different categories on a table-top, amid other instances and distractors, using an over-the-shoulder monocular RGB camera. The network runs at up to ∼160 fps during inference on a single GTX 1080 Ti GPU.
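The abstract describes selecting, from the FCN's per-location control candidates, the one with the smallest regressed cLf value. The following is a minimal sketch of that selection step only; the array shapes, function name, and toy data are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the candidate-selection step described above:
# the network regresses a control candidate and a control-Lyapunov
# function (cLf) value at each spatial location of its output map, and
# the executed control is the candidate with the minimum cLf value.
import numpy as np

def select_control(candidates: np.ndarray, clf_values: np.ndarray) -> np.ndarray:
    """candidates: (H, W, D) per-location control vectors (D = DoF).
    clf_values: (H, W) regressed cLf values (lower = closer to goal)."""
    h, w = np.unravel_index(np.argmin(clf_values), clf_values.shape)
    return candidates[h, w]

# Toy example: a 4x4 output map with 6-DoF control candidates.
rng = np.random.default_rng(0)
cands = rng.standard_normal((4, 4, 6))
clf = rng.uniform(1.0, 10.0, (4, 4))
clf[2, 3] = 0.1  # assume this location corresponds to the best instance
u = select_control(cands, clf)
assert np.allclose(u, cands[2, 3])
```

Because the argmin is taken over all spatial locations, the selection degrades gracefully as instances appear or disappear, which matches the paper's claim that multi-instance capability follows from the cLf formulation rather than an explicit detection stage.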
Rights
© 2021 IEEE