AffordanceNet: an end-to-end deep learning approach for object affordance detection
Date
2018
Authors
Do, T.
Nguyen, A.
Reid, I.
Type
Conference paper
Citation
IEEE International Conference on Robotics and Automation, 2018, pp. 5882-5889
Statement of Responsibility
Thanh-Toan Do, Anh Nguyen, Ian Reid
Conference Name
IEEE International Conference on Robotics and Automation (ICRA) (21 May 2018 - 25 May 2018 : Brisbane, Australia)
Abstract
We propose AffordanceNet, a new deep learning approach to simultaneously detect multiple objects and their affordances from RGB images. Our AffordanceNet has two branches: an object detection branch to localize and classify the object, and an affordance detection branch to assign each pixel in the object to its most probable affordance label. The proposed framework employs three key components for effectively handling the multiclass problem in the affordance mask: a sequence of deconvolutional layers, a robust resizing strategy, and a multi-task loss function. Experimental results on public datasets show that our AffordanceNet outperforms recent state-of-the-art methods by a fair margin, while its end-to-end architecture allows inference at 150 ms per image. This makes AffordanceNet well suited for real-time robotic applications. Furthermore, we demonstrate the effectiveness of AffordanceNet in different testing environments and in real robotic applications. The source code is available at https://github.com/nqanh/affordance-net.
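This record contains no implementation detail beyond the abstract; the sketch below is a minimal, hypothetical PyTorch-style illustration of the two-branch idea the abstract describes (an object detection branch producing class scores and box offsets, plus an affordance branch that upsamples RoI features with a sequence of deconvolutional layers and is trained with a multi-task loss). It is not the authors' released code at the GitHub link above; all layer sizes, names, and loss weights are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AffordanceHeadsSketch(nn.Module):
    """Hypothetical sketch of the two branches named in the abstract,
    operating on fixed-size RoI features (not the authors' implementation)."""
    def __init__(self, in_channels=256, num_classes=10, num_affordances=10, roi_size=7):
        super().__init__()
        flat = in_channels * roi_size * roi_size
        # Object detection branch: classify the RoI and regress its box
        # (class-agnostic box regression here, for simplicity).
        self.fc = nn.Sequential(nn.Flatten(), nn.Linear(flat, 1024), nn.ReLU(inplace=True))
        self.cls_score = nn.Linear(1024, num_classes)
        self.bbox_pred = nn.Linear(1024, 4)
        # Affordance branch: a sequence of deconvolutional (transposed conv)
        # layers upsamples the 7x7 RoI feature to a 56x56 map with one
        # channel per affordance label.
        self.mask_head = nn.Sequential(
            nn.ConvTranspose2d(in_channels, 128, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 128, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 128, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, num_affordances, kernel_size=1),
        )

    def forward(self, roi_feats):
        h = self.fc(roi_feats)
        return self.cls_score(h), self.bbox_pred(h), self.mask_head(roi_feats)

def multi_task_loss(cls_logits, box_deltas, mask_logits, cls_targets, box_targets, mask_targets):
    """Illustrative multi-task loss: object classification + box regression
    + per-pixel multi-class affordance loss (equal weights are an assumption)."""
    loss_cls = F.cross_entropy(cls_logits, cls_targets)
    loss_box = F.smooth_l1_loss(box_deltas, box_targets)
    # mask_targets: (N, H, W) tensor of affordance label indices per pixel.
    loss_mask = F.cross_entropy(mask_logits, mask_targets)
    return loss_cls + loss_box + loss_mask

# Example: 8 RoIs of 256x7x7 features -> class scores, box deltas, 56x56 affordance masks.
# heads = AffordanceHeadsSketch()
# cls, box, masks = heads(torch.randn(8, 256, 7, 7))

In this sketch the per-pixel softmax over affordance channels is what makes the mask multiclass, which is the role the abstract assigns to the deconvolutional layers, the resizing strategy, and the multi-task loss taken together.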
Rights
©2018 IEEE