Vision-and-language navigation: interpreting visually-grounded navigation instructions in real environments
Date
2018
Authors
Anderson, P.
Wu, Q.
Teney, D.
Bruce, J.
Johnson, M.
Sünderhauf, N.
Reid, I.D.
Gould, S.
van den Hengel, A.
Type
Conference paper
Citation
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 3674-3683
Statement of Responsibility
Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, Anton van den Hengel
Conference Name
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (18 Jun 2018 - 23 Jun 2018 : Salt Lake City, UT)
Abstract
A robot that can carry out a natural-language instruction has been a dream since before the Jetsons cartoon series imagined a life of leisure mediated by a fleet of attentive robot helpers. It is a dream that remains stubbornly distant. However, recent advances in vision and language methods have made incredible progress in closely related areas. This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process that is similar to Visual Question Answering. Both tasks can be interpreted as visually grounded sequence-to-sequence translation problems, and many of the same methods are applicable. To enable and encourage the application of vision and language methods to the problem of interpreting visually-grounded navigation instructions, we present the Matterport3D Simulator - a large-scale reinforcement learning environment based on real imagery [11]. Using this simulator, which can in future support a range of embodied vision and language tasks, we provide the first benchmark dataset for visually-grounded natural language navigation in real buildings - the Room-to-Room (R2R) dataset.
Rights
© 2018 IEEE