Why Only Text: Empowering Vision-and-Language Navigation with Multi-modal Prompts
Date
2024
Authors
Hong, H.
Wang, S.
Huang, Z.
Wu, Q.
Liu, J.
Editors
Larson, K.
Type
Conference paper
Citation
IJCAI : proceedings of the conference / sponsored by the International Joint Conferences on Artificial Intelligence, 2024 / Larson, K. (ed.), pp. 839-847
Statement of Responsibility
Haodong Hong, Sen Wang, Zi Huang, Qi Wu and Jiajun Liu
Conference Name
33rd International Joint Conference on Artificial Intelligence (IJCAI) (3 Aug 2024 - 9 Aug 2024 : Jeju Island, South Korea)
Abstract
Current Vision-and-Language Navigation (VLN) tasks mainly employ textual instructions to guide agents. However, being inherently abstract, the same textual instruction can be associated with different visual signals, causing severe ambiguity and limiting the transfer of prior knowledge in the vision domain from the user to the agent. To fill this gap, we propose Vision-and-Language Navigation with Multi-modal Prompts (VLN-MP), a novel task augmenting traditional VLN by integrating both natural language and images in instructions. VLN-MP not only maintains backward compatibility by effectively handling text-only prompts but also consistently shows advantages with different quantities and relevance of visual prompts. Possible forms of visual prompts include both exact and similar object images, providing adaptability and versatility in diverse navigation scenarios. To evaluate VLN-MP under a unified framework, we implement a new benchmark that offers: (1) a training-free pipeline to transform textual instructions into multi-modal forms with landmark images; (2) diverse datasets with multi-modal instructions for different downstream tasks; (3) a novel module designed to process various image prompts for seamless integration with state-of-the-art VLN models. Extensive experiments on four VLN benchmarks (R2R, RxR, REVERIE, CVDN) show that incorporating visual prompts significantly boosts navigation performance. While maintaining efficiency with text-only prompts, VLN-MP enables agents to navigate in the pre-explore setting and outperform text-based models, showing its broader applicability. Code is available at https://github.com/honghd16/VLN-MP.
Rights
© 2024 International Joint Conferences on Artificial Intelligence All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.