Why only text: empowering vision-and-language navigation with multi-modal prompts
dc.contributor.author | Hong, H. | |
dc.contributor.author | Wang, S. | |
dc.contributor.author | Huang, Z. | |
dc.contributor.author | Wu, Q. | |
dc.contributor.author | Liu, J. | |
dc.contributor.conference | 33rd International Joint Conference on Artificial Intelligence (IJCAI) (3 Aug 2024 - 9 Aug 2024 : Jeju Island, South Korea) | |
dc.contributor.editor | Larson, K. | |
dc.date.issued | 2024 | |
dc.description.abstract | Current Vision-and-Language Navigation (VLN) tasks mainly employ textual instructions to guide agents. However, being inherently abstract, the same textual instruction can be associated with different visual signals, causing severe ambiguity and limiting the transfer of prior knowledge in the vision domain from the user to the agent. To fill this gap, we propose Vision-and-Language Navigation with Multi-modal Prompts (VLN-MP), a novel task augmenting traditional VLN by integrating both natural language and images in instructions. VLN-MP not only maintains backward compatibility by effectively handling text-only prompts but also consistently shows advantages with different quantities and relevance of visual prompts. Possible forms of visual prompts include both exact and similar object images, providing adaptability and versatility in diverse navigation scenarios. To evaluate VLN-MP under a unified framework, we implement a new benchmark that offers: (1) a training-free pipeline to transform textual instructions into multi-modal forms with landmark images; (2) diverse datasets with multi-modal instructions for different downstream tasks; (3) a novel module designed to process various image prompts for seamless integration with state-of-the-art VLN models. Extensive experiments on four VLN benchmarks (R2R, RxR, REVERIE, CVDN) show that incorporating visual prompts significantly boosts navigation performance. While maintaining efficiency with text-only prompts, VLN-MP enables agents to navigate in the pre-explore setting and outperform text-based models, showing its broader applicability. Code is available at https://github.com/honghd16/VLN-MP. | |
dc.description.statementofresponsibility | Haodong Hong, Sen Wang, Zi Huang, Qi Wu and Jiajun Liu | |
dc.identifier.citation | IJCAI : proceedings of the conference / sponsored by the International Joint Conferences on Artificial Intelligence, 2024 / Larson, K. (ed.), pp.839-847 | |
dc.identifier.doi | 10.24963/ijcai.2024/93 | |
dc.identifier.isbn | 978-1-956792-04-1 | |
dc.identifier.issn | 1045-0823 | |
dc.identifier.orcid | Wu, Q. [0000-0003-3631-256X] | |
dc.identifier.uri | https://hdl.handle.net/2440/145726 | |
dc.language.iso | en | |
dc.publisher | International Joint Conferences on Artificial Intelligence Organisation | |
dc.relation.grant | http://purl.org/au-research/grants/arc/DE200101610 | |
dc.relation.ispartofseries | IJCAI : proceedings of the conference | |
dc.rights | © 2024 International Joint Conferences on Artificial Intelligence All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher. | |
dc.source.uri | https://www.ijcai.org/Proceedings/2024/ | |
dc.title | Why only text: empowering vision-and-language navigation with multi-modal prompts | |
dc.type | Conference paper | |
pubs.publication-status | Published |