Why only text: empowering vision-and-language navigation with multi-modal prompts

dc.contributor.authorHong, H.
dc.contributor.authorWang, S.
dc.contributor.authorHuang, Z.
dc.contributor.authorWu, Q.
dc.contributor.authorLiu, J.
dc.contributor.conference33rd International Joint Conference on Artificial Intelligence (IJCAI) (3 Aug 2024 - 9 Aug 2024 : Jeju Island, South Korea)
dc.contributor.editorLarson, K.
dc.date.issued2024
dc.description.abstractCurrent Vision-and-Language Navigation (VLN) tasks mainly employ textual instructions to guide agents. However, being inherently abstract, the same textual instruction can be associated with different visual signals, causing severe ambiguity and limiting the transfer of prior knowledge in the vision domain from the user to the agent. To fill this gap, we propose Vision-and-Language Navigation with Multi-modal Prompts (VLN-MP), a novel task augmenting traditional VLN by integrating both natural language and images in instructions. VLN-MP not only maintains backward compatibility by effectively handling text-only prompts but also consistently shows advantages with different quantities and relevance of visual prompts. Possible forms of visual prompts include both exact and similar object images, providing adaptability and versatility in diverse navigation scenarios. To evaluate VLN-MP under a unified framework, we implement a new benchmark that offers: (1) a training-free pipeline to transform textual instructions into multi-modal forms with landmark images; (2) diverse datasets with multi-modal instructions for different downstream tasks; (3) a novel module designed to process various image prompts for seamless integration with state-of-the-art VLN models. Extensive experiments on four VLN benchmarks (R2R, RxR, REVERIE, CVDN) show that incorporating visual prompts significantly boosts navigation performance. While maintaining efficiency with text-only prompts, VLN-MP enables agents to navigate in the pre-explore setting and outperform text-based models, showing its broader applicability. Code is available at https://github.com/honghd16/VLN-MP.
dc.description.statementofresponsibilityHaodong Hong, Sen Wang, Zi Huang, Qi Wu and Jiajun Liu
dc.identifier.citationIJCAI : proceedings of the conference / sponsored by the International Joint Conferences on Artificial Intelligence, 2024 / Larson, K. (ed./s), pp.839-847
dc.identifier.doi10.24963/ijcai.2024/93
dc.identifier.isbn978-1-956792-04-1
dc.identifier.issn1045-0823
dc.identifier.orcidWu, Q. [0000-0003-3631-256X]
dc.identifier.urihttps://hdl.handle.net/2440/145726
dc.language.isoen
dc.publisherInternational Joint Conferences on Artificial Intelligence Organisation
dc.relation.granthttp://purl.org/au-research/grants/arc/DE200101610
dc.relation.ispartofseriesIJCAI : proceedings of the conference
dc.rights© 2024 International Joint Conferences on Artificial Intelligence. All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.
dc.source.urihttps://www.ijcai.org/Proceedings/2024/
dc.titleWhy only text: empowering vision-and-language navigation with multi-modal prompts
dc.typeConference paper
pubs.publication-statusPublished
