Please use this identifier to cite or link to this item:
Type: Journal article
Title: FVQA: fact-based Visual Question Answering
Author: Wang, P.
Wu, Q.
Shen, C.
Dick, A.
Van Den Hengel, A.
Citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017; 40(10):2413-2427
Publisher: IEEE
Issue Date: 2017
ISSN: 0162-8828
Statement of Responsibility: Peng Wang, Qi Wu, Chunhua Shen, Anthony Dick, and Anton van den Hengel
Abstract: Visual Question Answering (VQA) has attracted much attention in both the computer vision and natural language processing communities, not least because it offers insight into the relationships between two important sources of information. Current datasets, and the models built upon them, have focused on questions which are answerable by direct analysis of the question and image alone. The set of such questions that require no external information to answer is interesting, but very limited. It excludes questions which require common sense, or basic factual knowledge, to answer, for example. Here we introduce FVQA (Fact-based VQA), a VQA dataset which requires, and supports, much deeper reasoning. FVQA primarily contains questions that require external information to answer. We thus extend a conventional visual question answering dataset, which contains image-question-answer triplets, with additional image-question-answer-supporting-fact tuples. Each supporting fact is represented as a structural triplet, such as <Cat, CapableOf, ClimbingTrees>. We evaluate several baseline models on the FVQA dataset, and describe a novel model which is capable of reasoning about an image on the basis of supporting facts.
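As a hedged illustration of the tuple structure the abstract describes, one image-question-answer-supporting-fact entry might look like the sketch below. The field names, image identifier, and file layout are assumptions for illustration only, not the FVQA dataset's actual schema.

```python
# Hypothetical sketch of an FVQA-style tuple. The dataset's real field
# names and storage format are not given here, so these are assumptions.
fvqa_example = {
    "image": "img_001.jpg",  # assumed image identifier
    "question": "Which animal in this image can climb trees?",
    "answer": "cat",
    # Supporting fact as a structural triplet, per the abstract:
    "supporting_fact": ("Cat", "CapableOf", "ClimbingTrees"),
}

subject, relation, obj = fvqa_example["supporting_fact"]
print(f"{subject} --{relation}--> {obj}")
```

The triplet form (subject, relation, object) mirrors how facts are stored in structured knowledge bases such as ConceptNet, which is where a relation like CapableOf originates.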
Rights: © 2017 IEEE
RMID: 0030076653
DOI: 10.1109/TPAMI.2017.2754246
Grant ID:
Appears in Collections:Australian Institute for Machine Learning publications
Computer Science publications

Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.