Generalised Zero-shot Learning with Multi-modal Embedding Spaces
Date
2020
Authors
Felix, R.
Sasdelli, M.
Harwood, B.
Carneiro, G.
Type
Conference paper
Citation
Proceedings of the Digital Image Computing: Techniques and Applications (DICTA 2020), 2020, pp.1-8
Statement of Responsibility
Rafael Felix, Michele Sasdelli, Ben Harwood, Gustavo Carneiro
Conference Name
Digital Image Computing: Techniques and Applications (DICTA) (29 Nov 2020 - 2 Dec 2020 : virtual online)
Abstract
Generalised zero-shot learning (GZSL) methods aim to classify previously seen and unseen visual classes by leveraging the semantic information of those classes. In the context of GZSL, semantic information is non-visual data, such as a text description of the seen and unseen classes. Previous GZSL methods have explored transformations between the visual and semantic spaces, as well as the learning of a joint latent visual-semantic space. In these methods, even though training has explored a combination of spaces (i.e., visual, semantic, or joint latent space), inference has tended to use just one of them. Hypothesising that inference should explore all three spaces, we propose a new GZSL method based on a multi-modal classification over the visual, semantic, and joint latent spaces. Another issue affecting current GZSL methods is the intrinsic bias toward the classification of seen classes, a problem that is usually mitigated by a domain classifier that modulates seen and unseen classification. Our proposed approach replaces this modulated classification with a computationally simpler multi-domain classification based on averaging the multi-modal calibrated classifiers from the seen and unseen domains. Experiments on GZSL benchmarks show that our proposed approach achieves results competitive with the state-of-the-art.
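The multi-domain averaging described in the abstract can be sketched as follows. This is a hypothetical illustration only, not the authors' implementation: the per-space classifier outputs are random placeholders, and the seen/unseen calibration is a simple stand-in for whatever calibration the paper actually uses.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n_classes = 10  # e.g. 6 seen + 4 unseen classes (illustrative split)
spaces = ("visual", "semantic", "joint_latent")
domains = ("seen", "unseen")

# Placeholder calibrated class-probability outputs: one classifier per
# (domain, space) pair, each producing a distribution over all classes.
probs = {
    (d, s): softmax(rng.normal(size=n_classes))
    for d in domains for s in spaces
}

# Multi-modal, multi-domain classification: average the calibrated
# classifiers across the three spaces and the two domains, then pick
# the class with the highest averaged probability.
p_avg = np.mean([probs[(d, s)] for d in domains for s in spaces], axis=0)
prediction = int(np.argmax(p_avg))
```

Averaging calibrated probability outputs keeps inference cheap: it avoids training a separate gating/domain classifier, at the cost of assuming each per-space classifier is already well calibrated.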
Rights
©2020 IEEE