|Title:||Multi-attention network for one-shot learning|
|Citation:||Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017 / vol.2017-January, pp.6212-6220|
|Series/Report no.:||IEEE Conference on Computer Vision and Pattern Recognition|
|Conference Name:||30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017) (21 Jul 2017 - 26 Jul 2017 : Honolulu, HI)|
|Authors:||Peng Wang, Lingqiao Liu, Chunhua Shen, Zi Huang, Anton van den Hengel, Heng Tao Shen|
|Abstract:||One-shot learning is a challenging problem where the aim is to recognize a class identified by a single training image. Given the practical importance of one-shot learning, it seems surprising that the rich information present in the class tag itself has largely been ignored. Most existing approaches restrict the use of the class tag to finding similar classes and transferring classifiers or metrics learned thereon. We demonstrate here, in contrast, that the class tag can inform one-shot learning as a guide to visual attention on the training image for creating the image representation. This is motivated by the fact that human beings can better interpret a training image if the class tag of the image is understood. Specifically, we design a neural network architecture which takes the semantic embedding of the class tag to generate attention maps and uses those attention maps to create the image features for one-shot learning. Note that unlike other applications, our task requires that the learned attention generator can be generalized to novel classes. We show that this can be realized by representing class tags with distributed word embeddings and learning the attention map generator from an auxiliary training set. Also, we design a multiple-attention scheme to extract richer information from the exemplar image and this leads to substantial performance improvement. Through comprehensive experiments, we show that the proposed approach leads to superior performance over the baseline methods.|
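The abstract describes conditioning visual attention on the class tag: a word embedding of the tag is mapped to several attention maps over the exemplar's convolutional feature map, and the attention-pooled features are concatenated into the image representation. A minimal NumPy sketch of that idea is below; the per-head projection matrices (`proj_list`) are hypothetical stand-ins for the learned attention-map generator, and the feature/embedding sizes are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_attention_features(feat, tag_emb, proj_list):
    """Tag-conditioned multi-attention pooling (illustrative sketch).

    feat:      (C, H, W) conv feature map of the exemplar image.
    tag_emb:   (d,) distributed word embedding of the class tag.
    proj_list: list of (C, d) matrices, one per attention head;
               hypothetical stand-ins for the learned generator.
    Returns a (len(proj_list) * C,) concatenated feature vector.
    """
    C, H, W = feat.shape
    flat = feat.reshape(C, H * W)          # spatial locations as columns
    parts = []
    for P in proj_list:
        query = P @ tag_emb                # (C,) tag-conditioned query
        scores = query @ flat              # (H*W,) relevance per location
        attn = softmax(scores)             # attention map over locations
        parts.append(flat @ attn)          # (C,) attention-pooled feature
    return np.concatenate(parts)

# Toy usage with random weights (3 attention heads, 64-channel features,
# 300-d tag embedding -- all illustrative sizes).
rng = np.random.default_rng(0)
feat = rng.standard_normal((64, 7, 7))
tag = rng.standard_normal(300)
projs = [rng.standard_normal((64, 300)) * 0.01 for _ in range(3)]
f = multi_attention_features(feat, tag, projs)
print(f.shape)  # (192,)
```

Because the tag enters only through the generic word-embedding space, the same generator can produce attention maps for novel classes, which is the generalization requirement the abstract highlights.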
|Rights:||© 2017 IEEE|
|Appears in Collections:||Computer Science publications|