|Title:||Attend in groups: a weakly-supervised deep learning framework for learning from web data|
|Citation:||Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017 / vol.2017-January, pp.2915-2924|
|Series/Report no.:||IEEE Conference on Computer Vision and Pattern Recognition|
|Conference Name:||30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017) (21 Jul 2017 - 26 Jul 2017 : Honolulu, HI)|
|Author:||Bohan Zhuang, Lingqiao Liu, Yao Li, Chunhua Shen, Ian Reid|
|Abstract:||Large-scale datasets have driven the rapid development of deep neural networks for visual recognition. However, annotating a massive dataset is expensive and time-consuming. Web images and their labels are, in comparison, much easier to obtain, but direct training on such automatically harvested images can lead to unsatisfactory performance, because the noisy labels of web images adversely affect the learned recognition models. To address this drawback we propose an end-to-end weakly-supervised deep learning framework which is robust to the label noise in web images. The proposed framework relies on two unified strategies - random grouping and attention - to effectively reduce the negative impact of noisy web image annotations. Specifically, random grouping stacks multiple images into a single training instance and thus increases the labeling accuracy at the instance level. Attention, on the other hand, suppresses the noisy signals from both incorrectly labeled images and less discriminative image regions. By conducting intensive experiments on two challenging datasets, including a newly collected fine-grained dataset with web images of different car models, the superior performance of the proposed method over competitive baselines is clearly demonstrated.|
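To illustrate why random grouping raises instance-level labeling accuracy, the following is a minimal simulation sketch (not the authors' implementation; the function `simulate` and its parameters are hypothetical): if each web label is wrong independently with probability p, a bag of k images sharing one label is correctly labeled whenever at least one member is clean, which happens with probability 1 - p^k.

```python
import random

def simulate(noise_rate, group_size, n_bags, seed=0):
    # Hypothetical illustration: each sample's web label is wrong with
    # probability `noise_rate`. A bag of `group_size` samples sharing one
    # web label counts as correctly labeled at the instance level if at
    # least one member's label is right.
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_bags):
        if any(rng.random() >= noise_rate for _ in range(group_size)):
            correct += 1
    return correct / n_bags

# With 30% label noise, single images are clean ~70% of the time,
# while bags of 4 are clean with probability about 1 - 0.3**4 ≈ 0.992.
```

The attention component then has to do the complementary job inside each bag: down-weighting the members (and image regions) that do not match the bag label.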
|Rights:||© 2017 IEEE|
|Appears in Collections:||Computer Science publications|