Learning discriminative representations for multi-label image recognition
Date
2022
Authors
Hassanin, M.
Radwan, I.
Khan, S.
Tahtali, M.
Type
Journal article
Citation
Journal of Visual Communication and Image Representation, 2022; 83(103448):1-9
Abstract
Multi-label recognition is a fundamental yet challenging task in computer vision. Recently, deep learning models have made great progress toward learning discriminative features from input images. However, conventional approaches cannot model the inter-class discrepancies among features in multi-label images, since they are designed for image-level feature discrimination. In this paper, we propose a unified deep network that learns discriminative features for the multi-label task. Given a multi-label image, the proposed method first disentangles the features corresponding to different classes. It then discriminates between these classes by increasing the inter-class distance while decreasing the intra-class differences in the output space. By regularizing the whole network with the proposed loss, the performance of the well-known ResNet-101 is improved significantly. Extensive experiments on the COCO-2014, VOC2007 and VOC2012 datasets demonstrate that the proposed method outperforms state-of-the-art approaches by a significant margin of 3.5% on the large-scale COCO dataset. Moreover, analysis of the discriminative feature learning approach shows that it can be plugged into various types of multi-label methods as a general module.
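The general idea of the loss described in the abstract, increasing inter-class distance while decreasing intra-class differences, can be illustrated with a minimal sketch. This is an assumption-laden illustration of the generic centre-based discriminative objective, not the paper's actual loss; the function name, the hinge margin, and the use of class centroids are all choices made here for clarity.

```python
import numpy as np

def discriminative_loss(features, labels, margin=1.0):
    """Toy inter-/intra-class discriminative loss (illustrative, not the paper's loss).

    features: (N, D) array of per-class feature vectors
    labels:   (N,) integer class ids
    The intra-class term pulls features toward their class centroid;
    the inter-class term pushes centroids at least `margin` apart.
    """
    classes = np.unique(labels)
    centers = np.stack([features[labels == c].mean(axis=0) for c in classes])

    # Intra-class compactness: mean squared distance to own class centre
    intra = np.mean([
        np.mean(np.sum((features[labels == c] - centers[i]) ** 2, axis=1))
        for i, c in enumerate(classes)
    ])

    # Inter-class separation: hinge penalty when two centres are closer than `margin`
    inter, pairs = 0.0, 0
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            d = np.linalg.norm(centers[i] - centers[j])
            inter += max(0.0, margin - d) ** 2
            pairs += 1
    inter /= max(pairs, 1)

    return intra + inter
```

With well-separated class features the inter-class hinge vanishes and the loss reduces to the small intra-class spread; when features of different classes overlap, the penalty grows, which is the behaviour a discriminative regularizer of this kind is meant to enforce.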
Description
Link to a related website: https://unpaywall.org/10.1016/j.jvcir.2022.103448, Open Access via Unpaywall
Rights
Copyright 2022 Elsevier Inc.