Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/104884
Full metadata record
DC Field | Value
dc.contributor.author | Liu, L.
dc.contributor.author | Shen, C.
dc.contributor.author | van den Hengel, A.
dc.date.issued | 2016
dc.identifier.citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016; 39(11):2305-2313
dc.identifier.issn | 0162-8828
dc.identifier.issn | 2160-9292
dc.identifier.uri | http://hdl.handle.net/2440/104884
dc.description.abstract | Recent studies have shown that a Deep Convolutional Neural Network (DCNN) trained on a large image dataset can be used as a universal image descriptor, and that doing so leads to impressive performance for a variety of image recognition tasks. Most of these studies adopt activations from a single DCNN layer, usually the fully-connected layer, as the image representation. In this paper, we propose a novel way to extract image representations from two consecutive convolutional layers: one layer is used for local feature extraction and the other serves as guidance to pool the extracted features. By taking different viewpoints of convolutional layers, we further develop two schemes to realize this idea. The first directly uses convolutional layers from a DCNN. The second applies the pre-trained CNN to densely sampled image regions and treats the fully-connected activations of each image region as convolutional-layer feature activations; we then train another convolutional layer on top of these to serve as the pooling-guidance convolutional layer. Applying our method to three popular visual classification tasks, we find that the first scheme tends to perform better on applications that need strong discrimination of lower-level visual patterns, while the second excels in cases that require discrimination of category-level patterns. Overall, the proposed method achieves superior performance over existing ways of extracting image representations from a DCNN. In addition, we apply cross-layer pooling to image retrieval and propose schemes to reduce its computational cost. The experimental results suggest that the proposed method also leads to promising results for the image retrieval task.
dc.description.statementofresponsibility | Lingqiao Liu, Chunhua Shen, Anton van den Hengel
dc.language.iso | en
dc.publisher | IEEE
dc.rights | © 2016 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
dc.subject | Convolutional networks; deep learning; pooling; fine-grained object recognition
dc.title | Cross-convolutional-layer pooling for image recognition
dc.type | Journal article
dc.identifier.doi | 10.1109/TPAMI.2016.2637921
dc.relation.grant | http://purl.org/au-research/grants/arc/FT120100969
pubs.publication-status | Published
dc.identifier.orcid | van den Hengel, A. [0000-0003-3027-8364]
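The abstract describes pooling local features from one convolutional layer using the activations of the next layer as channel-wise guidance weights. The following is a minimal numpy sketch of that cross-layer pooling step as we read it from the abstract; the function name and the assumption that the two layers' feature maps are spatially aligned (e.g. after upsampling) are ours, not from the paper.

```python
import numpy as np

def cross_layer_pooling(feat_t, feat_t1):
    """Sketch of cross-convolutional-layer pooling (assumed interpretation).

    feat_t:  (H, W, C1) activations of layer t, treated as local features.
    feat_t1: (H, W, C2) activations of layer t+1, treated as pooling guidance;
             assumed spatially aligned with feat_t.
    Returns a (C1 * C2,) image representation: for each guidance channel,
    the local features are weighted by that channel and sum-pooled, and the
    C2 pooled vectors are concatenated.
    """
    H, W, C1 = feat_t.shape
    C2 = feat_t1.shape[2]
    local = feat_t.reshape(-1, C1)   # (H*W, C1) local feature at each location
    guide = feat_t1.reshape(-1, C2)  # (H*W, C2) per-location pooling weights
    pooled = guide.T @ local         # (C2, C1): weighted sum-pooling per channel
    return pooled.reshape(-1)        # concatenate into one (C1*C2,) vector
```

In practice such a representation is typically L2-normalized before being fed to a linear classifier, but that detail is not specified in the record above.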
Appears in Collections:Aurora harvest 3
Computer Science publications

Files in This Item:
File | Description | Size | Format
RA_hdl_104884.pdf | Restricted Access | 476.72 kB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.