Type: Conference paper
Title: Exploiting temporal information for DCNN-based fine-grained object classification
Author: Ge, Z.
McCool, C.
Sanderson, C.
Wang, P.
Liu, L.
Reid, I.
Corke, P.
Citation: Proceedings of the 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA 2016), 2016 / Liew, A., Lovell, B., Fookes, C., Zhou, J., Gao, Y., Blumenstein, M., Wang, Z. (ed./s), pp.442-447
Publisher: IEEE
Issue Date: 2016
ISBN: 9781509028962
Conference Name: International Conference on Digital Image Computing: Techniques and Applications (DICTA 2016) (30 Nov 2016 - 2 Dec 2016 : Gold Coast, AUSTRALIA)
Editor: Liew, A.
Lovell, B.
Fookes, C.
Zhou, J.
Gao, Y.
Blumenstein, M.
Wang, Z.
Statement of Responsibility: Zong Yuan Ge, Chris McCool, Conrad Sanderson, Peng Wang, Lingqiao Liu, Ian Reid, Peter Corke
Abstract: Fine-grained classification is a relatively new field that has concentrated on using information from a single image, while ignoring the enormous potential of using video data to improve classification. In this work we present the novel task of video-based fine-grained object classification, propose a corresponding new video dataset, and perform a systematic study of several recent deep convolutional neural network (DCNN) based approaches, which we specifically adapt to the task. We evaluate three-dimensional DCNNs, two-stream DCNNs, and bilinear DCNNs. Two forms of the two-stream approach are used, where spatial and temporal data from two independent DCNNs are fused either via early fusion (combination of the fully-connected layers) or via late fusion (concatenation of the softmax outputs of the DCNNs). For bilinear DCNNs, information from the convolutional layers of the spatial and temporal DCNNs is combined via local co-occurrences. We then fuse the bilinear DCNN and early fusion of the two-stream approach to combine the spatial and temporal information at the local and global level (Spatio-Temporal Co-occurrence). Using the new and challenging video dataset of birds, classification performance is improved from 23.1% (using single images) to 41.1% when using the Spatio-Temporal Co-occurrence system. Incorporating automatically detected bounding box location further improves the classification accuracy to 53.6%.
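The three combination schemes named in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration only: the feature dimensions, weight matrices, and variable names below are hypothetical stand-ins, not the DCNN architectures used in the paper. Early fusion concatenates the two streams' fully-connected features before a single classifier; late fusion classifies each stream separately and concatenates the softmax outputs; bilinear pooling accumulates local co-occurrences (outer products) of the two streams' convolutional features over spatial locations.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

n_classes = 5   # hypothetical number of bird classes
fc_dim = 8      # hypothetical fully-connected feature size per stream

# Hypothetical fully-connected features from the two independent DCNN streams.
fc_spatial = rng.standard_normal(fc_dim)
fc_temporal = rng.standard_normal(fc_dim)

# Early fusion: combine the fully-connected layers, then classify once.
W_early = rng.standard_normal((n_classes, 2 * fc_dim))  # hypothetical classifier weights
early_scores = softmax(W_early @ np.concatenate([fc_spatial, fc_temporal]))

# Late fusion: classify each stream separately, then concatenate the softmax outputs.
W_s = rng.standard_normal((n_classes, fc_dim))
W_t = rng.standard_normal((n_classes, fc_dim))
late_scores = np.concatenate([softmax(W_s @ fc_spatial), softmax(W_t @ fc_temporal)])

# Bilinear pooling: local co-occurrences between the conv-layer features of the
# spatial and temporal streams, summed (outer products) over L spatial locations.
L, d_s, d_t = 4, 3, 3  # hypothetical: locations and per-stream channel counts
conv_s = rng.standard_normal((L, d_s))
conv_t = rng.standard_normal((L, d_t))
bilinear = np.einsum('ld,le->de', conv_s, conv_t)  # (d_s, d_t) co-occurrence matrix

print(early_scores.shape, late_scores.shape, bilinear.shape)
```

The early-fusion score vector is a single probability distribution over classes, while the late-fusion vector stacks two per-stream distributions for a downstream combiner; the bilinear matrix is what the paper's Spatio-Temporal Co-occurrence system fuses with the early-fusion features.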
Rights: ©2016 IEEE
DOI: 10.1109/DICTA.2016.7797039
Appears in Collections:Aurora harvest 8
Computer Science publications

Files in This Item:
There are no files associated with this item.
