Type: Journal article
Title: Pushing the limits of deep CNNs for pedestrian detection
Author: Hu, Q.
Wang, P.
Shen, C.
Van Den Hengel, A.
Porikli, F.
Citation: IEEE Transactions on Circuits and Systems for Video Technology, 2018; 28(6):1358-1368
Issue Date: 2018
ISSN: 1051-8215
Statement of Responsibility: Qichang Hu, Peng Wang, Chunhua Shen, Anton van den Hengel, and Fatih Porikli
Abstract: Compared with other applications in computer vision, convolutional neural networks (CNNs) have underperformed on pedestrian detection. A breakthrough was made very recently using sophisticated deep CNN (DCNN) models, with a number of handcrafted features or an explicit occlusion-handling mechanism. In this paper, we show that by reusing the convolutional feature maps of a DCNN model as image features to train an ensemble of boosted decision models, we are able to achieve the best reported accuracy without using specially designed learning algorithms. We empirically identify and disclose important implementation details. We also show that pixel labeling may be simply combined with a detector to boost the detection performance. By adding complementary handcrafted features such as optical flow, the DCNN-based detector can be further improved. We advance the state-of-the-art results by lowering the log-average miss rate from 11.7% to 8.9% on the Caltech data set and from 11.2% to 8.6% on the INRIA data set. We also achieve a comparable result to state-of-the-art approaches on the KITTI data set.
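The core idea in the abstract, reusing convolutional feature maps (CFMs) as features for an ensemble of boosted decision models, can be sketched as follows. This is an illustrative toy, not the authors' pipeline: the CNN is mocked with random feature maps, the helper `pool_window_features` is a hypothetical name, and scikit-learn's gradient boosting stands in for the paper's boosted-forest learner.

```python
# Sketch: pool a candidate detection window of a (mock) CNN feature map
# into a fixed-length vector, then train boosted decision trees on it.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def pool_window_features(fmap, window, grid=4):
    """Average-pool a window of a (C, H, W) feature map into a
    grid x grid x C fixed-length feature vector."""
    y0, x0, y1, x1 = window
    patch = fmap[:, y0:y1, x0:x1]
    ys = np.linspace(0, patch.shape[1], grid + 1, dtype=int)
    xs = np.linspace(0, patch.shape[2], grid + 1, dtype=int)
    cells = [patch[:, ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean(axis=(1, 2))
             for i in range(grid) for j in range(grid)]
    return np.concatenate(cells)

# Mock CFMs: "pedestrian" windows (label 1) carry a small channel-wise
# bias so the boosted ensemble has a signal to learn from.
C, H, W = 8, 32, 32
X, y = [], []
for label in (0, 1):
    for _ in range(100):
        fmap = rng.normal(size=(C, H, W)) + 0.5 * label
        X.append(pool_window_features(fmap, (4, 4, 28, 28)))
        y.append(label)
X, y = np.asarray(X), np.asarray(y)

# Shuffle, then train boosted decision trees on the pooled CFM features.
perm = rng.permutation(len(y))
X, y = X[perm], y[perm]
clf = GradientBoostingClassifier(n_estimators=50, max_depth=2, random_state=0)
clf.fit(X[:150], y[:150])
acc = clf.score(X[150:], y[150:])
```

In the paper, the pooled features would come from intermediate layers of a trained detection DCNN rather than random arrays; the boosting stage then replaces the network's final classifier.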
Keywords: Convolutional feature map (CFM); ensemble model; pedestrian detection
Rights: © 2017 IEEE
RMID: 0030065255
DOI: 10.1109/TCSVT.2017.2648850
Appears in Collections: Australian Institute for Machine Learning publications
Computer Science publications

Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.