Title: Crowd counting via weighted VLAD on a dense attribute feature map
Citation: IEEE Transactions on Circuits and Systems for Video Technology, 2018; 28(8):1788-1797
Authors: Biyun Sheng, Chunhua Shen, Guosheng Lin, Jun Li, Wankou Yang, Changyin Sun
Abstract: Crowd counting is an important task in computer vision with many applications in video surveillance. Although the regression-based framework has achieved great improvements for crowd counting, how to improve the discriminative power of image representation is still an open problem. Conventional holistic features used in crowd counting often fail to capture semantic attributes and spatial cues of the image. In this paper, we propose integrating semantic information into learning locality-aware feature (LAF) sets for accurate crowd counting. First, with the help of a convolutional neural network, the original pixel space is mapped onto a dense attribute feature map, where each dimension of the pixelwise feature indicates the probabilistic strength of a certain semantic class. Then, LAF, built on the idea of spatial pyramids over neighboring patches, is proposed to exploit more spatial context and local information. Finally, the traditional vector of locally aggregated descriptors (VLAD) encoding method is extended to a more generalized form, weighted VLAD (W-VLAD), in which diverse coefficient weights are taken into consideration. Experimental results validate the effectiveness of the presented method.
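The weighted-VLAD step described in the abstract can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the function name `weighted_vlad`, the hard nearest-center assignment, the per-descriptor weight vector, and the power/L2 normalization are all assumptions based on the standard VLAD pipeline; setting all weights to 1 recovers conventional VLAD.

```python
import numpy as np

def weighted_vlad(descriptors, centers, weights):
    """Hypothetical weighted-VLAD encoder (illustrative only).

    descriptors: (N, D) local feature vectors (e.g. LAF descriptors)
    centers:     (K, D) codebook of cluster centers
    weights:     (N,)   per-descriptor coefficient weights
    Returns a single (K*D,) image-level encoding.
    """
    K, D = centers.shape
    # Hard-assign each descriptor to its nearest center.
    dists = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    assign = np.argmin(dists, axis=1)
    # Accumulate weighted residuals per center.
    v = np.zeros((K, D))
    for i, k in enumerate(assign):
        v[k] += weights[i] * (descriptors[i] - centers[k])
    # Standard VLAD post-processing: power then L2 normalization.
    v = np.sign(v) * np.sqrt(np.abs(v))
    v = v.ravel()
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```

With uniform weights this reduces to the classic VLAD aggregation of residuals; non-uniform weights let more informative descriptors contribute more to the encoding, which is the generalization the paper names W-VLAD.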
Keywords: Semantics; feature extraction; image representation; encoding; roads; neural networks; image segmentation
Rights: © 2016 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
Appears in Collections: Aurora harvest 8; Electrical and Electronic Engineering publications
Files in This Item: There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.