Order-aware convolutional pooling for video based action recognition

Date

2019

Authors

Wang, P.
Liu, L.
Shen, C.
Shen, H.T.

Type:

Journal article

Citation

Pattern Recognition, 2019; 91:357-365

Statement of Responsibility

Peng Wang, Lingqiao Liu, Chunhua Shen, Heng Tao Shen

Abstract

Most video-based action recognition approaches create the video-level representation by temporally pooling the features extracted at every frame. However, the pooling methods they adopt usually ignore, completely or partially, the dynamic information contained in the temporal domain, which may undermine the discriminative power of the resulting video representation, since the order of a video sequence can reveal the evolution of a specific event or action. To overcome this drawback and explore the importance of incorporating temporal order information, in this paper we propose a novel temporal pooling approach to aggregate the frame-level features. Inspired by the capacity of Convolutional Neural Networks (CNNs) to exploit the internal structure of images for information abstraction, we propose to apply the temporal convolution operation to the frame-level representations to extract the dynamic information. However, directly implementing this idea on the original high-dimensional features would result in parameter explosion. To handle this issue, we propose to treat the temporal evolution of the feature value at each feature dimension as a 1D signal and to learn a unique convolutional filter bank for each 1D signal. Through experiments on three challenging video-based action recognition datasets, HMDB51, UCF101, and Hollywood2, we demonstrate that the proposed method is superior to conventional pooling methods.
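The core idea in the abstract can be sketched as follows: rather than convolving one dense filter over the full frames-by-dimensions representation (which causes the parameter explosion the authors mention), each feature dimension's temporal evolution is treated as an independent 1D signal with its own small filter bank. This is a minimal illustrative sketch, not the authors' implementation; the function name `conv_pool`, the random filter banks, and the max-over-time aggregation are assumptions for demonstration.

```python
import numpy as np

def conv_pool(frame_feats, filter_banks):
    """Order-aware pooling sketch (hypothetical helper, not the paper's code).

    frame_feats  : (T, D) array - D-dimensional feature per frame, T frames.
    filter_banks : (D, K, W) array - K temporal filters of width W, one bank
                   per feature dimension, so parameters grow as D*K*W rather
                   than with the full T*D input size.
    Returns a (D, K) video-level representation.
    """
    T, D = frame_feats.shape
    _, K, W = filter_banks.shape
    out = np.zeros((D, K))
    for d in range(D):
        signal = frame_feats[:, d]  # 1D temporal signal for dimension d
        for k in range(K):
            # Valid convolution along time gives T - W + 1 responses;
            # max over time is one assumed way to aggregate them.
            resp = np.convolve(signal, filter_banks[d, k], mode="valid")
            out[d, k] = resp.max()
    return out

rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 8))   # 16 frames, 8-dim frame features
banks = rng.standard_normal((8, 3, 5)) # per-dimension banks: 3 filters, width 5
video_repr = conv_pool(feats, banks)
print(video_repr.shape)                # (8, 3)
```

Note the parameter count here is D·K·W = 8·3·5 = 120, independent of how the frame count T grows, which is the point of the per-dimension factorization.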

Rights

© 2019 Elsevier Ltd. All rights reserved.
