Please use this identifier to cite or link to this item: http://hdl.handle.net/2440/58160
Type: Conference paper
Title: Human action recognition using pyramid vocabulary tree
Author: Yuan, C.
Li, X.
Hu, W.
Wang, H.
Citation: Proceedings of the Ninth Asian Conference on Computer Vision (ACCV 2009), 23-27 September 2009 / Hongbin Zha, Rin-ichiro Taniguchi, S. Maybank (eds.); Part III, 11 p.
Publisher: Springer
Publisher Place: China
Issue Date: 2009
ISBN: 9783642122965
Conference Name: Asian Conference on Computer Vision (9th : 2009 : Xi'an, China)
Statement of Responsibility: Chunfeng Yuan, Xi Li, Weiming Hu and Hanzi Wang
Abstract: Bag-of-visual-words (BOVW) approaches are widely used in human action recognition. A large BOVW vocabulary is usually more discriminative for inter-class classification, while a small one is more robust to noise and thus more tolerant of intra-class variance. In this paper, we propose a pyramid vocabulary tree to model local spatio-temporal features, which characterizes inter-class differences while also allowing for intra-class variance. Moreover, since the BOVW representation is geometrically unconstrained, we further exploit the spatio-temporal information of local features and propose a sparse spatio-temporal pyramid matching kernel (termed SST-PMK) to compute similarity measures between video sequences. SST-PMK satisfies Mercer's condition and is therefore readily integrated into an SVM to perform action recognition. Experimental results on the Weizmann dataset show that both the pyramid vocabulary tree and SST-PMK lead to a significant improvement in human action recognition.
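To make the abstract's pipeline concrete, the following is a minimal, hedged sketch of a bag-of-visual-words representation combined with a histogram-intersection kernel (the Mercer-kernel building block underlying pyramid matching kernels). The vocabulary, feature dimensions, and sizes here are illustrative assumptions, not the authors' actual implementation or the SST-PMK itself.

```python
import numpy as np

def quantize(features, vocabulary):
    """Assign each local feature to its nearest visual word (Euclidean)."""
    d = np.linalg.norm(features[:, None, :] - vocabulary[None, :, :], axis=2)
    return d.argmin(axis=1)

def bovw_histogram(features, vocabulary):
    """L1-normalized visual-word histogram for one video sequence."""
    words = quantize(features, vocabulary)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / max(hist.sum(), 1.0)

def intersection_kernel(h1, h2):
    """Histogram intersection; a Mercer kernel, so usable inside an SVM."""
    return np.minimum(h1, h2).sum()

# Toy data standing in for local spatio-temporal descriptors (assumed sizes).
rng = np.random.default_rng(0)
vocab = rng.random((8, 3))    # 8 visual words over 3-D descriptors
seq_a = rng.random((50, 3))   # local features from one video sequence
seq_b = rng.random((40, 3))   # local features from another sequence
ha = bovw_histogram(seq_a, vocab)
hb = bovw_histogram(seq_b, vocab)
print(intersection_kernel(ha, ha))  # self-similarity of a normalized histogram = 1.0
```

A full pyramid matching kernel would repeat this comparison over histograms built at several vocabulary (or spatio-temporal grid) resolutions and combine the levels with weights, which is the refinement SST-PMK adds on top of plain BOVW.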
Keywords: Action recognition, Bag-of-visual-words (BOVW), Pyramid matching kernel (PMK)
Rights: © 2009 The Pennsylvania State University
RMID: 0020095625
Published version: http://www.cs.jhu.edu/~hwang/papers/Action_ACCV09.pdf
Appears in Collections:Computer Science publications
