Title: Hierarchical latent concept discovery for video event detection
Authors: Chao Li, Zi Huang, Yang Yang, Jiewei Cao, Xiaoshuai Sun, Heng Tao Shen
Citation: IEEE Transactions on Image Processing, 2017; 26(5):2149-2162
Publisher: Institute of Electrical and Electronics Engineers
Abstract: Semantic information is important for video event detection, yet automatically discovering, modeling, and utilizing it remains a challenging problem. In this paper, we propose a novel hierarchical video event detection model that deliberately unifies the processes of underlying semantics discovery and event modeling from video data. Specifically, unlike most approaches based on manually pre-defined concepts, we devise an effective model that automatically uncovers video semantics by hierarchically capturing latent static-visual concepts at the frame level and latent activity concepts (i.e., temporal sequence relationships of static-visual concepts) at the segment level. The unified model not only enables a discriminative and descriptive representation for videos, but also alleviates the error propagation from video representation to event modeling that affects previous methods. A max-margin framework is employed to learn the model. Extensive experiments on four challenging video event datasets, i.e., MED11, CCV, UQE50, and FCVID, demonstrate the effectiveness of the proposed method.
Keywords: Event detection; latent concepts; semantic information
Rights: © 2017 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
Appears in Collections: Aurora harvest 4; Computer Science publications
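
For intuition, the abstract's hierarchy (latent static-visual concepts per frame, latent activity concepts per segment, a max-margin event score per video) can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: all names, dimensions, the transition-count summary of concept sequences, and the simplified hinge objective are assumptions made for illustration.

```python
# Sketch of hierarchical latent-concept scoring for video event detection.
# All parameters, dimensions, and the scoring rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

D = 128         # frame descriptor dimension (assumed)
K_STATIC = 10   # number of latent static-visual concepts (assumed)
K_ACTIVITY = 5  # number of latent activity concepts (assumed)

# Model parameters: frame-level concept classifiers, activity-level weights
# over static-concept transition patterns, and a video-level event classifier.
W_static = rng.normal(size=(K_STATIC, D))
W_activity = rng.normal(size=(K_ACTIVITY, K_STATIC * K_STATIC))
w_event = rng.normal(size=K_ACTIVITY)
b_event = 0.0

def score_video(segments):
    """Score a video given as a list of segments, each an (n_frames, D) array.

    Frame level:   each frame takes its best-scoring latent static concept.
    Segment level: the concept sequence is summarized by transition counts
                   and assigned its best-scoring latent activity concept.
    Video level:   the event score aggregates activity-concept occurrences.
    """
    activity_hist = np.zeros(K_ACTIVITY)
    for frames in segments:
        # Latent static-visual concept per frame (argmax inference).
        concepts = np.argmax(frames @ W_static.T, axis=1)
        # Temporal relationships of static concepts as transition counts.
        trans = np.zeros((K_STATIC, K_STATIC))
        for a, b in zip(concepts[:-1], concepts[1:]):
            trans[a, b] += 1
        # Latent activity concept for the segment (argmax inference).
        k = np.argmax(W_activity @ trans.ravel())
        activity_hist[k] += 1
    return w_event @ activity_hist + b_event

def hinge_loss(videos, labels):
    """Max-margin (hinge) objective over labeled videos; optimization of the
    latent assignments and weights is omitted in this sketch."""
    return sum(max(0.0, 1.0 - y * score_video(v)) for v, y in zip(videos, labels))

# Toy usage: two videos (one positive, one negative), two segments each.
videos = [[rng.normal(size=(20, D)) for _ in range(2)] for _ in range(2)]
print(hinge_loss(videos, labels=[+1, -1]))
```

Because the latent concept assignments are inferred jointly with the event score, representation and event modeling are coupled in a single objective, which is the property the abstract credits with reducing error propagation between the two stages.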