Type: Conference paper
Title: Leveraging weak semantic relevance for complex video event classification
Author: Li, C.
Cao, J.
Huang, Z.
Zhu, L.
Shen, H.T.
Citation: Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 3667-3676
Publisher: IEEE
Issue Date: 2017
Series/Report no.: IEEE International Conference on Computer Vision
ISBN: 9781538610329
ISSN: 1550-5499
Conference Name: IEEE International Conference on Computer Vision (ICCV), 22-29 Oct 2017, Venice, Italy
Statement of Responsibility: Chao Li, Jiewei Cao, Zi Huang, Lei Zhu, Heng Tao Shen
Abstract: Existing video event classification approaches suffer from limited human-labeled semantic annotations. Weak semantic annotations can be harvested from Web knowledge without any human interaction. However, such weak annotations are noisy and thus cannot be effectively utilized without distinguishing their reliability. In this paper, we propose a novel approach that automatically maximizes the utility of weak semantic annotations (formalized as the semantic relevance of video shots to the target event) to facilitate video event classification. A novel attention model is designed to determine the attention scores of video shots, with the weak semantic relevance serving as attentional guidance. Specifically, our model jointly optimizes two objectives at different levels: the first is the classification loss corresponding to video-level ground-truth labels, and the second is the shot-level relevance loss corresponding to the weak semantic relevance. We use a long short-term memory (LSTM) layer to capture the temporal information carried by the shots of a video. At each timestep, the LSTM employs the attention model to weight the current shot under the guidance of its weak semantic relevance to the event of interest. Thus, we can automatically exploit weak semantic relevance to assist video event classification. Extensive experiments have been conducted on three complex large-scale video event datasets, i.e., MEDTest14, ActivityNet and FCVID. Our approach achieves state-of-the-art classification performance on all three datasets. The significant improvement over the conventional attention model also demonstrates the effectiveness of our approach.
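The two-level objective described in the abstract (a video-level classification loss plus a shot-level relevance loss that ties attention scores to weak semantic relevance) can be sketched as follows. This is a minimal NumPy illustration under assumed forms for both terms, not the authors' implementation; the function names, the squared-error relevance term, and the balancing weight `lam` are all illustrative assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def joint_loss(shot_scores, weak_relevance, video_logits, label, lam=0.5):
    """Sketch of the joint objective: video-level cross-entropy on the
    ground-truth label, plus a shot-level term encouraging the attention
    distribution to follow the weak semantic relevance harvested from
    the Web. `lam` is an assumed balancing hyperparameter."""
    attention = softmax(shot_scores)                       # attention over shots
    probs = softmax(video_logits)                          # video-level class posterior
    cls_loss = -np.log(probs[label])                       # classification loss
    rel_loss = np.mean((attention - weak_relevance) ** 2)  # relevance loss
    return cls_loss + lam * rel_loss

# Toy example: 4 shots of one video, 3 candidate event classes.
scores = np.array([0.2, 1.5, -0.3, 0.8])      # raw attention scores per shot
relevance = np.array([0.1, 0.6, 0.05, 0.25])  # weak relevance (normalized)
logits = np.array([0.3, 2.0, -1.0])           # video-level logits
print(joint_loss(scores, relevance, logits, label=1))
```

In the paper's full model, the attention scores would come from an LSTM state at each timestep rather than being free parameters as here; the sketch only shows how the two losses combine.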
Keywords: Semantics; reliability; visualization; testing; computer vision; noise measurement
Rights: © 2017 IEEE
DOI: 10.1109/ICCV.2017.394
Appears in Collections:Aurora harvest 4
Computer Science publications
