Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/72210
Type: Journal article
Title: Incremental learning of 3D-DCT compact representations for robust visual tracking
Author: Li, X.
Dick, A.
Shen, C.
van den Hengel, A.
Wang, H.
Citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013; 35(4):863-881
Publisher: IEEE Computer Soc
Issue Date: 2013
ISSN: 0162-8828
1939-3539
Statement of Responsibility: Xi Li, Anthony Dick, Chunhua Shen, Anton van den Hengel, and Hanzi Wang
Abstract: Visual tracking usually requires an object appearance model that is robust to changing illumination, pose and other factors encountered in video. Many recent trackers utilize appearance samples in previous frames to form the bases upon which the object appearance model is built. This approach has the following limitations: (a) the bases are data driven, so they can be easily corrupted; and (b) it is difficult to robustly update the bases in challenging situations. In this paper, we construct an appearance model using the 3D discrete cosine transform (3D-DCT). The 3D-DCT is based on a set of cosine basis functions, which are determined by the dimensions of the 3D signal and thus independent of the input video data. In addition, the 3D-DCT can generate a compact energy spectrum whose high-frequency coefficients are sparse if the appearance samples are similar. By discarding these high-frequency coefficients, we simultaneously obtain a compact 3D-DCT based object representation and a signal reconstruction-based similarity measure (reflecting the information loss from signal reconstruction). To efficiently update the object representation, we propose an incremental 3D-DCT algorithm, which decomposes the 3D-DCT into successive operations of the 2D discrete cosine transform (2D-DCT) and 1D discrete cosine transform (1D-DCT) on the input video data. As a result, the incremental 3D-DCT algorithm only needs to compute the 2D-DCT for newly added frames as well as the 1D-DCT along the third dimension, which significantly reduces the computational complexity. Based on this incremental 3D-DCT algorithm, we design a discriminative criterion to evaluate the likelihood of a test sample belonging to the foreground object. We then embed the discriminative criterion into a particle filtering framework for object state inference over time. Experimental results demonstrate the effectiveness and robustness of the proposed tracker.
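The following is a minimal sketch (not the authors' implementation) of the separable 3D-DCT idea described in the abstract: a 2D-DCT is applied to each appearance frame, a 1D-DCT is then taken along the temporal axis, high-frequency coefficients are discarded to form a compact representation, and the reconstruction error gives a similarity measure. The sample sizes and the keep=(8, 8, 4) truncation are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn, dct, idct

def dct3_separable(volume):
    """3D-DCT via separable steps: 2D-DCT on each frame (axes 0, 1),
    then 1D-DCT along the temporal axis (axis 2).
    For a newly added frame, only its 2D-DCT needs to be computed before
    repeating the cheap 1D-DCT along axis 2 (the incremental step the
    abstract describes)."""
    per_frame = dctn(volume, axes=(0, 1), norm='ortho')
    return dct(per_frame, axis=2, norm='ortho')

def idct3_separable(coeffs):
    """Inverse of dct3_separable (1D inverse DCT, then 2D inverse DCT)."""
    per_frame = idct(coeffs, axis=2, norm='ortho')
    return idctn(per_frame, axes=(0, 1), norm='ortho')

def compact_representation(volume, keep=(8, 8, 4)):
    """Keep only the low-frequency corner of the 3D-DCT spectrum
    (hypothetical truncation sizes, chosen here for illustration)."""
    coeffs = dct3_separable(volume)
    compact = np.zeros_like(coeffs)
    kh, kw, kt = keep
    compact[:kh, :kw, :kt] = coeffs[:kh, :kw, :kt]
    return compact

def reconstruction_similarity(volume, compact_coeffs):
    """Similarity based on information loss: the smaller the reconstruction
    error from the compact coefficients, the higher the similarity."""
    recon = idct3_separable(compact_coeffs)
    return -np.linalg.norm(volume - recon)

# Toy usage: ten 32x32 appearance samples stacked along the third axis.
samples = np.random.rand(32, 32, 10)
coeffs = compact_representation(samples)
print(reconstruction_similarity(samples, coeffs))
```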
Keywords: Visual tracking
appearance model
compact representation
discrete cosine transform (DCT)
incremental learning
template matching
Rights: © 2012 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
DOI: 10.1109/TPAMI.2012.166
Grant ID: http://purl.org/au-research/grants/arc/DP1094764
Published version: http://dx.doi.org/10.1109/tpami.2012.166
Appears in Collections: Aurora harvest 5
Computer Science publications

Files in This Item:
File           Description       Size     Format
hdl_72210.pdf  Accepted version  9.11 MB  Adobe PDF  View/Open

