Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/65199
Type: Conference paper
Title: Visual localization and segmentation based on foreground/background modeling
Author: Wang, H.
Chin, T.
Suter, D.
Citation: 2010 IEEE International Conference on Acoustics, Speech and Signal Processing: Proceedings; pp. 1158-1161
Publisher: IEEE
Publisher Place: USA
Issue Date: 2010
Series/Report no.: International Conference on Acoustics, Speech and Signal Processing (ICASSP)
ISBN: 9781424442966
ISSN: 1520-6149
Conference Name: IEEE International Conference on Acoustics, Speech and Signal Processing (2010 : Dallas, Texas)
Statement of Responsibility: Hanzi Wang, Tat-Jun Chin and David Suter
Abstract: In this paper, we propose a novel method to localize (or track) a foreground object and segment it from the surrounding background under occlusion, using a moving camera. We measure the likelihood of a candidate target position with a combination of a generative model and a discriminative model, accounting not only for the similarity of the foreground to the target model but also for the dissimilarity between the foreground and background appearances. Object segmentation is treated as a binary labeling problem, and a Markov Random Field (MRF) imposes a spatial smoothness prior on the foreground/background labels. We demonstrate the advantages of the proposed method on several challenging videos, comparing its results with those of several other popular methods, and show that it achieves good results.
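
The following is a minimal, hypothetical sketch (not the authors' code) of the kind of likelihood the abstract describes: a candidate window is scored by combining a generative term (similarity of the candidate foreground to the target appearance model) with a discriminative term (dissimilarity between the candidate foreground and the surrounding background). The histogram representation, Bhattacharyya coefficient, and exponential weighting are illustrative assumptions; the paper's actual appearance model and particle-filter details may differ.

```python
# Hypothetical sketch of a combined generative/discriminative observation
# likelihood for tracking; all modeling choices here are assumptions made
# for illustration, not the authors' implementation.
import numpy as np

def color_histogram(pixels, bins=16):
    """Normalized grayscale histogram of a flat array of pixel intensities."""
    hist, _ = np.histogram(pixels, bins=bins, range=(0, 256))
    hist = hist.astype(float)
    return hist / (hist.sum() + 1e-12)

def bhattacharyya(p, q):
    """Bhattacharyya coefficient in [0, 1]; 1 means identical distributions."""
    return np.sum(np.sqrt(p * q))

def candidate_likelihood(fg_pixels, bg_pixels, fg_model, sigma=0.2):
    """Score a candidate position: high similarity of the foreground to the
    target model AND high dissimilarity between foreground and background."""
    p_fg = color_histogram(fg_pixels)
    p_bg = color_histogram(bg_pixels)
    similarity = bhattacharyya(p_fg, fg_model)        # generative term
    dissimilarity = 1.0 - bhattacharyya(p_fg, p_bg)   # discriminative term
    score = similarity * dissimilarity
    # Exponential form, as is common for particle-filter observation models.
    return np.exp(-(1.0 - score) / (sigma ** 2))

# Toy usage: a bright target on a dark background scores higher than a
# candidate window whose content looks like the background.
rng = np.random.default_rng(0)
fg_model = color_histogram(rng.normal(200, 10, 500).clip(0, 255))
good_fg = rng.normal(200, 10, 400).clip(0, 255)
bad_fg = rng.normal(60, 10, 400).clip(0, 255)
background = rng.normal(60, 10, 800).clip(0, 255)
print(candidate_likelihood(good_fg, background, fg_model))  # high likelihood
print(candidate_likelihood(bad_fg, background, fg_model))   # near zero
```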
Keywords: Visual tracking
video segmentation
particle filters
appearance modeling
occlusions
Rights: ©2010 IEEE
DOI: 10.1109/ICASSP.2010.5495372
Description (link): http://www.icassp2010.com/
Published version: http://dx.doi.org/10.1109/icassp.2010.5495372
Appears in Collections: Aurora harvest
Computer Science publications

Files in This Item:
There are no files associated with this item.
