|Title:||Visual localization and segmentation based on foreground/background modeling|
|Citation:||2010 IEEE International Conference on Acoustics, Speech and Signal Processing: Proceedings; pp. 1158-1161|
|Series/Report no.:||International Conference on Acoustics Speech and Signal Processing ICASSP|
|Conference Name:||IEEE International Conference on Acoustics, Speech and Signal Processing (2010 : Dallas, Texas)|
|Author:||Hanzi Wang, Tat-Jun Chin and David Suter|
|Abstract:||In this paper, we propose a novel method to localize (or track) a foreground object and to segment it from the surrounding background under occlusions, for a moving camera. We measure the likelihood of a candidate target position using a combination of a generative model and a discriminative model, considering not only the similarity of the foreground to the target model but also the dissimilarity between the foreground and background appearances. Object segmentation is treated as a binary labeling problem, and a Markov Random Field (MRF) imposes a spatial smoothness prior on the foreground/background labels. We demonstrate the advantages of the proposed method on several challenging videos, where it achieves good results compared with several other popular methods.|
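The abstract's central idea, combining a generative cue (similarity to the target model) with a discriminative cue (dissimilarity from the background) into a single likelihood for particle weighting, can be illustrated with a minimal sketch. The Bhattacharyya coefficient for histogram comparison, the weighting parameters `alpha` and `sigma`, and the exponential likelihood form are all assumptions for illustration, not the paper's actual formulation:

```python
import numpy as np

def bhattacharyya(p, q):
    # Similarity between two normalized histograms (1.0 = identical).
    return np.sum(np.sqrt(p * q))

def particle_likelihood(candidate_hist, target_hist, background_hist,
                        alpha=0.5, sigma=0.2):
    # Generative term: how well the candidate region matches the target model.
    sim_fg = bhattacharyya(candidate_hist, target_hist)
    # Discriminative term: how different the candidate is from the background.
    dis_bg = 1.0 - bhattacharyya(candidate_hist, background_hist)
    # Combine both cues; alpha balances the terms (assumed form, not the paper's).
    score = alpha * sim_fg + (1.0 - alpha) * dis_bg
    # Map the combined score to a likelihood for particle-filter weighting.
    return np.exp(-(1.0 - score) ** 2 / (2.0 * sigma ** 2))

# Toy example: three-bin normalized color histograms.
target = np.array([0.6, 0.3, 0.1])
background = np.array([0.1, 0.2, 0.7])
good = np.array([0.55, 0.35, 0.10])  # resembles the target
bad = np.array([0.15, 0.15, 0.70])   # resembles the background
assert particle_likelihood(good, target, background) > \
       particle_likelihood(bad, target, background)
```

A candidate that merely matches the target model but also resembles the background is down-weighted by the discriminative term, which is the intuition behind using both cues rather than foreground similarity alone.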
|Keywords:||Visual tracking; video segmentation; particle filters; appearance modeling; occlusions|
|Appears in Collections:||Computer Science publications|