Robust Foreground Segmentation Based on Two Effective Background Models

Date

2008

Authors

Li, X.
Hu, W.
Zhang, Z.
Zhang, X.

Type

Conference paper

Citation

MM’08: Proceedings of the 2008 ACM International Conference on Multimedia, with co-located Symposium & Workshops, Vancouver, BC, Canada, October 27–31, 2008 / pp. 223-228

Statement of Responsibility

Xi Li, Weiming Hu, Zhongfei Zhang, Xiaoqin Zhang

Conference Name

ACM International Conference on Multimedia Information Retrieval (1st : 2008 : Vancouver, Canada)

Abstract

Foreground segmentation is a common foundation for many computer vision applications such as tracking and behavior analysis. Most existing algorithms for foreground segmentation learn pixel-based statistical models, which are sensitive to dynamic scenes with illumination changes, shadow movement, and swaying trees. To address this problem, we propose two block-based background models using the recently developed incremental rank-(R1, R2, R3) tensor-based subspace learning algorithm (referred to as IRTSA [1]). These two IRTSA-based background models (i.e., IRTSA-GBM and IRTSA-CBM, for grayscale and color images respectively) incrementally learn low-order tensor-based eigenspace representations to fully capture the intrinsic spatio-temporal characteristics of a scene, leading to robust foreground segmentation results. Theoretical analysis and experimental evaluations demonstrate the promise and effectiveness of the proposed background models.
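The block-based eigenspace idea behind the abstract can be sketched in a much simplified form. The following is not the paper's IRTSA algorithm: instead of an incremental rank-(R1, R2, R3) tensor decomposition, this sketch fits an ordinary batch PCA eigenspace per image block and flags blocks whose reconstruction error against that eigenspace exceeds a threshold. All class and function names are illustrative.

```python
import numpy as np

def extract_blocks(frame, block=8):
    """Split a grayscale frame into non-overlapping block vectors."""
    h, w = frame.shape
    vecs = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            vecs.append(frame[i:i + block, j:j + block].ravel())
    return np.array(vecs, dtype=float)

class BlockBackgroundModel:
    """Per-block background model via a batch PCA eigenspace
    (a simplified stand-in for the paper's incremental tensor subspace)."""

    def __init__(self, rank=2):
        self.rank = rank
        self.mean = None    # per-block mean vectors
        self.basis = None   # per-block leading eigenvectors

    def fit(self, frames, block=8):
        # Stack block vectors from training frames: (n_frames, n_blocks, d)
        data = np.stack([extract_blocks(f, block) for f in frames])
        n, b, d = data.shape
        self.mean = data.mean(axis=0)
        self.basis = []
        for k in range(b):
            X = data[:, k, :] - self.mean[k]          # centered samples
            # Right singular vectors = eigenvectors of the block covariance
            _, _, vt = np.linalg.svd(X, full_matrices=False)
            self.basis.append(vt[:self.rank])
        return self

    def foreground(self, frame, block=8, thresh=20.0):
        """Return per-block reconstruction errors and a foreground mask."""
        vecs = extract_blocks(frame, block)
        errs = np.empty(len(vecs))
        for k, v in enumerate(vecs):
            c = v - self.mean[k]
            proj = self.basis[k].T @ (self.basis[k] @ c)  # eigenspace projection
            errs[k] = np.linalg.norm(c - proj)            # residual energy
        return errs, errs > thresh
```

In use, the model is trained on background-only frames; a block containing a new object reconstructs poorly in the learned eigenspace and is flagged as foreground, while gradual appearance variation absorbed by the leading eigenvectors is not. The paper's tensor formulation additionally preserves the 2-D spatial layout inside each block and updates the subspace incrementally rather than in batch.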

Rights

Copyright 2008 ACM
