Please use this identifier to cite or link to this item: http://hdl.handle.net/2440/64728
Type: Conference paper
Title: Multi-structure model selection via kernel optimisation
Author: Chin, T.J.
Suter, D.
Wang, H.
Citation: Proceedings of 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010; pp.3586-3593
Publisher: IEEE COMPUTER SOC
Publisher Place: 10662 LOS VAQUEROS CIRCLE, PO BOX 3014, LOS ALAMITOS, CA 90720-1264 USA
Issue Date: 2010
Series/Report no.: IEEE Conference on Computer Vision and Pattern Recognition
ISBN: 9781424469840
ISSN: 1063-6919
Conference Name: IEEE Conference on Computer Vision and Pattern Recognition (23rd : 2010 : San Francisco, CA)
Statement of Responsibility: Tat-Jun Chin, David Suter and Hanzi Wang
Abstract: Our goal is to fit multiple instances (or structures) of a generic model existing in data. We propose a novel model selection scheme to estimate the number of genuine structures present. In contrast to conventional model selection approaches, our method is driven by kernel-based learning. The input data are first clustered based on their potential to have emerged from the same structure; however, the number of clusters is deliberately overestimated to obtain a set of initial model fits to the data. We then resolve the oversegmentation via a series of kernel optimisations conducted through multiple kernel learning, using the concept of kernel-target alignment as the model selection criterion. Experiments on synthetic and real data show that our method outperforms previous model selection schemes. We also focus on the application of multi-body motion segmentation; in particular, we demonstrate success in estimating the number of motions in sequences with more than three unique motions.
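The abstract's model selection criterion, kernel-target alignment, measures how well a kernel matrix matches an ideal target kernel built from cluster labels. The sketch below is a minimal, generic illustration of that quantity (the standard Frobenius-cosine definition), not the paper's full multiple-kernel-learning pipeline; the function name and binary-label setup are assumptions for illustration.

```python
import numpy as np

def kernel_target_alignment(K, y):
    """Alignment between kernel matrix K and the ideal target kernel y y^T.

    Computed as the cosine similarity of the two matrices under the
    Frobenius inner product; the result lies in [-1, 1], with 1 meaning
    K perfectly reflects the label structure in y.
    """
    Y = np.outer(y, y)                      # ideal target kernel from labels
    inner = np.sum(K * Y)                   # Frobenius inner product <K, Y>
    return inner / (np.linalg.norm(K, "fro") * np.linalg.norm(Y, "fro"))
```

For example, a kernel constructed directly from the labels attains an alignment of exactly 1, while an unrelated kernel scores lower; a model selection scheme in this spirit would prefer the segmentation whose induced kernel maximises this score.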
Rights: ©2010 IEEE
RMID: 0020105302
DOI: 10.1109/CVPR.2010.5539931
Appears in Collections:Computer Science publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.