Type: Conference paper
Title: Dynamic and hierarchical multi-structure geometric model fitting
Author: Wong, H.
Chin, T.
Yu, J.
Suter, D.
Citation: ICCV 2011: Proceedings of the 2011 IEEE International Conference on Computer Vision, 2011: pp. 1044-1051
Publisher: IEEE
Publisher Place: USA
Issue Date: 2011
Series/Report no.: IEEE International Conference on Computer Vision
ISBN: 9781457711015
ISSN: 1550-5499
Conference Name: International Conference on Computer Vision (13th : 2011 : Barcelona, Spain)
Statement of Responsibility: Hoi Sim Wong, Tat-Jun Chin, Jin Yu and David Suter
Abstract: The ability to generate good model hypotheses is instrumental to accurate and robust geometric model fitting. We present a novel dynamic hypothesis generation algorithm for robust fitting of multiple structures. Underpinning our method is a fast guided sampling scheme enabled by analysing correlation of preferences induced by data and hypothesis residuals. Our method progressively accumulates evidence in the search space, and uses the information to dynamically (1) identify outliers, (2) filter unpromising hypotheses, and (3) bias the sampling for active discovery of multiple structures in the data, all achieved without sacrificing the speed associated with sampling-based methods. Our algorithm yields a disproportionately higher number of good hypotheses among the sampling outcomes, i.e., most hypotheses correspond to the genuine structures in the data. This directly supports a novel hierarchical model fitting algorithm that elicits the underlying stratified manner in which the structures are organized, allowing more meaningful results than traditional “flat” multi-structure fitting.
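The core idea in the abstract, biasing sampling by the correlation of preferences induced by residuals, can be illustrated with a minimal sketch. This is not the paper's implementation: the line-fitting setup, the top-k preference sets, and the overlap-based weighting below are illustrative assumptions in the general spirit of preference-based guided sampling.

```python
import random

def line_from_pair(p, q):
    # Line through two points as (a, b, c) with a*x + b*y + c = 0, unit normal.
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    n = (a * a + b * b) ** 0.5 or 1.0
    return (a / n, b / n, -(a / n) * x1 - (b / n) * y1)

def residual(line, pt):
    a, b, c = line
    return abs(a * pt[0] + b * pt[1] + c)

def preference_set(point, hypotheses, k):
    # Indices of the k hypotheses with the smallest residual for this point.
    order = sorted(range(len(hypotheses)),
                   key=lambda h: residual(hypotheses[h], point))
    return set(order[:k])

def guided_sample(points, hypotheses, k, rng):
    # Pick a seed point uniformly, then a second point with probability
    # proportional to preference-set overlap with the seed: points that
    # agree on the same good hypotheses likely lie on the same structure.
    i = rng.randrange(len(points))
    prefs_i = preference_set(points[i], hypotheses, k)
    weights = [0.0 if j == i
               else len(prefs_i & preference_set(p, hypotheses, k)) + 1e-6
               for j, p in enumerate(points)]
    j = rng.choices(range(len(points)), weights=weights)[0]
    return i, j

rng = random.Random(0)
# Synthetic data: two noisy line structures plus a few gross outliers.
pts = ([(x / 10, 0.5 * x / 10 + rng.gauss(0, 0.02)) for x in range(10)]
       + [(x / 10, 1 - x / 10 + rng.gauss(0, 0.02)) for x in range(10)]
       + [(rng.random(), 3 * rng.random()) for _ in range(5)])

# Bootstrap with uniformly sampled hypotheses, then draw one guided sample;
# in a full sampler this loop would repeat, refreshing preferences as
# evidence accumulates.
hyps = [line_from_pair(*rng.sample(pts, 2)) for _ in range(50)]
i, j = guided_sample(pts, hyps, k=10, rng=rng)
print(i, j)
```

Pairs drawn this way are more likely to come from a single structure than uniform pairs, which is the property the paper exploits to concentrate hypotheses on genuine structures.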
Rights: © 2011 IEEE
RMID: 0020115712
DOI: 10.1109/ICCV.2011.6126350
Appears in Collections: Computer Science publications

Files in This Item:
File: RA_hdl_65276.pdf (Restricted Access, 2.55 MB, Adobe PDF)
