Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/55486
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wang, H. | en
dc.contributor.author | Mirota, D. | en
dc.contributor.author | Ishii, M. | en
dc.contributor.author | Hager, G. | en
dc.date.issued | 2008 | en
dc.identifier.citation | IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2008: pp.1-7 | en
dc.identifier.isbn | 9781424422425 | en
dc.identifier.uri | http://hdl.handle.net/2440/55486 | -
dc.description.abstract | To correctly estimate the camera motion parameters and reconstruct the structure of the surrounding tissues from endoscopic image sequences, we need not only to deal with outliers (e.g., mismatches), which may involve more than 50% of the data, but also to accurately distinguish inliers (correct matches) from outliers. In this paper, we propose a new robust estimator, Adaptive Scale Kernel Consensus (ASKC), which can tolerate more than 50 percent outliers while automatically estimating the scale of inliers. With ASKC, we develop a reliable feature tracking algorithm. This, in turn, allows us to develop a complete system for estimating endoscopic camera motion and reconstructing anatomical structures from endoscopic image sequences. Preliminary experiments on endoscopic sinus imagery have achieved promising results. | en
dc.description.statementofresponsibility | Hanzi Wang; Mirota, D.; Ishii, M. and Hager, G.D. | en
dc.description.uri | http://dx.doi.org/10.1109/CVPR.2008.4587687 | en
dc.language.iso | en | en
dc.publisher | IEEE | en
dc.title | Robust motion estimation and structure recovery from endoscopic image sequences with an adaptive Scale Kernel Consensus estimator | en
dc.type | Conference paper | en
dc.contributor.conference | IEEE Conference on Computer Vision and Pattern Recognition (21st : 2008 : Anchorage, AK) | en
dc.publisher.place | Online | en
pubs.publication-status | Published | en
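
As a rough illustration of the kernel-consensus idea summarised in the abstract above, the following Python sketch fits a line to heavily contaminated data by scoring each randomly sampled hypothesis with a kernel density of its residuals at an automatically estimated scale. This is not the authors' ASKC algorithm or code: the line model (standing in for endoscopic camera motion), the Gaussian kernel, the k-th order-statistic scale estimate, and all function names are assumptions made only for this example.

```python
import numpy as np

def kernel_consensus_score(residuals, scale):
    """Score a hypothesis by the summed Gaussian kernel weights of its
    residuals at the estimated inlier scale (larger is better)."""
    u = residuals / scale
    return np.sum(np.exp(-0.5 * u ** 2)) / scale

def estimate_inlier_scale(residuals, k_fraction=0.1):
    """Crude data-driven scale: the k-th smallest absolute residual,
    normalised as if inliers were Gaussian. The paper derives its own
    adaptive scale estimator; this is only a stand-in."""
    k = max(2, int(k_fraction * len(residuals)))
    r_k = np.sort(np.abs(residuals))[k - 1]
    return r_k / 0.6745 + 1e-12  # 0.6745 ~ 75th percentile of |N(0,1)|

def robust_line_fit(x, y, n_trials=500, seed=0):
    """Toy kernel-consensus fit of y = a*x + b from data with gross outliers."""
    rng = np.random.default_rng(seed)
    best_score, best_model = -np.inf, None
    for _ in range(n_trials):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue  # degenerate minimal sample
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        residuals = y - (a * x + b)
        scale = estimate_inlier_scale(residuals)
        score = kernel_consensus_score(residuals, scale)
        if score > best_score:
            best_score, best_model = score, (a, b, scale)
    return best_model

# Synthetic check: 60% of the points are gross outliers, yet the
# kernel-consensus fit should stay close to the true line y = 2x + 1.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, 200)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.2, 200)
outliers = rng.random(200) < 0.6
y[outliers] = rng.uniform(-20.0, 40.0, outliers.sum())
print(robust_line_fit(x, y))  # roughly (2.0, 1.0, small scale)
```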
Appears in Collections: Aurora harvest 5
Computer Science publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.