Type: Conference paper
Title: Simultaneous monocular 2D segmentation, 3D pose recovery and 3D reconstruction
Author: Prisacariu, V.; Segal, A.; Reid, I.
Citation: 11th Asian Conference on Computer Vision, Daejeon, Korea, November 5-9, 2012 / K. M. Lee, Y. Matsushita, J. M. Rehg, Z. Hu (eds.), pp. 593-606
Publisher: Springer-Verlag
Publisher Place: Berlin Heidelberg
Issue Date: 2013
ISBN: 9783642373305
ISSN: 0302-9743
Conference Name: Asian Conference on Computer Vision (11th : 2012 : Daejeon, Korea)
Statement of Responsibility: Victor Adrian Prisacariu, Aleksandr V. Segal, and Ian Reid
Abstract: We propose a novel framework for joint 2D segmentation and 3D pose and shape recovery from a single monocular image source. In the past, integrating all three has proven difficult, largely because of the high degree of ambiguity in the 2D-to-3D mapping. Our solution is to learn nonlinear, probabilistic, low-dimensional latent spaces using the Gaussian Process Latent Variable Model (GP-LVM) dimensionality reduction technique. These act as class or activity constraints on a simultaneous, variational segmentation, recovery and reconstruction process. We define an image- and level-set-based energy function, which we minimise with respect to 3D pose and shape; the 2D segmentation results automatically as the projection of the recovered shape under the recovered pose. We represent 3D shapes as the zero levels of 3D level-set embedding functions, which we project down directly to probabilistic 2D occupancy maps, without an intermediary explicit contour stage. Finally, we detail a fast, open-source, GPU-based implementation of our algorithm, which we use to produce results on both real and artificial video sequences.
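
The abstract's central mechanism, projecting a 3D level-set embedding directly to a probabilistic 2D occupancy map with no explicit contour stage, can be sketched as below. This is a minimal NumPy illustration under assumed conventions (a sampled embedding phi over object-frame points, a 4x4 object-to-camera pose, pinhole intrinsics K); the function name project_occupancy and the sigmoid smoothing width are illustrative choices, not the paper's exact formulation.

    import numpy as np

    def project_occupancy(phi, grid_pts, pose, K, hw, sigma=0.75):
        """Sketch: project a sampled 3D level-set embedding straight to a
        per-pixel 2D occupancy map, without an explicit contour stage.

        phi      : (N,) embedding values at grid_pts (negative = inside shape)
        grid_pts : (N, 3) sample points in the object frame
        pose     : (4, 4) object-to-camera rigid transform
        K        : (3, 3) pinhole intrinsics
        hw       : (H, W) output image size
        """
        H, W = hw
        cam = pose[:3, :3] @ grid_pts.T + pose[:3, 3:4]  # points in camera frame
        front = cam[2] > 1e-6                            # keep points in front of camera
        uv = K @ cam[:, front]
        u = np.round(uv[0] / uv[2]).astype(int)
        v = np.round(uv[1] / uv[2]).astype(int)
        # Smoothed Heaviside: probability that each sample lies inside the shape.
        p = 1.0 / (1.0 + np.exp(np.clip(phi[front] / sigma, -50, 50)))
        keep = (u >= 0) & (u < W) & (v >= 0) & (v < H)
        # A pixel is occupied if ANY sample on its ray is inside:
        # P(occupied) = 1 - prod(1 - p); accumulate the product in log space.
        log_miss = np.zeros((H, W))
        np.add.at(log_miss, (v[keep], u[keep]), np.log1p(-p[keep]))
        return 1.0 - np.exp(log_miss)

A pose-and-shape optimiser in the spirit of the paper would compare such a map against per-pixel foreground/background image posteriors and descend the resulting energy with respect to the pose parameters and the GP-LVM latent shape coordinates.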
Rights: © Springer-Verlag Berlin Heidelberg 2013
RMID: 0020131113
DOI: 10.1007/978-3-642-37331-2_45
Appears in Collections: Computer Science publications

Files in This Item:
There are no files associated with this item.
