Title: Continuous pose estimation in 2D images at instance and category levels
Authors: Damien Teney, Justus Piater
Citation: Proceedings of the 10th International Conference on Computer and Robot Vision, 2013, pp. 121-127
Conference Name: 10th International Conference on Computer and Robot Vision (CRV), 29-31 May 2013, Regina, Canada
Abstract: We present a general method for tackling the related problems of pose estimation of known object instances and object categories. By representing the training images as a probability distribution over the joint appearance/pose space, the method is naturally suitable for modeling the appearance of a single instance of an object, or of diverse instances of the same category. The training data is weighted and forms a generative model, with the weights based on the informative power of each image feature for specific poses. Pose inference is performed through probabilistic voting in pose space, which is intrinsically robust to clutter and occlusions, and which we render tractable by treating the least interdependent dimensions separately. The scalability of category-level models is ensured during training by clustering the available image features in the joint appearance/pose space. Finally, we show how to first efficiently use a category model, then possibly recognize a particular trained instance to refine the pose estimate using the corresponding instance-specific model. Our implementation uses edge points as image features, and was tested on several existing datasets. We obtain results on par with or superior to state-of-the-art methods, on both instance- and category-level problems, including for generalization to unseen instances.
Rights: © 2013 IEEE
Appears in Collections: Computer Science publications
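To make the probabilistic voting described in the abstract concrete, here is a minimal Python sketch. It is a toy illustration under simplifying assumptions, not the authors' implementation: the pose space is reduced to a single rotation angle, appearance descriptors are scalars, and all function and parameter names (`vote_for_pose`, `sigma_app`, `n_bins`) are hypothetical. The full method operates in a multi-dimensional pose space, decomposed by treating the least interdependent dimensions separately, and uses edge points as features.

```python
import math

# Toy sketch of probabilistic voting in a discretized 1-D pose space.
# Each training entry is (appearance descriptor, pose angle, weight),
# with the weight standing in for the feature's informative power.

def vote_for_pose(test_features, training_data, sigma_app=0.5, n_bins=36):
    """Estimate a pose angle by weighted voting.

    Each observed test feature casts votes for the poses of
    similar-looking training features; vote strength decays with
    appearance distance (Gaussian kernel) and is scaled by the
    training feature's weight. The estimate is the center of the
    strongest pose bin (the mode of the vote distribution).
    """
    votes = [0.0] * n_bins
    for f in test_features:
        for descr, pose, weight in training_data:
            d = abs(f - descr)
            sim = math.exp(-d * d / (2 * sigma_app ** 2))
            bin_idx = int(pose / (2 * math.pi) * n_bins) % n_bins
            votes[bin_idx] += weight * sim
    best = max(range(n_bins), key=lambda i: votes[i])
    return (best + 0.5) * 2 * math.pi / n_bins

# Usage: training features near descriptor 1.0 all share pose ~pi/2,
# so test features with similar appearance should vote that bin up.
training = [(1.0 + 0.01 * i, math.pi / 2, 1.0) for i in range(5)]
test = [1.0, 1.02]
estimate = vote_for_pose(test, training)
```

Because votes from cluttered or occluded regions simply fail to accumulate in any one bin, mode-seeking over the vote distribution degrades gracefully, which is the robustness property the abstract highlights.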