Efficient dense point cloud object reconstruction using deformation vector fields

Date

2018

Authors

Li, K.
Pham, T.
Zhan, H.
Reid, I.

Editors

Ferrari, V.

Type

Conference paper

Citation

Lecture Notes in Computer Science, 2018 / Ferrari, V. (ed./s), vol. 11216 LNCS, pp. 508-524

Statement of Responsibility

Kejie Li, Trung Pham, Huangying Zhan, Ian Reid

Conference Name

European Conference on Computer Vision (ECCV) (8 Sep 2018 - 14 Sep 2018 : Munich, Germany)

Abstract

Some existing CNN-based methods for single-view 3D object reconstruction represent a 3D object as either a 3D voxel occupancy grid or multiple depth-mask image pairs. However, these representations are inefficient, since empty voxels and background pixels are wasteful. We propose a novel approach that addresses this limitation by replacing masks with “deformation-fields”. Given a single image taken from an arbitrary viewpoint, a CNN predicts multiple surfaces, each in a canonical location relative to the object. Each surface comprises a depth-map and a corresponding deformation-field, which ensures that every pixel-depth pair in the depth-map lies on the object surface. These surfaces are then fused to form the full 3D shape. During training we use a combination of a per-view loss and a novel multi-view loss; the multi-view loss encourages the 3D points back-projected from a particular view to be consistent across views. Extensive experiments demonstrate the efficiency and efficacy of our method on single-view 3D object reconstruction.
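To illustrate the core idea of the representation described in the abstract, the sketch below back-projects a depth-map into a 3D point cloud after offsetting each pixel by its predicted 2D deformation vector, so no output point is spent on background. This is a minimal NumPy sketch, not the authors' implementation; the function name, array shapes, and the assumption that the deformation-field is a per-pixel (du, dv) offset are all hypothetical.

```python
import numpy as np

def backproject_with_deformation(depth, deform, K):
    """Lift a depth-map to 3D points, shifting each pixel by its
    deformation vector before back-projection (hypothetical sketch).

    depth  : (H, W) per-pixel depth values
    deform : (H, W, 2) per-pixel (du, dv) pixel offsets
    K      : (3, 3) camera intrinsic matrix
    """
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W].astype(np.float64)
    u = u + deform[..., 0]            # deformed pixel x-coordinates
    v = v + deform[..., 1]            # deformed pixel y-coordinates
    # homogeneous pixel coordinates scaled by depth: (u*z, v*z, z)
    pix = np.stack([u * depth, v * depth, depth], axis=-1)
    # back-project through the inverse intrinsics to camera space
    pts = pix.reshape(-1, 3) @ np.linalg.inv(K).T
    return pts                        # (H*W, 3) point cloud
```

Every pixel in the depth-map contributes one 3D point, which is what makes the representation denser than a depth-mask pair, where masked-out background pixels produce nothing.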

Rights

© Springer Nature Switzerland AG 2018
