Please use this identifier to cite or link to this item: http://hdl.handle.net/2440/127442
Type: Journal article
Title: Deep point-to-subspace metric learning for sketch-based 3D shape retrieval
Author: Lei, Y.
Zhou, Z.
Zhang, P.
Guo, Y.
Ma, Z.
Liu, L.
Citation: Pattern Recognition, 2019; 96:106981-1-106981-13
Publisher: Elsevier
Issue Date: 2019
ISSN: 0031-3203
1873-5142
Statement of Responsibility: Yinjie Lei, Ziqin Zhou, Pingping Zhang, Yulan Guo, Zijun Ma, Lingqiao Liu
Abstract: One key issue in managing a large-scale 3D shape dataset is to identify an effective way to retrieve a shape of interest. The sketch-based query, which enjoys flexibility in representing the user’s intention, has received growing interest in recent years due to the popularization of touchscreen technology. Essentially, the sketch depicts an abstraction of a shape from a certain view, while the shape contains the full 3D information. Matching between them is a cross-modality retrieval problem, and the state-of-the-art solution is to project the sketch and the 3D shape into a common space, in which cross-modality similarity can be computed as the feature similarity/distance within that space. However, for a given query, only some viewpoints of the 3D shape are representative. Thus, blindly projecting a 3D shape into a feature vector without considering the query will inevitably introduce query-unrepresentative information. To handle this issue, in this work we propose a Deep Point-to-Subspace Metric Learning (DPSML) framework that projects a sketch into a feature vector and a 3D shape into a subspace spanned by a few selected basis feature vectors. The similarity between them is defined as the distance between the query feature vector and its closest point in the subspace, obtained by solving an optimization problem on the fly. Note that the closest point is query-adaptive and can reflect the viewpoint information that is representative of the given query. To efficiently learn such a deep model, we formulate it as a classification problem with a special classifier design. To reduce the redundancy of 3D shapes, we also introduce a Representative-View Selection (RVS) module to select the most representative views of a 3D shape. Through extensive experiments on various datasets, we show that the proposed method achieves superior performance over competitive baseline methods and attains state-of-the-art performance.
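The point-to-subspace distance at the core of the abstract can be illustrated with a generic least-squares computation: given a query feature vector and a subspace spanned by a few basis vectors, the closest point in the subspace is the orthogonal projection of the query. The sketch below is a minimal NumPy illustration of that geometric idea only, not the authors' learned DPSML model; all names and the toy data are illustrative.

```python
import numpy as np

def point_to_subspace_distance(q, B):
    """Distance from query vector q to the subspace spanned by the
    columns of B, by solving min_c ||q - B c|| via least squares."""
    c, *_ = np.linalg.lstsq(B, q, rcond=None)
    closest = B @ c          # query-adaptive closest point in the subspace
    return np.linalg.norm(q - closest)

# Toy example: the subspace is the xy-plane in R^3.
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
q = np.array([3.0, 4.0, 5.0])
print(point_to_subspace_distance(q, B))  # 5.0 (the out-of-plane z-component)
```

In DPSML the basis vectors would be learned features of a shape's selected representative views, so the closest point adapts to each query, as described in the abstract.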
Keywords: Sketch-based 3D shape retrieval; cross-modality discrepancy; representative-view selection; point-to-subspace distance
Rights: © 2019 Elsevier Ltd. All rights reserved.
RMID: 1000025703
DOI: 10.1016/j.patcog.2019.106981
Appears in Collections:Electrical and Electronic Engineering publications

Files in This Item:
There are no files associated with this item.

