Deep Subspace Clustering Networks
Date
2017
Authors
Ji, P.
Zhang, T.
Li, H.
Salzmann, M.
Reid, I.
Editors
Guyon, I.
Luxburg, U.
Bengio, S.
Wallach, H.
Fergus, R.
Vishwanathan, S.
Garnett, R.
Type
Conference paper
Citation
Advances in Neural Information Processing Systems, 2017 / Guyon, I., Luxburg, U., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds), vol. 2017-December, pp. 1-10
Statement of Responsibility
Pan Ji, Tong Zhang, Hongdong Li, Mathieu Salzmann, Ian Reid
Conference Name
Conference on Neural Information Processing Systems (NIPS) (4 Dec 2017 - 9 Dec 2017, Long Beach, CA)
Abstract
We present a novel deep neural network architecture for unsupervised subspace clustering. This architecture is built upon deep auto-encoders, which non-linearly map the input data into a latent space. Our key idea is to introduce a novel self-expressive layer between the encoder and the decoder to mimic the "self-expressiveness" property that has proven effective in traditional subspace clustering. Being differentiable, our new self-expressive layer provides a simple but effective way to learn pairwise affinities between all data points through a standard back-propagation procedure. Being nonlinear, our neural-network-based method is able to cluster data points having complex (often nonlinear) structures. We further propose pre-training and fine-tuning strategies that let us effectively learn the parameters of our subspace clustering networks. Our experiments show that the proposed method significantly outperforms the state-of-the-art unsupervised subspace clustering methods.
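To make the self-expressive layer concrete: it is a fully connected layer whose learnable N x N coefficient matrix C re-expresses each latent code as a combination of the others, with the self-expression residual trained jointly with the auto-encoder reconstruction. The following is a minimal PyTorch sketch under those assumptions; the names `SelfExpressiveLayer` and `dsc_loss`, the initialization scale, and the weights `lam1`/`lam2` are illustrative choices, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class SelfExpressiveLayer(nn.Module):
    """Learnable N x N coefficient matrix C that reconstructs each latent
    code as a linear combination of the other samples' latent codes."""
    def __init__(self, num_samples: int):
        super().__init__()
        # Small random init; the diagonal is zeroed in forward() so a
        # point cannot trivially represent itself.
        self.coeff = nn.Parameter(1e-4 * torch.randn(num_samples, num_samples))

    def forward(self, z: torch.Tensor):
        # z: (N, d) latent codes from the encoder for the whole dataset.
        c = self.coeff - torch.diag(torch.diagonal(self.coeff))
        return c @ z, c  # (N, d) self-expressed codes, coefficient matrix

def dsc_loss(x, x_rec, z, z_se, c, lam1=1.0, lam2=1.0):
    recon = ((x - x_rec) ** 2).sum()  # auto-encoder reconstruction term
    reg = (c ** 2).sum()              # Frobenius regularizer on C (an L1
                                      # norm is another common choice)
    expr = ((z - z_se) ** 2).sum()    # self-expression residual ||Z - CZ||^2
    return recon + lam1 * reg + lam2 * expr
```

As is standard in self-expressiveness-based subspace clustering, the learned C is then turned into an affinity matrix (e.g., |C| + |Cᵀ|) and fed to spectral clustering to obtain the final segmentation.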
Rights
Copyright status unknown