Type: Conference paper
Title: Learning representations of ultrahigh-dimensional data for random distance-based outlier detection
Author: Pang, G.
Cao, L.
Chen, L.
Liu, H.
Citation: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018, pp.2041-2050
Publisher: Association for Computing Machinery
Publisher Place: New York
Issue Date: 2018
ISBN: 9781450355520
Conference Name: International Conference on Knowledge Discovery and Data Mining (KDD) (19 Aug 2018 - 23 Aug 2018 : London, UK)
Statement of Responsibility: Guansong Pang, Longbing Cao, Ling Chen and Huan Liu
Abstract: Learning expressive low-dimensional representations of ultrahigh-dimensional data, e.g., data with thousands or millions of features, has been a major way to enable learning methods to cope with the curse of dimensionality. However, existing unsupervised representation learning methods mainly focus on preserving the data regularity information and learn the representations independently of subsequent outlier detection methods, which can result in suboptimal and unstable performance when detecting irregularities (i.e., outliers). This paper introduces a ranking model-based framework, called RAMODO, to address this issue. RAMODO unifies representation learning and outlier detection to learn low-dimensional representations that are tailored to a state-of-the-art outlier detection approach - the random distance-based approach. This customized learning yields better-optimized and more stable representations for the targeted outlier detectors. Additionally, RAMODO can leverage a small amount of labeled data as prior knowledge to learn more expressive and application-relevant representations. We instantiate RAMODO as an efficient method called REPEN to demonstrate the performance of RAMODO. Extensive empirical results on eight real-world ultrahigh-dimensional data sets show that REPEN (i) enables a random distance-based detector to obtain significantly better AUC performance and a two orders of magnitude speedup; (ii) performs substantially better and more stably than four state-of-the-art representation learning methods; and (iii) leverages less than 1% of labeled data to achieve up to 32% AUC improvement.
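The random distance-based detection approach the abstract refers to scores each point by its distance to a small random subsample of the data, averaged over an ensemble. The sketch below is a minimal, hypothetical illustration of that idea only (a nearest-neighbour-in-random-subsample scorer); it is not the paper's REPEN implementation, and the function name and parameters are assumptions for illustration.

```python
import numpy as np

def random_distance_scores(X, subsample_size=16, n_ensembles=50, seed=0):
    """Illustrative random distance-based outlier scores (hypothetical
    helper, not the paper's method): score each point by its distance
    to the nearest neighbour in a small random subsample, averaged over
    an ensemble of subsamples. Larger scores suggest outliers."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    scores = np.zeros(n)
    for _ in range(n_ensembles):
        # draw a small random subsample without replacement
        idx = rng.choice(n, size=min(subsample_size, n), replace=False)
        S = X[idx]
        # distances from every point to every subsample member
        d = np.linalg.norm(X[:, None, :] - S[None, :, :], axis=2)
        # a subsampled point has distance 0 to itself; mask that out
        d[idx, np.arange(len(idx))] = np.inf
        scores += d.min(axis=1)
    return scores / n_ensembles
```

Because each subsample is tiny, one scoring pass costs O(n * subsample_size) distance computations, which is the source of the speedup such detectors offer over full pairwise nearest-neighbour methods.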
Keywords: Outlier detection; representation learning; ultrahigh-dimensional data; dimension reduction
Rights: © 2018 Association for Computing Machinery.
DOI: 10.1145/3219819.3220042
Grant ID:
Published version: https://doi.org/10.1145/3219819.3220042
Appears in Collections:Aurora harvest 4
Australian Institute for Machine Learning publications

Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.