Please use this identifier to cite or link to this item:
https://hdl.handle.net/2440/64195
Type: | Conference paper |
Title: | A gradient-based metric learning algorithm for k-NN classifiers |
Author: | Zaidi, N.; Squire, D.; Suter, D. |
Citation: | AI 2010: Advances in Artificial Intelligence: 23rd Australasian Joint Conference, Adelaide, Australia, December 2010: Proceedings / Jiuyong Li (ed.): pp. 194-203 |
Publisher: | Springer-Verlag |
Publisher Place: | Netherlands |
Issue Date: | 2010 |
Series/Report no.: | Lecture Notes in Computer Science; 6464 |
ISBN: | 3642174310 9783642174315 |
ISSN: | 0302-9743 1611-3349 |
Conference Name: | Australasian Joint Conference on Artificial Intelligence (23rd : 2010 : Adelaide, Sth. Aust.) |
Editor: | Li, J.Y. |
Statement of Responsibility: | Nayyar Abbas Zaidi, David McG. Squire and David Suter |
Abstract: | Nearest Neighbor (NN) classification/regression techniques are, besides being simple, amongst the most widely applied and well-studied techniques for pattern recognition in machine learning. A drawback, however, is their assumption that a suitable metric is available for measuring distances to the k nearest neighbors. It has been shown that a k-NN classifier with a suitable distance metric can outperform other, more sophisticated, alternatives such as Support Vector Machines and Gaussian Process classifiers. For this reason, much recent research on k-NN methods has focused on metric learning, i.e. finding an optimized metric. In this paper we propose a simple gradient-based algorithm for metric learning. We discuss in detail the motivations behind metric learning, i.e. error minimization and margin maximization. Our formulation differs from the prevalent metric learning techniques, whose goal is to maximize the classifier's margin: instead, our proposed technique (MEGM) finds an optimal metric by directly minimizing the mean square error. Our technique not only greatly improves k-NN performance, but also outperforms competing metric learning techniques. Promising results are reported on major UCIML databases. © 2010 Springer-Verlag. |
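The core idea in the abstract — learning a distance metric by gradient descent directly on the nearest-neighbor mean squared error, rather than by margin maximization — can be sketched as follows. This is an illustrative reconstruction, not the authors' exact MEGM algorithm: it assumes a diagonal metric (one non-negative weight per feature), replaces hard k-NN with a differentiable leave-one-out soft nearest-neighbor predictor so the MSE has a gradient, and assumes binary 0/1 labels; the function names are hypothetical.

```python
import numpy as np

def loo_soft_loss(X, y, a):
    """Leave-one-out soft nearest-neighbor mean squared error under a
    diagonal metric a (one non-negative weight per feature)."""
    D = (X[:, None, :] - X[None, :, :]) ** 2   # per-feature squared diffs
    dist = D @ a                                # (n, n) weighted distances
    np.fill_diagonal(dist, np.inf)              # exclude each point itself
    W = np.exp(dist.min(axis=1, keepdims=True) - dist)
    W /= W.sum(axis=1, keepdims=True)           # soft-neighbor weights
    p = W @ y                                   # soft prediction per point
    return np.mean((p - y) ** 2)

def learn_diagonal_metric(X, y, steps=300, lr=0.1):
    """Gradient descent on the leave-one-out soft nearest-neighbor MSE,
    in the spirit of MSE-driven metric learning (not the exact MEGM)."""
    n, d = X.shape
    D = (X[:, None, :] - X[None, :, :]) ** 2
    a = np.ones(d)                              # start from Euclidean
    for _ in range(steps):
        dist = D @ a
        np.fill_diagonal(dist, np.inf)
        W = np.exp(dist.min(axis=1, keepdims=True) - dist)
        W /= W.sum(axis=1, keepdims=True)
        p = W @ y
        r = p - y                               # residuals
        # d p_i / d a_k = p_i * sum_j W_ij D_ijk - sum_j W_ij y_j D_ijk
        wD = np.einsum('ij,ijk->ik', W, D)
        wyD = np.einsum('ij,j,ijk->ik', W, y, D)
        grad = (2.0 / n) * (r[:, None] * (p[:, None] * wD - wyD)).sum(axis=0)
        a = np.maximum(a - lr * grad, 0.0)      # keep metric weights >= 0
    return a
```

On a toy two-class problem where only the first feature carries class information, the learned weights should favor that feature and shrink the noise feature, lowering the leave-one-out error relative to the plain Euclidean metric:

```python
rng = np.random.default_rng(0)
n = 40
x0 = np.concatenate([rng.normal(0.0, 0.5, n), rng.normal(3.0, 0.5, n)])
x1 = rng.normal(0.0, 2.0, 2 * n)               # pure noise feature
X = np.column_stack([x0, x1])
y = np.concatenate([np.zeros(n), np.ones(n)])
a = learn_diagonal_metric(X, y)
```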
Rights: | © Springer-Verlag Berlin Heidelberg 2010 |
DOI: | 10.1007/978-3-642-17432-2_20 |
Published version: | http://dx.doi.org/10.1007/978-3-642-17432-2_20 |
Appears in Collections: | Aurora harvest Computer Science publications |
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.