Title: An improved growing LVQ for text classification
Authors: Wang, X.; Shen, H. [ORCID: 0000-0002-3663-6591; 0000-0003-0649-0648]
Type: Conference paper
Date issued: 2009
Date accessioned/available: 2010-06-01
Citation: Proceedings of the 6th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD) 2009, pp. 114-118
ISBN: 9780769537351
DOI: 10.1109/FSKD.2009.340
Scopus ID: 2-s2.0-76349107141
URI: http://hdl.handle.net/2440/58619
Internal record IDs: 0020097592; 34285
Language: en
Rights: © 2009 IEEE
Keywords: Text Classification; KNN; Learning Vector Quantization; Reference Sample

Abstract: KNN, as a simple classification method, has been widely applied in text classification. KNN-based text classification suffers from two problems: a large computational load and a loss of classification accuracy caused by the uneven distribution of training samples. To address these problems, we propose a new growing LVQ method, based on minimizing the increment of learning errors, and apply it to text classification. Our method generates a representative (reference) sample set after a single training pass over the sample set, and hence has strong learning ability. Experiments show improvements in both time and accuracy. For our algorithm, we also propose a learning-sequence arrangement method that outperforms alternative orderings.
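To illustrate the idea the abstract describes, the following is a minimal, hypothetical sketch of an LVQ1-style learner with a simple growing rule: references with the correct label are pulled toward a sample, wrong-label winners are pushed away, and a poorly represented sample is added as a new reference. The `grow_threshold` criterion is an assumption for illustration; the paper's actual error-increment-based growing rule is not reproduced here.

```python
# Hypothetical sketch: LVQ1-style updates plus a naive growing rule.
# The distance-threshold growth criterion is an illustrative assumption,
# not the paper's error-increment minimization.
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class GrowingLVQ:
    def __init__(self, lr=0.1, grow_threshold=1.0):
        self.lr = lr                          # learning rate
        self.grow_threshold = grow_threshold  # distance beyond which we add a reference
        self.refs = []                        # list of (vector, label) reference samples

    def nearest(self, x):
        # index of the reference vector closest to x
        return min(range(len(self.refs)), key=lambda i: dist(self.refs[i][0], x))

    def train_one(self, x, label):
        if not self.refs:
            self.refs.append((list(x), label))
            return
        i = self.nearest(x)
        vec, ref_label = self.refs[i]
        if ref_label == label:
            # LVQ1: pull the correctly labeled winner toward the sample
            self.refs[i] = ([v + self.lr * (xi - v) for v, xi in zip(vec, x)], ref_label)
        elif dist(vec, x) > self.grow_threshold:
            # grow: sample is far from every reference, so keep it as a new reference
            self.refs.append((list(x), label))
        else:
            # LVQ1: push the wrongly labeled winner away from the sample
            self.refs[i] = ([v - self.lr * (xi - v) for v, xi in zip(vec, x)], ref_label)

    def predict(self, x):
        # nearest-neighbor classification over the (small) reference set
        return self.refs[self.nearest(x)][1]
```

Classifying against the compact reference set rather than the full training set is what reduces the KNN computation load the abstract refers to.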