Title: Scalable linear visual feature learning via online parallel nonnegative matrix factorization
Authors: Zhao, X.; Li, X.; Zhang, Z.; Shen, C.; Zhuang, Y.; Gao, L.; Li, X.
Type: Journal article
Year: 2016
Date deposited: 2017-10-22
Citation: IEEE Transactions on Neural Networks and Learning Systems, 2016; 27(12):2628-2642
ISSN: 2162-237X (print); 2162-2388 (online)
Handle: http://hdl.handle.net/2440/108835
Language: en

Abstract: Visual feature learning, which aims to construct an effective feature representation for visual data, has a wide range of applications in computer vision. It is often posed as a problem of nonnegative matrix factorization (NMF), which constructs a linear representation for the data. Although NMF is typically parallelized for efficiency, traditional parallelization methods suffer from either expensive computation or high runtime memory usage. To alleviate this problem, we propose a parallel NMF method called alternating least square block decomposition (ALSD), which efficiently solves a set of conditionally independent optimization subproblems based on a highly parallelized fine-grained grid-based blockwise matrix decomposition. By assigning each block optimization subproblem to an individual computing node, ALSD can be effectively implemented in a MapReduce-based Hadoop framework. In order to cope with dynamically varying visual data, we further present an incremental version of ALSD, which is able to incrementally update the NMF solution with a low computational cost. Experimental results demonstrate the efficiency and scalability of the proposed methods as well as their applications to image clustering and image retrieval.

Rights: © 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
Keywords: Feature learning; nonnegative matrix factorization (NMF); online algorithm; parallel computing
DOI: 10.1109/TNNLS.2015.2499273
Web of Science ID: 000388919600014
Scopus IDs: 2-s2.0-85001096690; 2-s2.0-84949818160
Other record IDs: 0030057457; 275848
ORCID: Shen, C. [0000-0002-8648-8718]
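
The abstract poses feature learning as NMF: approximate a nonnegative data matrix V by a product W·H of two nonnegative factors. As a minimal illustration of that objective (this is the classic Lee–Seung multiplicative-update algorithm, not the paper's parallel ALSD method; matrix sizes and names below are assumptions for the sketch):

```python
import random

def matmul(A, B):
    """Plain-Python matrix product of A (n x k) and B (k x m)."""
    k, m = len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def nmf(V, r, iters=200, eps=1e-9):
    """Factor nonnegative V (n x m) into W (n x r) and H (r x m)
    via multiplicative updates; nonnegativity is preserved because
    each update multiplies by a ratio of nonnegative terms."""
    random.seed(0)
    n, m = len(V), len(V[0])
    W = [[random.random() for _ in range(r)] for _ in range(n)]
    H = [[random.random() for _ in range(m)] for _ in range(r)]
    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H)
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(r)]
        # W <- W * (V H^T) / (W H H^T)
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(r)]
             for i in range(n)]
    return W, H

# Small demo: V has rank 2, so a rank-2 NMF can fit it closely.
V = [[1, 2, 3, 4],
     [2, 4, 6, 8],
     [1, 0, 1, 0],
     [2, 0, 2, 0]]
W, H = nmf(V, r=2)
WH = matmul(W, H)
err = max(abs(WH[i][j] - V[i][j]) for i in range(4) for j in range(4))
print(round(err, 4))
```

The paper's contribution is to split such a factorization into conditionally independent blockwise subproblems that can run on separate MapReduce nodes; the dense single-machine loop above only shows the underlying alternating-update structure being parallelized.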