Modular expansion of the hidden layer in Single Layer Feedforward Neural Networks
Date
2016
Authors
Tissera, M.D.
McDonnell, M.D.
Type
Conference paper
Citation
International Joint Conference on Neural Networks (IJCNN), 2016, art. no. 7727571, pp. 2939-2945
Statement of Responsibility
Migel D. Tissera and Mark D. McDonnell
Conference Name
International Joint Conference on Neural Networks (IJCNN) (24-29 Jul 2016 : Vancouver, Canada)
Abstract
We present a neural network architecture and a training algorithm designed to enable very rapid training with low demands on computational power, memory and time. The algorithm is based on a modular architecture that constructively expands the output-weights layer, so that the final network can be viewed as a Single Layer Feedforward Network (SLFN) with a large hidden layer. The method does not use backpropagation and consequently offers very fast training with very few trainable parameters in each module. It is therefore potentially useful for applications that require frequent retraining, or that rely on reduced hardware capability, such as mobile robots or the Internet of Things (IoT). We demonstrate the efficacy of the method on two benchmark image-classification datasets, MNIST and CIFAR-10. The network produces very favourable results for an SLFN on these benchmarks, with an average correct-classification rate of 99.07% on MNIST and nearly 82% on CIFAR-10 when applied to convolutional features. Code for the method has been made available online.
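To make the idea concrete, below is a minimal sketch of constructively growing the hidden layer of an SLFN and training only the output weights. It assumes an ELM-style construction (random, fixed input weights per module and a ridge-regression solve for the output layer, with no backpropagation); the paper's exact module construction and training rule may differ, and all function names and parameters here are illustrative, not taken from the released code.

```python
# Sketch: modular hidden-layer expansion for an SLFN.
# Assumptions (not from the paper): each module adds a block of hidden
# units with fixed random input weights; only the output weights are
# trained, via a regularized least-squares solve -- no backpropagation.
import numpy as np

def add_module(X, H_prev, n_new, rng):
    """Append n_new random-projection hidden units to the hidden layer."""
    W = rng.standard_normal((X.shape[1], n_new)) / np.sqrt(X.shape[1])
    b = rng.standard_normal(n_new)
    H_new = np.maximum(X @ W + b, 0.0)          # ReLU hidden activations
    return H_new if H_prev is None else np.hstack([H_prev, H_new])

def solve_output_weights(H, Y, lam=1e-3):
    """Train only the output layer: ridge regression in closed form."""
    A = H.T @ H + lam * np.eye(H.shape[1])
    return np.linalg.solve(A, H.T @ Y)

# Toy usage: grow the network module by module on random data.
rng = np.random.default_rng(0)
X = rng.standard_normal((512, 64))              # inputs
Y = np.eye(10)[rng.integers(0, 10, 512)]        # one-hot targets
H = None
for _ in range(4):                              # four modules of 100 units
    H = add_module(X, H, 100, rng)
    B = solve_output_weights(H, Y)              # re-solve after each expansion
    acc = (np.argmax(H @ B, 1) == np.argmax(Y, 1)).mean()
    print(f"hidden units: {H.shape[1]:4d}  train accuracy: {acc:.3f}")
```

Because each module's input weights are fixed once drawn, each expansion step only requires solving a linear system for the output weights, which is why this style of constructive training is fast and suited to low-power hardware.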
Rights
© 2016 IEEE