|Title:||Modular expansion of the hidden layer in Single Layer Feedforward Neural Networks|
|Citation:||2016 International Joint Conference on Neural Networks (IJCNN), 2016, vol. 2016-October, pp. 2939-2945|
|Series/Report no.:||IEEE International Joint Conference on Neural Networks (IJCNN)|
|Conference Name:||International Joint Conference on Neural Networks (IJCNN) (24 Jul 2016 - 29 Jul 2016 : Vancouver, Canada)|
|Author(s):||Migel D. Tissera and Mark D. McDonnell|
|Abstract:||We present a neural network architecture and training algorithm designed to enable very rapid training with low demands on computational processing power, memory, and time. The algorithm is based on a modular architecture that constructively expands the output weights layer, so that the final network can be viewed as a Single Layer Feedforward Network (SLFN) with a large hidden layer. The method does not use backpropagation, and each module has very few trainable parameters, so training is very fast. It is therefore potentially useful for applications that require frequent retraining, or that run on reduced hardware, such as mobile robots or Internet of Things (IoT) devices. We demonstrate the efficacy of the method on two benchmark image classification datasets, MNIST and CIFAR-10. The network produces very favourable results for an SLFN on these benchmarks, with an average correct classification rate of 99.07% on MNIST and nearly 82% on CIFAR-10 when applied to convolutional features. Code for the method has been made available online.|
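
To make the abstract's idea concrete, here is a minimal sketch of one way a constructively expanded SLFN can be trained without backpropagation. It assumes ELM-style modules (random, fixed hidden weights; output weights fit by regularized least squares) in which each new module fits the residual error left by the modules trained so far, so that summing module outputs is equivalent to one SLFN whose hidden layer concatenates every module's hidden units. The module structure, function names, and hyperparameters below are illustrative assumptions, not the authors' exact algorithm; their released code should be consulted for the real method.

    import numpy as np

    rng = np.random.default_rng(0)

    def train_module(X, T, n_hidden, reg=1e-3):
        # Random, fixed input weights: nothing here is trained by backpropagation.
        W = rng.standard_normal((X.shape[1], n_hidden)) / np.sqrt(X.shape[1])
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W + b)  # hidden-layer activations
        # Output weights via regularized least squares: the only trained parameters.
        V = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
        return W, b, V

    def train_constructively(X, T, n_modules=5, n_hidden=200):
        # Each new module is fit to the residual targets the ensemble has not
        # yet explained, constructively expanding the effective hidden layer.
        modules, residual = [], T.copy()
        for _ in range(n_modules):
            W, b, V = train_module(X, residual, n_hidden)
            modules.append((W, b, V))
            residual = residual - np.tanh(X @ W + b) @ V
        return modules

    def predict(modules, X):
        # Summing module outputs equals one large SLFN whose hidden layer
        # is the concatenation of all modules' hidden units.
        return sum(np.tanh(X @ W + b) @ V for W, b, V in modules)

    # Toy usage on synthetic data with one-hot targets (hypothetical example).
    X = rng.standard_normal((1000, 20))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    T = np.eye(2)[y]
    modules = train_constructively(X, T)
    print(f"train accuracy: {(predict(modules, X).argmax(1) == y).mean():.3f}")

Because each module only solves a small linear system, retraining is cheap, which is consistent with the abstract's motivation of frequent retraining on limited hardware.
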
|Rights:||© 2016 IEEE|
|Appears in Collections:||Electrical and Electronic Engineering publications|