Noise-Boosted Backpropagation Learning of Feedforward Threshold Neural Networks for Function Approximation

Date

2021

Authors

Duan, L.
Duan, F.
Chapeau-Blondeau, F.
Abbott, D.


Type

Journal article

Citation

IEEE Transactions on Instrumentation and Measurement, 2021; 70:1010612-1-1010612-12

Statement of Responsibility

Lingling Duan, Fabing Duan, François Chapeau-Blondeau, and Derek Abbott, Fellow, IEEE

Abstract

To ensure the feasibility of backpropagation training of feedforward threshold neural networks, each hidden layer is designed to be composed of a sufficiently large number of hard-limiting activation functions, each simultaneously excited by the weighted inputs and by mutually independent external noise components. Applying noise to the nondifferentiable activation functions enables a proper definition of the gradients, and the injected noise level is treated as a network parameter that can be adaptively updated by a stochastic gradient descent learning rule. This noise-boosted backpropagation learning process is found to converge to a nonzero optimized noise level, indicating that the injected noise is beneficial both for learning and for the ensuing retrieval phase. For minimizing the total error energy of function approximation in the designed threshold neural network, the proposed noise-boosted backpropagation method is proven to outperform injecting noise directly into the network inputs or weight coefficients. The Lipschitz continuity of the noise-smoothed activation function in the hidden layer is shown to guarantee local convergence of the learning process. Beyond Gaussian injected noise, the optimal noise type for training the designed threshold neural network is also solved numerically. Test experiments on approximating nonlinear functions and real-world datasets verify the feasibility of this noise-boosted backpropagation algorithm for threshold neural networks. These results not only extend the analysis of the beneficial effects of noise, akin to stochastic resonance and exploited here, to the universal approximation capabilities of threshold neural networks, but also allow backpropagation training of neural networks with a much wider family of nondifferentiable activation functions.
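The core mechanism described in the abstract, averaging a hard-limiting (Heaviside) activation over independent injected noise so that a well-defined gradient emerges, can be sketched as follows. This is an illustrative NumPy sketch under assumed Gaussian noise, not the authors' implementation; all function names and parameter values are assumptions:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def hard_limit(x):
    """Nondifferentiable threshold (Heaviside) activation."""
    return (x >= 0).astype(float)

def ensemble_output(x, sigma, n_units=200_000):
    """Average response of a hidden layer of n_units hard-limiters,
    each driven by the same weighted input x plus an independent
    Gaussian noise component of standard deviation sigma."""
    noise = sigma * rng.standard_normal(n_units)
    return hard_limit(x + noise).mean()

def smoothed_activation(x, sigma):
    """Expectation of the noisy hard-limiter: the Gaussian CDF
    Phi(x / sigma), which is smooth in both x and sigma, so both
    the weights and the noise level can be updated by SGD."""
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

def grad_x(x, sigma):
    """d/dx of the smoothed activation: a Gaussian pdf. A proper
    gradient exists despite the hard threshold in each unit."""
    return math.exp(-x**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

x, sigma = 0.3, 1.0
print(smoothed_activation(x, sigma))  # Phi(0.3) ≈ 0.6179
print(ensemble_output(x, sigma))      # Monte Carlo estimate, close to the analytic value
```

As the number of noisy hidden units grows, the ensemble average converges to the smooth Gaussian CDF, which is the differentiable surrogate through which backpropagation can flow.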

Rights

© 2021 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.
