[PDF][PDF] Transfer functions: hidden possibilities for better neural networks.

W Duch, N Jankowski - ESANN, 2001 - is.umk.pl
Sigmoidal or radial transfer functions do not guarantee the best generalization nor fast
learning of neural networks. Families of parameterized transfer functions provide flexible …

[PDF][PDF] Survey of neural transfer functions

W Duch, N Jankowski - Neural computing surveys, 1999 - fizyka.umk.pl
The choice of transfer functions may strongly influence complexity and performance of
neural networks. Although sigmoidal transfer functions are the most common there is no a …

Combination of supervised and unsupervised learning for training the activation functions of neural networks

I Castelli, E Trentin - Pattern Recognition Letters, 2014 - Elsevier
Standard feedforward neural networks benefit from the nice theoretical properties of
mixtures of sigmoid activation functions, but they may fail in several practical learning tasks …

[PDF][PDF] New neural transfer functions

W Duch, N Jankowski - 1997 - zbc.uz.zgora.pl
Adaptive systems of the Artificial Neural Network (ANN) type (Haykin, 1994) were initially
motivated by the parallel-processing capabilities of the real brain, but the processing …

[PDF][PDF] Feed forward neural networks with random weights

WF Schmidt, MA Kraaijveld… - … conference on pattern …, 1992 - researchgate.net
In the field of neural network research a number of spectacular experiments are described,
which seem to be in contradiction with the classical pattern recognition or statistical …

A survey on modern trainable activation functions

A Apicella, F Donnarumma, F Isgrò, R Prevete - Neural Networks, 2021 - Elsevier
In neural networks literature, there is a strong interest in identifying and defining activation
functions which can improve neural network performance. In recent years there has been a …

Fundamentals of machine learning

KL Du, MNS Swamy, KL Du, MNS Swamy - Neural networks and statistical …, 2014 - Springer

Learning and generalization in overparameterized neural networks, going beyond two layers

Z Allen-Zhu, Y Li, Y Liang - Advances in neural information …, 2019 - proceedings.neurips.cc

[PDF][PDF] Radial basis function networks

K Hlaváčková, R Neruda - Neural Network World, 1993 - researchgate.net
An overview of feedforward networks with one hidden layer with Radial Basis Function
(RBF) units is presented. The learning process of this type of network can take advantage of …

Assessing the impact of input features in a feedforward neural network

W Wang, P Jones, D Partridge - Neural Computing & Applications, 2000 - Springer
For a variety of reasons, the relative impact of neural-net inputs on the output of a network's
computation is valuable information to obtain. In particular, it is desirable to identify the …