A network needs one layer of input neurons and one layer of output neurons, so it has at least two layers. If the problem is not linearly separable (which, as we have seen, is quite likely), at least one hidden layer of neurons is required as well. It can be proven mathematically that an MLP with a single hidden layer is already capable of approximating arbitrary functions to any desired accuracy. However, it is necessary to discuss not only the representability of a problem by a perceptron but also its learnability.
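As an illustration, the XOR function is the classic example of a problem that is not linearly separable, so a single-layer perceptron cannot represent it, while an MLP with one hidden layer can learn it. The following is a minimal sketch in plain NumPy (the layer sizes, learning rate, and iteration count are arbitrary choices for this example, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: not linearly separable, so a hidden layer is needed
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2 inputs -> 4 hidden neurons -> 1 output
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

lr = 1.0
losses = []
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)        # hidden layer activations
    out = sigmoid(h @ W2 + b2)      # output layer activations
    losses.append(float(np.mean((out - y) ** 2)))
    # backpropagate the squared-error gradient through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print("initial loss:", losses[0], "final loss:", losses[-1])
```

The hidden layer gives the network the capacity to carve the input space with more than one decision boundary, which is exactly what XOR requires; whether gradient descent actually finds a good solution (learnability) is a separate question from whether such a solution exists (representability).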