A multi-layer network of neurons with a single hidden layer can be used to represent only Boolean functions to any desired precision.
Answers
Answer:
Multi-Layer Perceptron and Backpropagation. An MLP is composed of one input layer, one or more layers of linear threshold units (LTUs) called hidden layers, and one final layer of LTUs called the output layer. Every layer except the output layer includes a bias neuron and is fully connected to the next layer.
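For illustration, here is a minimal Python sketch (an addition, not part of the answer above) of such a network: a single hidden layer of LTUs computing OR and NAND, followed by an output LTU that ANDs them, represents the Boolean function XOR exactly. The `ltu` helper and the particular weights are assumptions chosen for this example.

```python
# A single-hidden-layer MLP of linear threshold units (LTUs) that represents
# the Boolean function XOR exactly. The helper name `ltu` and the specific
# weights are assumptions chosen for this illustration.
import numpy as np

def ltu(x, w, b):
    """Linear threshold unit: outputs 1 if the weighted sum plus bias is >= 0."""
    return int(np.dot(w, x) + b >= 0)

def xor_mlp(x1, x2):
    x = np.array([x1, x2])
    # Hidden layer: two LTUs computing OR and NAND of the inputs.
    h_or = ltu(x, np.array([1.0, 1.0]), -1.0)     # fires when x1 + x2 >= 1
    h_nand = ltu(x, np.array([-1.0, -1.0]), 1.5)  # fires when x1 + x2 <= 1
    # Output layer: AND of the two hidden units gives XOR.
    return ltu(np.array([h_or, h_nand]), np.array([1.0, 1.0]), -2.0)

# Truth table check
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_mlp(a, b))
```

XOR is chosen here because it is not linearly separable, so no single LTU can compute it, while one hidden layer of LTUs suffices.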
Answer: Boolean function
Explanation:
- The representation of Boolean functions using artificial neural networks points out three significant results.
- First, using a polynomial as the transfer function, a single neuron can represent a non-monotone Boolean function, for example XOR (see the sketch below).
- Second, the number of inputs to the neural network can be reduced if the binary values of the Boolean variables are encoded.
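As a sketch of the first result (again an addition; the polynomial p(s) = 2s - s^2 and the unit weights are assumptions, not taken from the original source), a single neuron whose transfer function is a polynomial of its weighted input sum can compute XOR, which is a non-monotone Boolean function.

```python
# A single neuron with a polynomial transfer function that computes XOR.
# Weights w1 = w2 = 1, bias 0, and the polynomial p(s) = 2s - s^2 are
# assumptions chosen so that p(0) = 0, p(1) = 1, p(2) = 0.
def poly_neuron(x1, x2, w1=1, w2=1, bias=0):
    s = w1 * x1 + w2 * x2 + bias   # weighted sum of the inputs
    return 2 * s - s * s           # polynomial transfer function

# Truth table check: matches XOR, a non-monotone Boolean function.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", poly_neuron(a, b))
```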