
Significance of the hidden layer neurons in multi-layer feed-forward neural networks


Three sentence version:

Each layer can apply any function you want to the previous layer (usually a linear transformation followed by a squashing nonlinearity).

The hidden layers' job is to transform the inputs into something that the output layer can use.

The output layer transforms the hidden layer activations into whatever scale you wanted your output to be on.

Like you're 5:

If you want a computer to tell you if there's a bus in a picture, the computer might have an easier time if it had the right tools.

So your bus detector might be made of a wheel detector (to help tell you it's a vehicle) and a box detector (since the bus is shaped like a big box) and a size detector (to tell you it's too big to be a car). These are the three elements of your hidden layer: they're not part of the raw image, they're tools you designed to help you identify buses.

If all three of those detectors turn on (or perhaps if they're especially active), then there's a good chance you have a bus in front of you.

Neural nets are useful because there are good tools (like backpropagation) for building lots of detectors and putting them together.

Like you're an adult:

A feed-forward neural network applies a series of functions to the data. The exact functions will depend on the neural network you're using: most frequently, these functions each compute a linear transformation of the previous layer, followed by a squashing nonlinearity. Sometimes the functions will do something else (like computing logical functions, as in the XOR example below, or averaging over adjacent pixels in an image). So the roles of the different layers depend on what functions are being computed, but I'll try to be very general.

Let's call the input vector x, the hidden layer activations h, and the output activation y. You have some function f that maps from x to h and another function g that maps from h to y.

So the hidden layer's activation is f(x) and the output of the network is g(f(x)).
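
To make that composition concrete, here's a minimal sketch in numpy. The weight shapes, the tanh squashing function, and the sigmoid output are all illustrative choices, not anything prescribed above:

```python
import numpy as np

# A minimal sketch of a one-hidden-layer feed-forward pass.
# All weights here are random placeholders, purely for illustration.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # input (4 features) -> hidden (3 units)
b1 = np.zeros(3)
W2 = rng.normal(size=(1, 3))   # hidden (3 units) -> output (1 unit)
b2 = np.zeros(1)

def f(x):
    """Hidden layer: a linear transformation followed by a squashing nonlinearity."""
    return np.tanh(W1 @ x + b1)

def g(h):
    """Output layer: rescale hidden activations onto the desired output range."""
    return 1 / (1 + np.exp(-(W2 @ h + b2)))  # sigmoid keeps the output in (0, 1)

x = np.array([0.5, -1.0, 2.0, 0.0])
h = f(x)      # the hidden layer's activation
y = g(f(x))   # the output of the whole network
```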

Why have two functions (f and g) instead of just one?

If each individual function is limited in complexity, then the composition g(f(x)) can compute things that neither f nor g could compute alone.

An example with logical functions:

For example, if we only allow f and g to be simple logical operators like "AND", "OR", and "NAND", then we can't compute a function like "XOR" with any single one of them. On the other hand, we can compute "XOR" if we're willing to layer these functions on top of each other:

First layer functions:

Make sure that at least one element is "TRUE" (using OR)

Make sure that they're not all "TRUE" (using NAND)

Second layer function:

Make sure that both of the first-layer criteria are satisfied (using AND)

The network's output is just the result of this second function. The first layer transforms the inputs into something that the second layer can use so that the whole network can perform XOR.
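
As a quick check of this construction, here's a tiny Python sketch (the function name and the truth-table loop are just for illustration):

```python
# A sketch of the two-layer XOR construction described above:
# the first layer computes OR and NAND of the inputs, the second layer ANDs them.
def xor(a: bool, b: bool) -> bool:
    or_gate = a or b              # first layer: at least one input is TRUE
    nand_gate = not (a and b)     # first layer: not all inputs are TRUE
    return or_gate and nand_gate  # second layer: both criteria satisfied

for a in (False, True):
    for b in (False, True):
        print(a, b, xor(a, b))    # True exactly when a != b, i.e. XOR
```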

An example with images:

Slide 61 from this talk (also available here as a single image) shows one way to visualize what the different hidden layers in a particular neural network are looking for. The first layer looks for short pieces of edges in the image: these are very easy to find from raw pixel data, but they're not very useful by themselves for telling you whether you're looking at a face, a bus, or an elephant.

The next layer composes the edges: if the edges from the bottom hidden layer fit together in a certain way, then one of the eye detectors in the middle of the left-most column might turn on. It would be hard to make a single layer that was so good at finding something so specific from the raw pixels: eye detectors are much easier to build out of edge detectors than out of raw pixels. The next layer up composes the eye detectors and the nose detectors into faces. In other words, these units light up when the eye detectors and nose detectors from the previous layer turn on with the right patterns. They are very good at looking for particular kinds of faces: if one or more of them lights up, then your output layer should report that a face is present.


"Why are some layers in the input layer connected to the hidden layer and some are not?"

The disconnected nodes in the network are called "bias" nodes. There's a really nice explanation here. The short answer is that they're like intercept terms in regression.
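
To illustrate the intercept analogy: each unit computes a weighted sum plus a constant, and that constant (the bias) shifts the activation threshold independently of the inputs, just as an intercept shifts a regression line. The weights below are made up:

```python
import numpy as np

# A sketch of why a bias node acts like a regression intercept.
w = np.array([0.8, -0.3])  # illustrative weights on two inputs
b = 0.5                    # the "bias node" contribution (the intercept)

def unit(x):
    # Without b, the unit's pre-activation would always be 0 when x is all zeros;
    # b lets the unit fire (or not) regardless of the inputs.
    return np.tanh(w @ x + b)

print(unit(np.array([0.0, 0.0])))  # nonzero output, driven purely by the bias
```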

"Where do the "eye detector" pictures in the image example come from?"

I haven't double-checked the specific images I linked to, but in general, these visualizations show the set of pixels in the input layer that maximize the activity of the corresponding neuron. So if we think of the neuron as an eye detector, this is the image that the neuron considers to be most eye-like. Folks usually find these pixel sets with an optimization (hill-climbing) procedure.
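
Here's a rough sketch of that hill-climbing idea, using a tiny made-up "network": perturb a random input and keep only changes that raise a chosen neuron's activity. The shapes, step size, and iteration count are all arbitrary:

```python
import numpy as np

# Hill-climbing toward the input that maximizes one hidden neuron's activity.
rng = np.random.default_rng(1)
W = rng.normal(size=(8, 8))        # pretend 8-pixel input, 8 hidden units

def activation(img, unit=0):
    return np.tanh(W @ img)[unit]  # activity of one hidden neuron

img = rng.normal(size=8)
best = activation(img)
for _ in range(5000):
    candidate = img + rng.normal(scale=0.05, size=8)  # small random perturbation
    candidate = np.clip(candidate, -1.0, 1.0)         # keep pixels in a valid range
    score = activation(candidate)
    if score > best:               # keep the step only if the activity rises
        img, best = candidate, score

# `img` now approximates the input this neuron considers "most eye-like".
```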


