WHY CONSIDER CONNECTIONISM IN WRITING A TEXT?
Connectionism is a movement in cognitive science that hopes to explain intellectual abilities using artificial neural networks (also known as “neural networks” or “neural nets”). Neural networks are simplified models of the brain composed of large numbers of units (the analogs of neurons) together with weights that measure the strength of connections between the units. These weights model the effects of the synapses that link one neuron to another. Experiments on models of this kind have demonstrated an ability to learn such skills as face recognition, reading, and the detection of simple grammatical structure.
Philosophers have become interested in connectionism because it promises to provide an alternative to the classical theory of the mind: the widely held view that the mind is something akin to a digital computer processing a symbolic language. Exactly how and to what extent the connectionist paradigm constitutes a challenge to classicism has been a matter of hot debate in recent years.
1. A Description of Neural Networks
2. Neural Network Learning and Backpropagation
3. Samples of What Neural Networks Can Do
4. Strengths and Weaknesses of Neural Network Models
5. The Shape of the Controversy between Connectionists and Classicists
6. Connectionist Representation
7. The Systematicity Debate
8. Connectionism and Semantic Similarity
9. Connectionism and the Elimination of Folk Psychology
10. Predictive Coding Models of Cognition
11. Deep Learning: Connectionism’s New Wave
Bibliography
Academic Tools
Other Internet Resources
Related Entries
1. A Description of Neural Networks
A neural network consists of a large number of units joined together in a pattern of connections. Units in a net are usually segregated into three classes: input units, which receive information to be processed; output units, where the results of the processing are found; and units in between called hidden units. If a neural net were to model the whole human nervous system, the input units would be analogous to the sensory neurons, the output units to the motor neurons, and the hidden units to all other neurons.
Here is an illustration of a simple neural net:
[Figure: a feed-forward network drawn as three columns of units, with seven input units, four hidden units, and three output units. Each unit in one column is connected by a line to every unit in the next column.]
Each input unit has an activation value that represents some feature external to the net. An input unit sends its activation value to each of the hidden units to which it is connected. Each of these hidden units calculates its own activation value depending on the activation values it receives from the input units. This signal is then passed on to output units or to another layer of hidden units. Those hidden units compute their activation values in the same way, and send them along to their neighbors. Eventually the signal at the input units propagates all the way through the net to determine the activation values at all the output units.
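The propagation just described can be sketched in a few lines of Python. The network shape (seven input units, four hidden units, three output units) matches the figure above; the random weights here are only a stand-in, since in a real network the weights are learned rather than chosen arbitrarily, and the sigmoid activation function is one common choice among several.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    """Squash a unit's summed input into an activation value between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(activations, weights):
    """Compute each downstream unit's activation from the weighted sum
    of the activations it receives from the upstream units."""
    return [sigmoid(sum(w * a for w, a in zip(ws, activations)))
            for ws in weights]

# Illustrative (not learned) weights for the 7-4-3 net in the figure:
w_hidden = [[random.uniform(-1, 1) for _ in range(7)] for _ in range(4)]
w_output = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]

inputs = [1, 0, 1, 0, 1, 0, 1]            # activation values of the input units
hidden = layer_forward(inputs, w_hidden)  # hidden units fire next
outputs = layer_forward(hidden, w_output) # signal reaches the output units

print(outputs)  # three activation values, each strictly between 0 and 1
```

Running the forward pass layer by layer in this way is exactly the propagation described above: the signal at the input units determines the hidden activations, which in turn determine the activations at all the output units.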
The pattern of activation set up by a net is determined by the weights, or strength of connections between the units. Weights may be either