Artificial Neural Networks
by Tim Dorney
Feedforward Network

Several models of artificial neural networks have been suggested by various researchers. Each architecture offers a method of connecting the inputs and outputs of the processing elements. In the feedforward architecture, there is a series of layers of interconnected processing elements. Processing elements that are not directly connected to either the input or the output are considered "hidden". (The input layer is considered to be composed of linear activation functions.) The number of processing elements in the hidden layers is not constrained. It has been shown, however, that a single hidden layer is sufficient to approximate a nonlinear function with arbitrary accuracy [Hornik, et al.] [Cybenko].
Figure 3. A Schematic of a Semi-Linear Feedforward Connectionist Network

In reference to Figure 3, the input layer is a set of linear nodes that serve as a fan-out for the information contained in an input pattern. The hidden and output layers contain "sigmoidal" processing elements like those previously described. The output of each hidden-layer node is linked to each of the output-layer nodes, which generate the output pattern. This architecture is fundamental for supervised learning, but an inordinate amount of time is usually required to reach an acceptable result. Improvements have been demonstrated by a modification of the architecture in the Functional-Link Net.
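The forward pass described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the original text: the layer sizes, weights, and function names are invented for the example. The input layer is linear (it only fans the pattern out), while each hidden and output node applies the sigmoid to a weighted sum of its inputs plus a bias.

```python
import math

def sigmoid(z):
    # The "sigmoidal" activation used by hidden and output nodes.
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # One layer: each node takes a weighted sum of its inputs,
    # adds its bias, and passes the result through the sigmoid.
    return [sigmoid(sum(w * v for w, v in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x, W_hidden, b_hidden, W_out, b_out):
    # Input layer is linear: x fans out unchanged to every hidden node.
    hidden = layer(x, W_hidden, b_hidden)
    # Every hidden output feeds every output node.
    return layer(hidden, W_out, b_out)

# Toy shapes (illustrative only): 2 inputs -> 3 hidden nodes -> 1 output.
W_hidden = [[0.5, -0.4], [0.3, 0.8], [-0.6, 0.1]]
b_hidden = [0.0, 0.1, -0.1]
W_out = [[1.0, -1.0, 0.5]]
b_out = [0.2]

y = forward([0.7, 0.2], W_hidden, b_hidden, W_out, b_out)
print(y)  # a single output value between 0 and 1
```

Because every node's output passes through the sigmoid, each output is bounded in (0, 1); training (e.g. by backpropagation) would adjust the weights and biases, which are fixed here.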
jchen@micro.ti.com
Last updated on May 3, 1997