
Artificial Neural Networks

by Tim Dorney

Functional-Link Net

One architecture that has shown fast learning ability is the Functional-Link Net (FLN). Sobajic, Lee, and Zwingelstein have all demonstrated numerous examples of its abilities, and theoretical foundations of the FLN can be found in Pao and Hornik. Starting from the general neural network structure with multiple hidden layers, Hornik introduced a squashing function that removes the need for all hidden layers except one. Pao expanded this argument by bringing the hidden-layer outputs to the input of the network. In Hornik's work, the final expression for the sum of the inputs to the output-layer processing elements is

net_k = w_k1*sigma_1 + w_k2*sigma_2 +...+ w_kj*sigma_j +...+ 
        w_kJ*sigma_J                                                 (6)

where

sigma_j = F{w_j1*o_1 + w_j2*o_2 +...+ w_ji*o_i +...+ 
          w_jI*o_I + theta_j}                                        (7)

where o_i is the i-th input to the network, w_ji is the weight from network input i to hidden processing element j, theta_j is the threshold weight of hidden processing element j, F{x} is the activation function for the hidden-layer nodes, and w_kj is the weight from the output of hidden element j to the output layer. Pao has shown that the outputs of the hidden layer can be assigned "arbitrarily" chosen values rather than being calculated from Hornik's squashing function. The advantage is clear when the time to learn a system depends on the number of processing elements that must be computed: fewer computed elements mean faster learning.
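
To make equations (6) and (7) concrete, the following is a minimal sketch of the forward pass for a single-hidden-layer network in Python. The logistic squashing function, the layer sizes, and the random weight values are illustrative assumptions rather than details from the text.

import numpy as np

def squash(x):
    # F{x}: a logistic squashing function, one common choice
    return 1.0 / (1.0 + np.exp(-x))

def forward(o, W_hidden, theta, W_out):
    # o:        (I,)   network inputs o_i
    # W_hidden: (J, I) weights w_ji from inputs to hidden elements
    # theta:    (J,)   threshold weights theta_j of the hidden elements
    # W_out:    (K, J) weights w_kj from hidden outputs to the output layer
    sigma = squash(W_hidden @ o + theta)   # equation (7)
    net = W_out @ sigma                    # equation (6)
    return net

rng = np.random.default_rng(0)
o = rng.uniform(-1.0, 1.0, size=4)         # I = 4 inputs (assumed sizes)
W_hidden = rng.normal(size=(3, 4))         # J = 3 hidden elements
theta = rng.normal(size=3)
W_out = rng.normal(size=(2, 3))            # K = 2 output elements
print(forward(o, W_hidden, theta, W_out))  # net_k for each output element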

Figure 4. A Schematic Depiction of a Functional-Link Net

In Figure 4, the hidden-layer outputs have been moved to the network inputs. Investigation into removing the ad hoc selection of these new input terms has led to a class of functional enhancements. Joint activation terms multiply various input pattern terms together to form the new inputs. Orthogonal sets using sine and cosine functions of the input pattern terms offer another way to define the functional enhancements; both are sketched below.
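
Both enhancements can be generated mechanically from an input pattern. The following sketch forms the joint activation terms as pairwise products and the orthogonal terms as first-harmonic sine and cosine functions; the choice of pairwise products and a single harmonic is an illustrative assumption.

import itertools
import numpy as np

def joint_activation_terms(o):
    # Multiply pairs of input pattern terms to form new inputs.
    pairs = itertools.combinations(range(len(o)), 2)
    return np.array([o[i] * o[j] for i, j in pairs])

def orthogonal_terms(o):
    # Sine and cosine functions of the input pattern terms.
    return np.concatenate([np.sin(np.pi * o), np.cos(np.pi * o)])

def functional_link_inputs(o):
    # Original inputs plus both functional enhancements, all presented
    # to the input layer as in Figure 4.
    return np.concatenate([o, joint_activation_terms(o), orthogonal_terms(o)])

o = np.array([0.5, -0.25, 0.75])
print(functional_link_inputs(o))  # 3 + 3 + 6 = 12 input terms

In the flat-net form, the enhanced pattern feeds the output layer directly, so only the output-layer weights need to be learned.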

Research continues to investigate various structures in an effort to optimize ANNs for faster and more efficient learning. The future of ANNs points toward structural neural network analysis, an area that will attempt to define the architecture, functional enhancements, and activation functions necessary for a given problem.

