
Artificial Neural Networks

by Tim Dorney

Introduction

Throughout history, the brain has been a focus for a diverse group of researchers, including psychologists, philosophers, biologists, mathematicians, computer scientists and engineers. Each has tried to distill some aspect of the brain's characteristics into a formal model. Recently, research in this area has advanced with the introduction of artificial neural networks (ANNs). ANNs offer a learning ability that is difficult to achieve by conventional means: a model-free estimate of the environment can be established, yielding a system that is both adaptive and robust. One drawback of the technology, however, is the considerable time required for training.

ANNs are massively parallel arrays of simple, interconnected processing elements. The objective of research is to explore the functionality of a biological information processing system, so that similar characteristics can be exhibited in artificial networks. There is a sharp contrast, however, between true biological systems and their artificial counterparts. In a biological system, the neural activity is highly parallel, with millions of neurons (processing elements) activated at any given instant. ANNs, although able to process information several orders of magnitude faster than biological systems, are limited by the number of processing elements and the serial programming required by current computer technology. These limitations have caused research to focus on the issues of ANN architectures and learning schemes.
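To make the notion of a simple processing element concrete, the following is a minimal sketch (not from the original text; the function name, weights, and the choice of a sigmoid activation are illustrative assumptions): each element forms a weighted sum of its inputs plus a bias and passes the result through a nonlinear activation.

```python
import math

def processing_element(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias,
    passed through a sigmoid activation (an illustrative choice)."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# Example: an element with two inputs.
y = processing_element([1.0, 0.5], [0.8, -0.4], 0.1)
```

A network is then built by wiring many such elements together, with the output of one layer feeding the inputs of the next.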

To date, four categories of neural net computing have been identified: supervised learning, self-organization, optimization and associative memory. These topics are covered by both Pao and Takefuji. Supervised learning, the primary interest in this work, is a method of teaching a neural network to map a set of input patterns to a known set of output patterns. In essence, ANNs can provide a nonlinear, functional mapping from one space to another.
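The idea of supervised learning can be sketched with a single sigmoid element trained by gradient descent (the delta rule) to reproduce a known input-to-output mapping, here the logical AND function. This is a minimal illustration under assumed parameter choices (learning rate, epoch count), not the specific algorithm used in this work.

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

# Training set: input patterns paired with known target outputs (logical AND).
patterns = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights
b = 0.0          # bias
rate = 0.5       # learning rate (illustrative value)

for _ in range(5000):
    for (x1, x2), target in patterns:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = target - y
        grad = err * y * (1 - y)      # derivative of sigmoid times error
        w[0] += rate * grad * x1      # delta-rule weight updates
        w[1] += rate * grad * x2
        b += rate * grad

# After training, the element reproduces the target mapping.
outputs = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in patterns]
```

Multi-layer networks generalize this scheme, using backpropagation to assign error to hidden elements and thereby realize nonlinear mappings that a single element cannot.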

ANNs have the unique quality that inferences can be made from information with little or no understanding of the origins of the supplied patterns. This ability makes ANNs applicable where traditional techniques cannot accurately model the environment. Also, ANNs can adjust to the introduction of new patterns, which allows them not only to learn but to adapt. Applications such as signal processing, pattern recognition and control have all employed ANNs to enhance the quality of results.


jchen@micro.ti.com
tdorney@ti.com
sparr@owlnet.rice.edu

Last updated on May 3, 1997
Copyright © 1997 All rights reserved.