Artificial Neural Networks
by Tim Dorney
Feature Extraction Approach

It was determined from our test images that a 20x20 pixel area would encompass the majority of eyes. For some of the images this was exactly the right size; for others the eyes occupied a smaller portion of the 20x20 area, and in those cases the features surrounding the eyes were also included.

To generate the training sets, all test images were normalized between 0 and 1. This was required because the activation function has a range limited to 0 and 1. Eyes from all the test images were "cut out" and centered within the 20x20 grid. A 21st row was added with its first element set to 1; a "1" represented the desired output (target) for the learning algorithm. Other 20x20 areas that were not eyes were also extracted from the images. These included eyes shifted off center, areas containing skin-tone features, and background features. In these cases the first element of the 21st row was written with a "0". An ANN requires both positive and negative reinforcement.

After the ANN was trained on 26 test patterns to a final error of 1e-09, the test images were scanned with the ANN to find locations where the network produced a high output, which should have indicated where the eyes were located. The eyes' locations were identified, but larger values also appeared at sharp-contrast areas in the images. Additional 20x20 pixel areas were "cut out" at these false-positive locations to provide further negative reinforcement, which improved the results. A sample of the training images is shown below.
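The patch-extraction and scanning steps described above can be illustrated with a short sketch. The Python/NumPy code below is a minimal illustration, not the original implementation: the names normalize, make_pattern, and scan, the predict callback standing in for the trained network's forward pass, and the 0.9 detection threshold are all assumptions. The training itself (26 patterns to a final error of 1e-09) is not shown.

import numpy as np

PATCH = 20  # 20x20 pixel window, as described in the text

def normalize(image):
    """Scale pixel values into [0, 1] to match the activation function's range."""
    image = image.astype(np.float64)
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo) if hi > lo else np.zeros_like(image)

def make_pattern(image, row, col, target):
    """Cut a 20x20 area centered at (row, col) and append the target label.

    The pattern is stored as a 21x20 array: rows 0-19 hold the pixels and
    row 20 (the "21st row") carries the desired output in its first element,
    1 for an eye and 0 for a non-eye area. Assumes (row, col) lies at least
    10 pixels from the image border.
    """
    half = PATCH // 2
    patch = image[row - half:row + half, col - half:col + half]
    labelled = np.zeros((PATCH + 1, PATCH))
    labelled[:PATCH, :] = patch
    labelled[PATCH, 0] = target
    return labelled

def scan(image, predict, stride=1, threshold=0.9):
    """Slide the 20x20 window over the image and report high-output locations.

    `predict` is assumed to be the trained network's forward pass, mapping a
    flattened 400-element patch to a single output in [0, 1].
    """
    hits = []
    half = PATCH // 2
    for r in range(half, image.shape[0] - half, stride):
        for c in range(half, image.shape[1] - half, stride):
            patch = image[r - half:r + half, c - half:c + half]
            if predict(patch.ravel()) > threshold:
                hits.append((r, c))
    return hits

In this sketch, the false positives returned by scan correspond to the sharp-contrast areas mentioned above; cutting those locations out with make_pattern(..., target=0) and retraining is the additional negative-reinforcement step the text describes.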
jchen@micro.ti.com
Last updated on May 4, 1997