Introduction


The study of feature extraction spans a wide range of disciplines and potential applications. Even a brief literature search turns up work in military applications (target detection), robotics (computer vision), medical imaging (tumor detection), character recognition (including handwriting), and video effects (control-point selection for image warping).

A review of the literature reveals that approaches to feature extraction vary widely, and many methods are tailored to a particular application. Much of the available discussion has been published in conference proceedings rather than journal articles, suggesting that feature extraction is still a young, dynamic field.

Many feature extraction problems involve the detection of a desired target despite noise in the image. The detection of a specific military vehicle, for example, generally relies upon the known, consistent form of the vehicle and must allow for degradations such as partial obscuration by trees or clouds and varying aspect angles.

Other extraction problems involve identifying a member of a class of features with some natural variation. Such problems add the complexity of defining a class from a subset of its members and then recognizing other members by extrapolation. Handwritten character recognition is a good example of this type of problem. Applications in this vein must also reject noise artifacts.

Our project involves the detection of members of a class. Specifically, we attempt to develop a method of locating the eyes in an image of a human face. Such a system would be useful in security applications and in the automatic selection of control points for digital image warping.

We explored and compared three approaches. First, we developed our own approach based on the matched filter concept, extending it by contrast-enhancing the filter and by pre-processing the image with the discrete wavelet transform to remove low-frequency (i.e., coarse-scale) edges. Next, we tested a similar matched-filter approach that uses straightforward cross-correlation followed by a post-processing normalization. Finally, we constructed and trained an artificial neural network to recognize the human eye.
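
As a rough illustration of the matched-filter idea, the sketch below (in Python with NumPy; it is not the code used in this project) computes a normalized cross-correlation map between a grayscale face image and an eye template, with the peaks of the map marking candidate eye locations. The contrast-enhancement and wavelet pre-processing steps described above are omitted, and the function and variable names are our own.

    import numpy as np

    def normalized_cross_correlation(image, template):
        # Slide the template over the image and return a map of
        # normalized correlation scores in [-1, 1].
        th, tw = template.shape
        ih, iw = image.shape
        t = template - template.mean()
        t_norm = np.sqrt((t ** 2).sum())
        scores = np.zeros((ih - th + 1, iw - tw + 1))
        for r in range(scores.shape[0]):
            for c in range(scores.shape[1]):
                patch = image[r:r + th, c:c + tw]
                p = patch - patch.mean()
                p_norm = np.sqrt((p ** 2).sum())
                if p_norm > 0 and t_norm > 0:
                    scores[r, c] = (p * t).sum() / (p_norm * t_norm)
        return scores

    # Example usage: the peak of the score map is the best candidate
    # eye location (in practice, take several local maxima so that
    # both eyes can be found).
    # scores = normalized_cross_correlation(face_image, eye_template)
    # row, col = np.unravel_index(np.argmax(scores), scores.shape)

The normalization by the patch and template energies is what makes the score insensitive to local brightness and contrast, which is why both matched-filter approaches above include a normalization step of some kind.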


jchen@micro.ti.com
tdorney@ti.com
sparr@owlnet.rice.edu
