Introduction

Image segmentation is a useful tool in many fields, including industry, health care, and astronomy. Segmentation is conceptually simple: looking at an image, a person can readily tell which regions it contains. Is it a building, a person, a cell, or simply background? Visually it is easy to determine what is a region of interest and what is not. Doing so with a computer algorithm, on the other hand, is not so easy. How do you determine what defines a region? What features distinguish one region from another? What determines how many regions a given image contains?

A few main criteria must hold for an image to be properly segmented. The regions must be disjoint, because a single point cannot belong to two different regions. The regions must span the entire image, because every point has to belong to some region. Each region must be defined by some property that holds for all of its points, and to ensure that the regions are well defined, that property must not hold for any combination of two or more regions; otherwise what appear to be separate regions are really fragments of a single region, or one region is really several merged together. If these criteria are met, then the image is truly segmented into regions. This paper discusses two different region determination techniques: one that focuses on edge detection as its main determination characteristic and another that uses region growing to locate separate areas of the image.
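As a concrete illustration of the region-growing idea, the following minimal sketch grows a single region outward from a seed pixel, adding any 4-connected neighbor whose intensity is close to the seed's. The intensity tolerance `tol`, the choice of 4-connectivity, and the function name `grow_region` are assumptions for illustration, not the specific method developed in this paper.

```python
from collections import deque

def grow_region(image, seed, tol=10):
    """Grow one region from `seed` via breadth-first search, adding
    4-connected neighbors whose intensity is within `tol` of the seed.
    (Illustrative sketch; the similarity predicate is an assumption.)"""
    rows, cols = len(image), len(image[0])
    sr, sc = seed
    seed_val = image[sr][sc]
    region = {seed}
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        # Examine the four edge-adjacent neighbors of (r, c).
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

# A toy 4x4 grayscale image: a bright block in the upper-left corner
# against a dark background.
img = [
    [200, 200,  10,  10],
    [200, 200,  10,  10],
    [ 10,  10,  10,  10],
    [ 10,  10,  10,  10],
]
print(sorted(grow_region(img, (0, 0))))
# → [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Running the same procedure from a seed in each unclaimed area, and assigning every pixel to exactly one grown region, yields regions that are disjoint and span the image, matching the criteria above.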
