In this method of image segmentation, we focus on the idea that edges define boundaries and that regions are contained within those edges. The algorithm used in this program proceeds as follows:
Image Segmentation Outline
The edge detection scheme left us with a grayscale image that had bright intensities for strong edges, lower intensities for weaker edges, and black for areas with no edges. Thus, the next phase of the project was to determine which of the `edge' points in the grayscale image were in fact edges. The most straightforward method of determining which points to keep is simple thresholding: points with intensities above the threshold are kept as edges and the rest are thrown out. This is a good method on the surface, but it required a little adaptation to work on a wide range of images.
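The basic thresholding step can be sketched as follows. This is a minimal illustration, not the original implementation; the function name and the use of nested lists for the image are our own choices, and edges are marked 255 with non-edges 0.

```python
def threshold_edges(image, threshold):
    """Keep pixels at or above the threshold as edges (255); discard the rest (0)."""
    return [[255 if px >= threshold else 0 for px in row] for row in image]

# A small gradient-magnitude image with one strong vertical edge.
binary = threshold_edges([[10, 200, 12],
                          [ 8, 220, 15],
                          [11, 190,  9]], 128)
```

Only the bright center column survives the threshold; the weaker responses are discarded.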
Due to the variety of images that might need to be segmented, we determined that this threshold value needed to be adaptive in some way. Very noisy images produced too many false edge points, while very smooth or sparse images might not have enough edge points to create the enclosed areas necessary to grow regions. We handled this problem in a simplistic yet fairly effective manner. Since most of the images we were using had been scaled down to the order of 256x256 for ease of computation, we estimated how many of those pixels would typically be considered `edge' pixels. We found that for most images, roughly 20% to 35% of the pixels were edges. Thus, we iteratively adjusted the threshold up or down until the number of edge pixels in the edge-detected image fell into this range. This worked well for most images, though particularly sparse images remained troublesome. We added a conditional that widens the allowable range when an image appears to have few edges; regardless, the algorithm is fully adaptive, and nothing but the input images changed between trials.
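The iterative adjustment might look something like the sketch below. It is an assumption-laden reconstruction, not the original code: the starting threshold of 128, the step size of 1, and the oscillation check are our own choices, and the sparse-image conditional from the text is omitted.

```python
def adaptive_threshold(image, low=0.20, high=0.35):
    """Nudge the threshold up or down until the fraction of pixels kept
    as edges falls within [low, high] (the 20%-35% range above).  The
    oscillation check stops the loop if the fraction jumps straight
    across the target range between two adjacent thresholds."""
    pixels = [px for row in image for px in row]
    n = len(pixels)
    t, last_move = 128, 0  # start midway through the 0-255 range
    while 0 < t < 255:
        frac = sum(1 for px in pixels if px >= t) / n
        if low <= frac <= high:
            break                        # fraction is in range: done
        move = 1 if frac > high else -1  # too many edges -> raise threshold
        if move == -last_move:
            break                        # oscillating: accept current t
        t += move
        last_move = move
    return t

# 25% of these pixels are bright, so the initial guess already fits the range.
t = adaptive_threshold([[200, 10], [10, 10]])
```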
After obtaining the thresholded edge image, it was necessary to filter the result to remove speckle noise and other small artifacts that might be mistaken for regions. Through trial and error, we determined that two passes with a 3x3 median filter did the trick.
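A minimal sketch of the two-pass median filtering, under the assumption that border pixels are simply left unchanged (the original's border handling is not specified):

```python
def median3x3(image):
    """One pass of a 3x3 median filter; border pixels are left unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(image[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # median of the 9 sorted values
    return out

def despeckle(image):
    """Two median passes, the count found to work well by trial and error."""
    return median3x3(median3x3(image))

# A lone bright speckle on a dark background is removed entirely.
speckled = [[0] * 5 for _ in range(5)]
speckled[2][2] = 255
clean = despeckle(speckled)
```

A single isolated pixel can never be the median of its 3x3 window, which is exactly why the median filter removes speckle without blurring true edge lines the way an averaging filter would.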
Once that is done, the real fun begins. We must iteratively pass through the image, giving each separate region a different grayscale value. The method chosen here is to look at a pixel and ask, "are any of the eight nearest neighbors a shade of gray besides 0 or 1 (out of 256)?" If the answer is yes, we assign the pixel being examined the same shade of gray as its colored neighbor. If not, we assign it a random shade of gray. Using a random shade reduces the chance that neighboring regions will be given similar values, so there will be contrast and the regions will be easily distinguishable. We pass through the image four times, beginning a pass in each of the four corners.
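The four-corner coloring pass can be sketched as below. This is our reconstruction, not the original code: we assume edges are marked 1 and unassigned pixels 0 (matching the "besides 0 or 1" test in the text), and the seeded random generator is only there to make the sketch reproducible.

```python
import random

EDGE, UNLABELED = 1, 0  # pixel markers: 1 = edge, 0 = not yet in a region

def label_regions(image, rng=None):
    """Four raster passes, one beginning at each corner.  An unlabeled
    pixel copies the gray value of any already-labeled 8-neighbor; if
    none exists, it receives a random shade in 2..255."""
    rng = rng or random.Random(0)  # seeded for reproducibility in this sketch
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]
    scans = [  # scan orders starting from each of the four corners
        [(y, x) for y in range(h) for x in range(w)],
        [(y, x) for y in range(h) for x in range(w - 1, -1, -1)],
        [(y, x) for y in range(h - 1, -1, -1) for x in range(w)],
        [(y, x) for y in range(h - 1, -1, -1) for x in range(w - 1, -1, -1)],
    ]
    for scan in scans:
        for y, x in scan:
            if img[y][x] != UNLABELED:
                continue  # already colored, or an edge pixel
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and img[ny][nx] not in (EDGE, UNLABELED)):
                        img[y][x] = img[ny][nx]  # copy the neighbor's shade
                        break
                if img[y][x] != UNLABELED:
                    break
            if img[y][x] == UNLABELED:
                img[y][x] = rng.randrange(2, 256)  # start a new region shade
    return img

# Two regions separated by a vertical edge line (the column of 1s).
labeled = label_regions([[0, 0, 1, 0, 0],
                         [0, 0, 1, 0, 0],
                         [0, 0, 1, 0, 0]])
```

After labeling, each side of the edge line carries one uniform gray value while the edge pixels themselves remain marked 1 for the merging step that follows.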
Finally, with the region determination done, all that remains is to remove the edge pixels separating the regions. We considered an algorithm that examined the original image and tried to decide which side of the edge each pixel should belong to, but because doing so is computationally expensive, we determined that the easiest way was to simply merge the edges into all the regions they touched until no edge pixels remained.
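The merging step might be sketched as follows. As a simplification of the text's description, each edge pixel here is absorbed by the first region it touches in scan order; repeated sweeps handle edge pixels that initially touch only other edge pixels.

```python
def merge_edges(img, edge=1):
    """Absorb edge pixels into the regions they touch, sweeping repeatedly
    until no edge pixel remains (pixels surrounded only by edges wait for
    a later sweep, once their neighbors have been absorbed)."""
    h, w = len(img), len(img[0])
    img = [row[:] for row in img]
    changed = True
    while changed:
        changed = False
        for y in range(h):
            for x in range(w):
                if img[y][x] != edge:
                    continue
                # Non-edge 8-neighbors of this edge pixel, in scan order.
                neighbors = [img[y + dy][x + dx]
                             for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                             if 0 <= y + dy < h and 0 <= x + dx < w
                             and img[y + dy][x + dx] != edge]
                if neighbors:
                    img[y][x] = neighbors[0]
                    changed = True
    return img

# The edge column between the two regions is merged away.
merged = merge_edges([[5, 1, 9],
                      [5, 1, 9]])
```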
One improvement to this algorithm would be a more optimal edge detection scheme that leaves thinner edges than the gradient detector used here while being more resistant to noise than a Laplacian edge detector; a multiscale edge detector is one possibility. Another improvement would be a better thresholding scheme than forcing the number of `edge' pixels into a fixed range; again, a multiscale edge detector might eliminate much of the need for thresholding. In the future, we could also combine the edge detection results found here with independently obtained region growing results (such as those from the region growing algorithm also discussed in this report) in order to optimize the region determination process. Finally, it would be nice to add a texture/color recognition feature to this algorithm so that similar yet disjoint regions could be classified as `the same' and assigned the same gray-level value.