Okay, it’s been a while since the last update. I coded the outline extractor as discussed in my previous post. It works for small objects, but once motion covers most of the screen or lots of little objects appear, it crashes. I think it’s a memory allocation problem related to my use of STL vectors. Anyway, it turns out that it might not be the best method to use, especially due to the image reconstruction problem: if I just get a list of little items (possibly hundreds of them), comparing all of them against each other will be extremely expensive, not to mention quite complex, since a simple vertical/horizontal distance comparison won’t work.
What I am now using is a recursive extraction method that extracts the entire object, even if it is broken up, by using a neighborhood check at each step. Simply put, it searches for a white pixel in the motion frame; once one is found, it creates a new object and adds the pixel’s position to it. It then finds that pixel’s neighbors which are white and adds them, and for each of those neighbors it finds their white neighbors and adds them, and so on. The trick in this algorithm is in the finding of the neighbors…
This is done by checking all the pixels around a pixel within a radius of r pixels. So for a radius of 1, it will check the 8 surrounding pixels (the neighborhood, if you will 😛 ). A radius of 2 increases this neighborhood to 24 pixels, 3 makes it 48 pixels, and so on. By modifying this radius I can now extract whole objects which are made up of lots of closely spaced sections. The image below illustrates this technique; note that the blue dot is the starting point.
This technique extracts multiple objects reasonably quickly, assuming they are not too large. The cost of the algorithm increases dramatically as the number of separate objects or the size of each object grows. It also crashes intermittently when the entire screen is white; I’m assuming it’s a memory allocation error and will look into it.
Pseudo code for the algorithm is as follows (assuming you’ve already found an initial white pixel):
using radius: calculate range (rowStart, rowEnd, colStart, colEnd) to check

for ( r = rowStart; r < rowEnd; r++ ) {
    for ( c = colStart; c < colEnd; c++ ) {
        if ( pixel(r,c) == white ) {
            add it to the object
            set pixel to gray
        }
    }
}

if ( neighbors were found ) {
    foreach ( neighbor found ) {
        // recursion occurs here
        add its neighbors to the object
    }
} else {
    return empty neighbor list
}
I tried optimizing this entire procedure by avoiding the calculation of the neighborhood at each step: I created a massive lookup table mapping each position to its neighbors and looked them up instead of calculating them. But much to my surprise, once I profiled the code, it was significantly slower to look up the neighbors than to calculate them?! I have a feeling it’s got something to do with internal copying of vectors when I return the neighbors. I guess this weekend is gonna be spent staring at a profiler again.
Now, once the extractor is complete, I have to transform the object into a suitable format to input into my neural network. I was thinking of the centroid method I described in my previous post, but my dad feels it would be much better to run the object through a Radon transform and use the resulting data for the neural network. I’m still a bit away from that right now, but once I get there I’ll let you know what I find out.
Update: C++ source code – recursiveExtractor.cpp
Seems similar to the flood fill algorithm. I used an iterative version of flood fill using a stack of points and vertical spanning.