Pattern recognition and image segmentation in connectomics

In dealing with microscopy images, one can imagine many different possible workflows. In their article on biological pattern recognition from images, Shamir et al. suggest the following:

[Figure: general workflow for pattern recognition in biological images, from Shamir et al. ROI = region of interest, such as an area of the brain containing neurons you'd like to study; doi:10.1371/journal.pcbi.1000974]

The above is very general but still useful for visualizing how the process might go. One of the cases they discuss is segmenting images. Serial section electron microscopy (EM) images of neural tissue slices are a good test case for image segmentation techniques in general, because there is a “ground truth” that we are interested in determining, without much ambiguity: a neurite either belongs to a given neuron or it does not. (Light microscopy, on the other hand, tends to be more probabilistic due to current resolution limits.)

Jain et al’s article discusses evolving techniques for image segmentation in serial section EM images of neural tissue. Two issues they elucidate are:

1) Metrics for a “good” segmentation. Computer-derived segmentations are typically compared to those of humans, which are taken as bona fide correct (and this can be checked via inter-rater reliability). But even then, it’s not obvious how the two segmentations should be compared.

One naive approach is to count the pixels at which the computer’s boundary labeling deviates from the human’s (the “pixel error”). But this does not take into account how important the errors are: whether they affect the actual number and/or topology of neurites.
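As a rough sketch (the function name and normalization here are our own choices, not a definition from the article), pixel error can be computed as the fraction of positions at which two binary boundary maps disagree:

```python
import numpy as np

def pixel_error(human, machine):
    """Fraction of pixels where two binary boundary maps disagree.

    Illustrative sketch only: we normalize by the total pixel count
    so the result lies in [0, 1].
    """
    human = np.asarray(human, dtype=bool)
    machine = np.asarray(machine, dtype=bool)
    return np.mean(human != machine)

human = np.array([[0, 1, 0],
                  [0, 1, 0],
                  [0, 1, 0]])
machine = np.array([[0, 1, 0],
                    [1, 0, 0],
                    [0, 1, 0]])
print(pixel_error(human, machine))  # 2 of 9 pixels differ
```

Note that the two maps above disagree at only two pixels, yet the machine’s boundary break could split one neurite into two; the pixel error alone cannot see that difference.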

So, new metrics have been proposed. One is the warping error. This first attempts to move each pixel in the human-generated boundary map towards the pixels in the computer-generated boundary map, without changing the topology of the image. Only then does it calculate the pixel error. This allows the metric to distinguish highly deleterious topology errors of neurites from less important distance errors.

Another is the Rand error, which measures, over pairs of pixels, whether the computer-generated segmentation groups them into the same object as the human-generated segmentation does.
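A hedged, brute-force sketch of that pairwise idea (real implementations use contingency-table formulas rather than this O(n²) loop, and published variants differ in normalization):

```python
from itertools import combinations
import numpy as np

def rand_error(seg_a, seg_b):
    """Fraction of pixel pairs on which two labelings disagree about
    'same object' vs. 'different objects'.

    Brute-force illustration only; label values themselves don't
    matter, only the grouping they induce.
    """
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    disagreements = 0
    pairs = 0
    for i, j in combinations(range(a.size), 2):
        same_a = a[i] == a[j]
        same_b = b[i] == b[j]
        disagreements += int(same_a != same_b)
        pairs += 1
    return disagreements / pairs

# Identical groupings (up to label names) give zero error.
print(rand_error([1, 1, 2, 2], [5, 5, 7, 7]))  # 0.0
# Merging two objects into one is penalized.
print(rand_error([1, 1, 2, 2], [1, 1, 1, 1]))
```

Because it scores groupings rather than raw boundary pixels, a one-pixel gap that merges two neurites costs many pair disagreements, which is exactly the kind of error the pixel error underweights.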

Perhaps the ultimate metric would be some weighted combination of the above. But that might be too computationally expensive.

2) Machine learning (ML). ML has outperformed traditional algorithm-based methods in a number of tests. (Traditional algorithms are so 90’s.) It can be done in one of two (or more) ways, with the key step being how the features of the images are selected. A feature in microscopy data is generally something like a spatial gradient of brightness, color, and/or texture.

In the first approach, the features are chosen manually (with a fair amount of help, e.g. see this pdf). The resulting feature vector is then transformed into boundary labels by some algorithm. One such algorithm is a multi-layer perceptron, which is, kind of ironically but mostly confusingly, a neural network-based algorithm.
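To make the shape of that pipeline concrete, here is a minimal sketch of one forward pass of a multi-layer perceptron mapping a hand-crafted per-pixel feature vector to a boundary probability. The layer sizes, feature values, and random weights are all made up for illustration; a real pipeline would train the weights on labeled data:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_boundary_prob(features, w1, b1, w2, b2):
    """One forward pass of a tiny multi-layer perceptron: hand-crafted
    features (e.g. local brightness gradients, texture statistics) in,
    boundary probability out. Weights here are random stand-ins.
    """
    hidden = np.tanh(features @ w1 + b1)   # hidden layer
    logit = hidden @ w2 + b2               # output layer
    return 1.0 / (1.0 + np.exp(-logit))    # sigmoid -> probability

n_features, n_hidden = 4, 8
w1 = rng.normal(size=(n_features, n_hidden))
b1 = np.zeros(n_hidden)
w2 = rng.normal(size=n_hidden)
b2 = 0.0

features = np.array([0.9, 0.1, 0.4, 0.7])  # hypothetical per-pixel features
p = mlp_boundary_prob(features, w1, b1, w2, b2)
print(0.0 < p < 1.0)  # True: output is a valid probability
```

The point of the sketch: the MLP never sees the image itself, only whatever features someone decided to hand it, which is what distinguishes this approach from the next one.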

In the second approach, the features are chosen by the computer as well, in a process called “end-to-end” learning. On the spectrum of micromanagement to objective-based learning, this is about as far to the objective-based side as one can imagine. One such technique is a convolutional network.
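The basic building block a convolutional network stacks and learns end-to-end is the convolution itself. A naive sketch of that single operation (strictly, cross-correlation, as in most deep-learning libraries; the edge-detecting kernel below is hand-written here only for illustration, whereas a convolutional network would learn its kernels from data):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2D convolution: slide the kernel over the image
    and take a weighted sum at each position.
    """
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
edge_kernel = np.array([[-1.0, 0.0, 1.0]] * 3)  # crude vertical-edge filter
feature_map = np.maximum(conv2d_valid(image, edge_kernel), 0.0)  # ReLU
print(feature_map.shape)  # (3, 3)
```

A convolutional network chains many such layers, so the learned features replace the hand-crafted feature vector of the first approach.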

At the end of the article they speculate on the future of image segmentation. They believe the research will go in phases: researchers will extract as much information as they can out of the current field of view (the number of neighboring pixels whose information is considered), then expand the field of view, and repeat. Whenever I hear that research will go in phases, I wonder: why not just “jump” right now to what we currently believe will be the last phase?

But there are non-trivial reasons why we cannot do that, including the exponential increase in computational power needed as the field of view grows, and the fact that the features and algorithms needed at those larger fields of view will differ in non-trivial ways. Still, it is an exciting time to be involved in image segmentation research, and we’ll continue to track its progress and applications to neural EM images here.

References

Jain V, et al. (2010) Machines that learn to segment images: a crucial technology for connectomics. Curr Opin Neurobiol. doi:10.1016/j.conb.2010.07.004. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2975605/?tool=pubmed

Shamir L, Delaney JD, Orlov N, Eckley DM, Goldberg IG (2010) Pattern Recognition Software and Techniques for Biological Image Analysis. PLoS Comput Biol 6(11): e1000974. doi:10.1371/journal.pcbi.1000974