
Archive for the ‘Brain Imaging’ Category

Humans seem to have developed dedicated systems for detecting the prototypical gait of moving animals. One paradigm for operationalizing this ability is a point light display, which simulates animals moving in the dark with just a few lights on their joints.

We are able to classify these sparse moving points as biological motion and can often even make inferences about the characteristics of the moving agent. See for yourself in this 31 s video:

Previous studies have indicated that toddlers with autism have deficits in perceiving biological motion. This is not surprising, because social information is embedded within the stimuli.

Kaiser et al. took this further, combining the point light display paradigm with fMRI in 1) children with ASD, 2) siblings of children with ASD, and 3) control children.

They looked for regions differentially activated between biological light displays and scrambled light displays. They then compared the degree of differential neural activity between groups.

Brain regions were classified as showing:

  • less differential activation in ASD children in biological conditions as compared to siblings and controls (orange below);
  • less differential activation in ASD children and siblings as compared to controls (yellow);
  • enhanced differential activation in siblings (green); or
  • no statistically significant difference in differential activation between groups (uncolored).

top = sagittal slice; middle = coronal; bottom = axial; doi: 10.1073/pnas.1010412107
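The group-comparison logic can be sketched in a few lines. This is a toy illustration, not the paper's statistics: the group summaries, the threshold, and the `classify_region` helper are all made up for clarity, and the real analysis used voxelwise statistical maps rather than per-region means.

```python
import numpy as np

def classify_region(asd, sib, ctrl, thresh=0.5):
    """Toy classification of one region's differential activation
    (biological minus scrambled) across three groups, following the
    color scheme above. Inputs are per-subject contrast values; the
    threshold and the use of simple group means are illustrative."""
    m_asd, m_sib, m_ctrl = np.mean(asd), np.mean(sib), np.mean(ctrl)
    if m_asd < m_ctrl - thresh and m_sib < m_ctrl - thresh:
        return "yellow"    # reduced in both ASD and siblings vs controls
    if m_asd < m_sib - thresh and m_asd < m_ctrl - thresh:
        return "orange"    # reduced in ASD only
    if m_sib > m_asd + thresh and m_sib > m_ctrl + thresh:
        return "green"     # enhanced in siblings (compensatory?)
    return "uncolored"     # no clear group difference
```

The ordering matters: the "yellow" case is checked first so that a region reduced in both at-risk groups is not mislabeled as ASD-specific.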

Their approach helps tease out the neural circuits underlying why some individuals with genetic risk factors don’t develop ASD. The two main brain regions they implicated were the vmPFC (of emotional decision making fame) and the right posterior STS. Could we imagine some study attempting to stimulate these regions in a model of ASD to mimic the development of compensatory mechanisms?

Reference
Kaiser M, et al. 2010 Neural signatures of autism. PNAS doi:10.1073/pnas.1010412107.

Read Full Post »

Decreases in brain volume seen via MRI are considered to be signs of decreased synapse density, neuron loss, and cell shrinkage. Tondelli et al. hypothesized that healthy subjects who would go on to develop Alzheimer’s disease (AD) up to ten years later would already show reduced brain volume in certain brain areas on a baseline MRI.

Specifically, via voxel-based morphometry, they found greater brain volume in certain regions in individuals who did not develop AD, with the largest differences seen in the medial temporal lobe (hippocampus and amygdala). They also report a significant correlation between the degree of bilateral temporal lobe atrophy and a measure of cognitive impairment, although it would have been nice to see the actual coefficient.
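The core of a voxel-based morphometry group comparison is a two-sample test at every voxel. Here is a minimal numpy sketch, assuming each subject's grey-matter map has been warped to a common template and flattened to a vector; the `vbm_group_contrast` name, the uncorrected threshold, and the data shapes are my own illustration, not Tondelli et al.'s pipeline.

```python
import numpy as np

def vbm_group_contrast(stable, converters, t_crit=3.57):
    """Voxelwise two-sample t-test in the spirit of VBM: grey-matter
    maps (subjects x voxels) for subjects who stayed healthy vs those
    who later converted to AD. t_crit is roughly the two-tailed
    p < 0.001 cut-off for ~40 subjects; real VBM uses multiple-
    comparison-corrected thresholds, so this is only a sketch."""
    n1, n2 = len(stable), len(converters)
    m1, m2 = stable.mean(axis=0), converters.mean(axis=0)
    v1, v2 = stable.var(axis=0, ddof=1), converters.var(axis=0, ddof=1)
    # pooled-variance t statistic at each voxel
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t = (m1 - m2) / np.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, np.abs(t) > t_crit
```

Positive t values mark voxels where the stable group has more volume, matching the direction of the paper's finding.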

Perhaps the most interesting part of their study is when the authors used image segmentation of subcortical regions with standardized vertices to compare across subjects. This allowed them to find shape differences in the right hippocampus that could distinguish (> 93% classification accuracy) between subjects who converted to AD from those that remained healthy. This kind of measure could allow for greater insight into the mechanisms of deterioration at play.
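As a toy stand-in for that shape-based discrimination, one could represent each hippocampus as a vector of standardized vertex coordinates and classify a new subject by the nearest class centroid. Everything here (the representation, the classifier, the names) is an illustrative assumption; the study's actual vertex-wise analysis was more sophisticated.

```python
import numpy as np

def nearest_centroid_classify(train_shapes, train_labels, test_shape):
    """Classify a hippocampal shape (a flattened array of standardized
    vertex coordinates) by its nearest class centroid. A sketch of the
    idea of shape-based discrimination, not the paper's method."""
    labels = sorted(set(train_labels))
    centroids = {
        l: np.mean([s for s, t in zip(train_shapes, train_labels) if t == l],
                   axis=0)
        for l in labels
    }
    # assign the label whose mean shape is closest in Euclidean distance
    return min(labels, key=lambda l: np.linalg.norm(test_shape - centroids[l]))
```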

Reference

Tondelli M, et al. 2011 Structural MRI changes detectable up to ten years before clinical Alzheimer’s disease. Neurobiol Aging. PubMed.

Read Full Post »

Last week the Allen Institute released their new atlas, with lots of microarray data localized to different regions of human brains. It comes in the form of free software called “Brain Explorer 2,” which takes a while to download but seems to be worth it. Here is a screenshot of the user interface:

The two brains correspond to their two donors, showing some of the variability in brain structure. The maps were created using structural MRI and diffusion tensor imaging.

Note that you can toggle whether particular brain regions are displayed by clicking on them in the menu to the right. You can rotate the brain to any orientation by clicking and dragging the human head in the upper right corner. If nothing else, this makes it a useful tool for learning brain anatomy. Check it out.

Read Full Post »

Cardona et al. describe their research towards the Drosophila connectome here. One advantage of studying Drosophila is its lineage tracts: groups of neurons in discrete compartments of the brain that differentiated from the same neuroblast early in development. These allow groups of synapses to be broken up by lineage tract via light microscopy and analyzed individually. In the following cross section of one hemisphere, one of these lineage tracts is shown in purple:

 

glial cells = green, form boundaries around neuropile compartments

 

They aimed for a resolution of 3-4 nm / pixel, which allows them to image synaptic vesicles with diameters of 30-40 nm. These are indicated by arrows in the EM image below:

 

scale bar = 350 nm, arrow = presynaptic vesicles, arrowhead = oblique microtubule; doi:10.1371/journal.pbio.1000502

 

For data analysis, they define a “hierarchy” of objects based on their relationship to one another. This hierarchy ranges from “neuropile compartments” to “neuropile tracts” to “lineages” to “neurons” to “primary branches of a neuron” to “secondary branches of a primary branch.” Branches connect to other neurons at synapses, of course.
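That hierarchy maps naturally onto nested containers. A hypothetical sketch in Python (the class and field names are mine, not the paper's data format):

```python
from dataclasses import dataclass, field

# Containers mirroring the paper's object hierarchy: compartment >
# tract > lineage > neuron > primary branch > secondary branch.

@dataclass
class Branch:
    name: str
    synapses: list = field(default_factory=list)   # partner neuron names
    children: list = field(default_factory=list)   # secondary branches

@dataclass
class Neuron:
    name: str
    primary_branches: list = field(default_factory=list)

@dataclass
class Lineage:
    name: str                                      # one neuroblast's progeny
    neurons: list = field(default_factory=list)

@dataclass
class Tract:
    name: str
    lineages: list = field(default_factory=list)

@dataclass
class Compartment:
    name: str
    tracts: list = field(default_factory=list)

def all_synapses(compartment):
    """Walk the hierarchy and collect every (neuron, partner) pair."""
    pairs = []
    for tract in compartment.tracts:
        for lineage in tract.lineages:
            for neuron in lineage.neurons:
                stack = list(neuron.primary_branches)
                while stack:
                    b = stack.pop()
                    pairs += [(neuron.name, p) for p in b.synapses]
                    stack += b.children
    return pairs
```

The point of such a structure is exactly what the next paragraph describes: once connectivity hangs off a clean hierarchy, it can be exported to other tools.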

One advantage of the hierarchical object classification is that it will eventually allow the researchers to upload their data sets to neuron modeling programs such as NEURON, to simulate the firing of their neurons and try to understand what kind of output the network might produce.

Next, they classify the neural projections (dendrites + axons) they image into four types:

  • axiform (“A,” straight, unbranched, 0.2–0.4 μm);
  • varicose (“V,” branched, thick segments 0.5–1.5 μm, thin segments 0.15–0.4 μm);
  • globular (“G,” a variant of varicose with large swellings > 1.5 μm at points, often forming the end points of axons);
  • dendritiform (“D,” highly branched, often changing direction, < 0.2 μm).
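Those diameter ranges suggest a simple rule-of-thumb classifier. The cut-offs below come from the ranges above, but the branch-count thresholds are my own guesses, since the paper describes branching qualitatively:

```python
def classify_projection(diameter_um, n_branches, has_large_swellings=False):
    """Rule-of-thumb assignment of a neural projection to one of the
    four types. Diameter cut-offs follow the ranges in the text;
    branch-count thresholds are illustrative assumptions."""
    if has_large_swellings and diameter_um > 1.5:
        return "G"   # globular: varicose variant with swellings > 1.5 um
    if n_branches > 5 and diameter_um < 0.2:
        return "D"   # dendritiform: highly branched and thin
    if n_branches == 0 and 0.2 <= diameter_um <= 0.4:
        return "A"   # axiform: straight and unbranched
    return "V"       # varicose: branched, mixed thick/thin segments
```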

Below are generic models for each, followed by representative 3d digital reconstructions of each type, followed by a 3d digital reconstruction that includes all four types of projections:

 

red dots = presynaptic sites; segments for reconstruction from the ventral nerve cord microvolume; doi:10.1371/journal.pbio.1000502

 

Although the same types of projections show up across brain regions, their relative number, direction, and the placement of synapses along them differ between regions. These can form network motifs, which in this context means recurring patterns of connections between neurites.

The most common network motif they found is what they call the “dense overlapping regulon” motif, in which each axon has multiple different targets and each dendrite has multiple different inputs. Less commonly, they found feed forward motifs, in which one projection (neurite) receives input from a second neurite, and both provide output to a common target. Here are schematics of both:

 

feed forward motif = left; dense overlapping regulon motif = right; doi:10.1371/journal.pbio.1000502

 

The dense overlapping motif demonstrates how convoluted the Drosophila neural connections can be. In the following example, one presynaptic neurite (“a”, bright green) contacts 12 postsynaptic “dendritiform” neurites (yellow-brown) at 3 synapses. Concomitantly, 7 other presynaptic elements (transparent green) contact these same postsynaptic dendritiform neurites. The following shows a representative brain section of the motif as well as front and side views of their reconstructed 3d model:

 

dense overlapping regulon motif; doi:10.1371/journal.pbio.1000502

 

On the other hand, the feed forward motif underscores how neural signals can be amplified and recombined in a hierarchical manner. In this example, a globular neurite (G) is postsynaptic to a varicose neurite segment (V). Concomitantly, a dendritiform neurite (D, orange) is postsynaptic to both the globular neurite (G) and the varicose neurite (V). Here are two representative brain segments of this motif from the ventral nerve cord cube and a lateral view of the 3d reconstruction:

arrow in K / L = varicose neurite V; arrowhead in J = globular neurite G; J and K in L indicate levels of brain sections used to reconstruct model; doi:10.1371/journal.pbio.1000502
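Given a table of synaptic connections, feed forward motifs like this one are easy to enumerate: look for a neurite a driving a neurite b, where both also drive a common target c. A sketch on a toy directed graph (not the paper's data):

```python
def feed_forward_motifs(edges):
    """Enumerate feed forward motifs (a -> b, a -> c, b -> c) in a
    directed synaptic graph given as a set of (pre, post) pairs."""
    adj = {}
    for pre, post in edges:
        adj.setdefault(pre, set()).add(post)
    motifs = []
    for a in adj:
        for b in adj[a]:
            for c in adj.get(b, set()):
                if c in adj[a]:              # a also drives b's target
                    motifs.append((a, b, c))
    return motifs
```

Running it on the V -> G -> D example from the text recovers the single motif.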

Not only does this paper have good info about Drosophila neural architecture, but it also shows how researchers are still iterating towards that connectome…

Reference: Cardona A, Saalfeld S, Preibisch S, Schmid B, Cheng A, et al. (2010) An Integrated Micro- and Macroarchitectural Analysis of the Drosophila Brain by Computer-Assisted Serial Section Electron Microscopy. PLoS Biol 8(10): e1000502. doi:10.1371/journal.pbio.1000502.

Read Full Post »

Towards a Drosophila connectome

Chklovskii et al. recently published a paper in Current Opinion in Neurobiology that discusses the problem of attempting to build a map of neural connections (a “connectome”) in the fruit fly. Here is a sweet image of their broad plan for attacking the problem:

Reconstruction pipeline at HHMI's Janelia Farm; red circle in top image is a synapse; doi:10.1016/j.conb.2010.08.002

Here’s how I understand their reconstruction of the sections: First, the images are imported into a software program like TrakEM2. Then, individual images are overlapped pairwise on the basis of, e.g., image correlation, and transformed or “connected” together. Finally, the transformed images are fit into a single global coordinate system using least squares.

This process is almost exactly how one might solve a jigsaw puzzle, if the puzzle was in 3d, there was no guarantee that you had all of the necessary images, and the results would be one of the most important breakthroughs in neuroscience in some time.
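The final least-squares step can be made concrete. Suppose pairwise registration has produced measured offsets between overlapping tiles; the global positions are then the least-squares solution of the resulting linear system, with one tile pinned to remove the translational ambiguity. This is a one-dimensional (per-axis), translation-only sketch of the idea; real pipelines such as TrakEM2 also fit rigid and affine models:

```python
import numpy as np

def globalize_tiles(n_tiles, pairwise_offsets):
    """Fit tile positions into one global coordinate system by least
    squares. `pairwise_offsets` maps (i, j) -> measured offset of
    tile j relative to tile i (one axis at a time); tile 0 is pinned
    at the origin so the system has a unique solution."""
    A, b = [], []
    for (i, j), d in pairwise_offsets.items():
        row = np.zeros(n_tiles)
        row[j], row[i] = 1.0, -1.0   # residual: (x_j - x_i) - d
        A.append(row)
        b.append(d)
    row0 = np.zeros(n_tiles)
    row0[0] = 1.0                    # pin tile 0 at position 0
    A.append(row0)
    b.append(0.0)
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return x
```

With consistent measurements the fit is exact; with slightly contradictory ones (as below, where 10 + 10 ≠ 21), the error is spread evenly across the tiles.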

Once the images are built into a 3d image, they need to be segmented into biologically interesting portions, such as the axons and the synapses. This involves the choice of various algorithms for boundary detection, which are improving as the world pours more and more research into image recognition in general. Many of these approaches are evaluated with the Rand index, which measures the agreement between two segmentations: for each pair of pixels, do both segmentations place the pixels in the same segment, or both in different segments?
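For concreteness, here is the Rand index computed directly from that definition, on flat label arrays; production code uses contingency-table formulas rather than this O(n²) pairwise loop:

```python
from itertools import combinations

def rand_index(seg_a, seg_b):
    """Rand index between two segmentations given as flat label
    lists: the fraction of pixel pairs on which they agree, i.e.
    both put the pair in the same segment or both in different
    segments. Label values themselves don't matter, only groupings."""
    pairs = list(combinations(range(len(seg_a)), 2))
    agree = sum((seg_a[i] == seg_a[j]) == (seg_b[i] == seg_b[j])
                for i, j in pairs)
    return agree / len(pairs)
```

Note that relabeling the segments leaves the score unchanged, which is exactly what you want when comparing an automatic segmentation to a human one.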

Although it is less sexy, researchers still proofread and annotate these connections manually, and there is software designed specifically for that. Perhaps they could jump on the bandwagon of crowdsourcing and formulate this problem as a public and interactive computer game.

The researchers are currently in the process of reconstructing 250 columns in the medulla neuropile of the optic lobe, which fills a volume of 90 μm × 90 μm × 80 μm. One of the 250 columns has been reconstructed so far, and they note that it took two “person-years” to do so. So let’s hope that we can get some Moore’s law-like advancement going in the area of neural circuit reconstruction!
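Some back-of-the-envelope arithmetic on that hope (my own extrapolation, not the authors'): at today's cost of 2 person-years per column, 250 columns is 500 person-years, but if reconstruction throughput doubled every couple of years the picture changes quickly.

```python
def years_to_finish(columns=250, cost0=2.0, doubling_years=2.0, workers=10):
    """Simulate year by year: a team of `workers` reconstructs
    columns whose per-column cost starts at `cost0` person-years
    and halves every `doubling_years` (a Moore's-law-style
    assumption, purely illustrative)."""
    done, t = 0.0, 0
    while done < columns:
        rate = workers / (cost0 * 0.5 ** (t / doubling_years))  # columns/yr
        done += rate
        t += 1
    return t
```

Under the no-speedup assumption a 10-person team needs 50 years; with a 2-year doubling time, about 9.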

Reference

Chklovskii DB, Vitaladevuni S, Scheffer LK. 2010 Semi-automated reconstruction of neural circuits using electron microscopy. Curr Opin Neurobiol. doi:10.1016/j.conb.2010.08.002

Read Full Post »

Saalfeld et al. have a cool new paper (here) explaining their algorithm for registration (i.e., proper stacking) of serial section transmission electron microscopy images. Microscopy instruments alone are too imprecise for seamless stitching of the images, leaving some images rotated and out of place, so software is needed to automate the alignment.

Their algorithm extracts landmarks (e.g. “blobs”) from all section micrographs (“tiles”), identifies landmark correspondences between tiles, and estimates the tile configuration that minimizes the sum of all the squared correspondence displacements between landmarks. In the demonstration below, feature candidates are circled, with size proportional to the feature’s scale. On the bottom, two correspondence matches are shown as an example:

False matches = white, true matches = black; doi:10.1093/bioinformatics/btq219
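The per-pair fitting step can be sketched with the standard SVD (Kabsch) solution for the rigid transform that best maps one tile's landmarks onto its neighbour's matches. This is a generic least-squares rigid fit, not Saalfeld et al.'s exact implementation:

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid (rotation + translation) fit of matched
    landmark sets, minimizing the sum of squared correspondence
    displacements. Standard Kabsch/SVD solution: rows of `src` and
    `dst` are corresponding landmark coordinates."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)        # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    return R, t
```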

They tested their algorithm on 6000+ Drosophila larval brain 60 nm sections imaged at 4.68 nm per pixel resolution. They found that their reconstruction yields continuity of structures such as axon bundles within and across image sections. They conclude that “globally optimal reconstruction of entire brains on TEM level will enable registration of 3D light microscopy data onto electron microscopy volumes. By that it will be possible to establish the connection between brain macro (neuronal lineages) and micro (synaptic connectivity) circuitry.” Exciting stuff.

Reference

Saalfeld S, et al. 2010 As-rigid-as-possible mosaicking and serial section registration of large ssTEM datasets. Bioinformatics. doi:10.1093/bioinformatics/btq219.

Read Full Post »

Just et al. have an interesting paper (here) using brain images to predict which noun participants were looking at (for 3 s) in their visual field. Importantly, participants saw only the word, never an actual picture of the object, although they were instructed to think about what qualities the word connotes. Their model takes into account four factors of a word to predict its brain activation: manipulation, shelter, eating, and word length. Here’s a tantalizing picture of one participant’s expected and actual brain activations upon seeing the words “apartment” and “carrot.”

coronal slice in the parahippocampal area (dark blue ellipse) and precuneus area (light blue ellipse); doi:10.1371/journal.pone.0008622
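The four-factor model is, at heart, a linear encoding model: each voxel's activation is predicted as a weighted sum of a word's factor scores. A minimal least-squares sketch, assuming a words × factors feature matrix and a words × voxels activation matrix (the shapes and function names are mine, not the paper's):

```python
import numpy as np

def fit_encoding_model(features, activations):
    """Fit a linear encoding model: activations (words x voxels) as a
    weighted sum of per-word factor scores (words x factors, e.g.
    manipulation, shelter, eating, word length). Ordinary least
    squares with an intercept; the paper's voxel selection and
    factor definitions are more involved."""
    X = np.column_stack([np.ones(len(features)), features])
    W, *_ = np.linalg.lstsq(X, activations, rcond=None)
    return W

def predict(W, feats):
    """Predicted activation pattern for one word's factor scores."""
    return np.concatenate([[1.0], feats]) @ W
```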

The real test of any model is in prediction. For data within one participant, in answering the question “What will the activation patterns be for these two new words, given the relation between word properties and activation patterns for the other 58 words?” the model had a mean accuracy of 0.801. Still within one participant, in answering the question “Which of the 60 words produced this activation pattern, given information from an independent training set?” the model had a mean accuracy of 0.724. These accuracies are far, far above chance.
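The two-new-words test can be scored by comparing predicted and observed patterns under both possible pairings and keeping the better one. Cosine similarity is a common choice for this kind of pattern matching, though the paper's exact similarity measure may differ:

```python
import numpy as np

def two_word_match(pred1, pred2, obs1, obs2):
    """Leave-two-out scoring: given predicted activation patterns for
    two held-out words and the two observed patterns, pick the
    pairing with the higher total cosine similarity. Returns True
    if the correct (straight) pairing wins."""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    straight = cos(pred1, obs1) + cos(pred2, obs2)
    swapped = cos(pred1, obs2) + cos(pred2, obs1)
    return straight > swapped
```

Chance performance on this two-alternative test is 0.5, which is why the reported 0.801 is impressive.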

Even across participants, the model was accurate. In generating predictions concerning two previously unseen words for a previously unseen participant (from training data of the 10 other participants and 58 other words), the model had a mean accuracy of 0.762. Perhaps brain imaging needs a Turing test of sorts: what exactly would be required before we could say that researchers can “read minds” in an fMRI machine? I would say that > 98% accuracy in identifying the word a subject focuses on, out of all possible dictionary words, would be pretty close to mind reading. Or maybe they wouldn’t have to guess the word exactly, just get its meaning down in terms of various components. All I know is, it will probably be easy to tell when people have the munchies, because there will be a lot of activation in the “eating” factor.

Reference

Just MA, Cherkassky VL, Aryal S, Mitchell TM (2010) A Neurosemantic Theory of Concrete Noun Representation Based on the Underlying Brain Codes. PLoS ONE 5(1): e8622. doi:10.1371/journal.pone.0008622

Read Full Post »
