Combining x-ray tomography with serial section transmission electron microscopy

Haberthur et al wanted to image gold particles (200 and 700 nm) in rat lungs. To do so, they performed vascular perfusion on the tissue samples and shaped them with a watchmaker's lathe. They then performed x-ray tomographic microscopy at a photon energy of 11.5 keV, yielding a voxel size of 350 nm by 350 nm by 350 nm. This allows for volumetric analysis, so that a computer program can reconstruct a 3D image of the tissue sample.
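As a rough illustration of that reconstruction step (not Haberthur et al's actual pipeline), here is a minimal Python sketch using scikit-image: it simulates projections of a single 2D slice and recovers the slice with filtered back-projection; stacking such reconstructed slices yields the 3D volume. The phantom image and projection angles are stand-ins, not values from the paper.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

slice_2d = rescale(shepp_logan_phantom(), 0.5)          # stand-in for one tissue slice
angles = np.linspace(0.0, 180.0, 180, endpoint=False)   # projection angles in degrees

sinogram = radon(slice_2d, theta=angles)                # simulated x-ray projections
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

rms_error = np.sqrt(np.mean((reconstruction - slice_2d) ** 2))
print(f"RMS reconstruction error: {rms_error:.4f}")
```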

But since the 200 nm gold particles are smaller than the resolution of x-ray tomographic microscopy, the researchers then used transmission electron microscopy to determine the precise location of these particles. To do so, they cut the perfused tissue samples into serial sections. After correcting for rotation (which tissue sections are liable to undergo in the microtome), they found a high degree of correlation between their reconstructed image stacks and the physical slices. They were also able to track the 200 nm gold particles over a series of TEM images, as shown via the arrow in these two slices 80 nm apart:

Haberthur et al 2009

One might imagine neuro investigations using tomographic microscopy to build 3D models of a given tissue region and then confirming the precise location of individual structures (like synaptic ribbons) within the 3D space via electron microscopy.

Reference

Haberthur D, et al. 2009 Multimodal imaging for the detection of sub-micron particles in the gas-exchange region of the mammalian lung. Journal of Physics: Conference Series 186. doi:10.1088/1742-6596/186/1/012040.

Three problems with quantum dots for in vivo imaging

1) They’re usually too big (~30 nm) and thus may not be able to fit into very small morphological regions such as the synaptic cleft, which is usually about 20 nm wide. One possible way to deal with this is to make the QDs smaller! This may be possible if researchers switch from CdSe to InP as the crystal core. It is a common mistake to assume that QDs are smaller than conventional fluorescent dyes; they are in fact 10 to 20 times larger than fluorescein isothiocyanate fluorophores.

2) Since QDs blink, it can be difficult to track multiple molecules bound to them in a given region, as they might cross paths undetected. Sophisticated algorithms are therefore needed to distinguish between the various QDs. However, each QD can be made to emit at a different fluorescence wavelength if their sizes are varied slightly, because the strength of quantum confinement depends on crystal size (see the sketch after this list). Note also that the material of the outer surface plays a large role in determining the fluorescence emitted by the same crystal core, which could also be exploited to yield more variation in QD emissions.

3) QDs often affect the ligand-binding characteristics of the antibody they are conjugated to. If one hopes to detect the function of some protein in typical cellular processes, it will be difficult to do so if the QD-bound molecule has different activity (for example, less preferential binding to another protein) than non-QD-bound endogenous molecules. This possibility needs to be carefully quantified before an experimental design assumes it is not the case.
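To make the size-emission relationship from point 2 concrete, here is an illustrative Python sketch of the Brus effective-mass approximation, which predicts how emission energy rises as the crystal core shrinks. The material parameters are rough literature values for CdSe that I am assuming for illustration; they are not taken from the papers cited below, and the approximation is known to be crude for very small dots.

```python
import numpy as np

HBAR = 1.054571817e-34      # J*s
M_E = 9.1093837015e-31      # electron rest mass, kg
E_CHARGE = 1.602176634e-19  # C
EPS0 = 8.8541878128e-12     # F/m

E_GAP_BULK = 1.74           # eV, approximate bulk CdSe band gap (assumed)
M_EFF_ELECTRON = 0.13       # approximate CdSe electron effective mass, in units of M_E
M_EFF_HOLE = 0.45           # approximate CdSe hole effective mass, in units of M_E
EPS_REL = 10.6              # approximate CdSe dielectric constant

def emission_energy_ev(radius_nm):
    """Brus approximation: bulk gap + confinement term - exciton Coulomb term, in eV."""
    r = radius_nm * 1e-9
    confinement = (HBAR**2 * np.pi**2 / (2 * r**2)) * (
        1 / (M_EFF_ELECTRON * M_E) + 1 / (M_EFF_HOLE * M_E)
    ) / E_CHARGE
    coulomb = 1.8 * E_CHARGE / (4 * np.pi * EPS0 * EPS_REL * r)
    return E_GAP_BULK + confinement - coulomb

for radius in (1.5, 2.0, 2.5, 3.0):                       # core radius in nm
    wavelength_nm = 1239.84 / emission_energy_ev(radius)  # eV to nm conversion
    print(f"core radius {radius:.1f} nm -> roughly {wavelength_nm:.0f} nm emission")
```

The point is simply that sub-nanometer changes in core radius shift the predicted emission by tens of nanometers, which is what makes size-based multiplexing of QDs plausible.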

Despite these problems, there are some ways that QDs could be used in vivo to detect action potentials. If they were bound to synaptically important proteins in multiple adjacent neurons, it might be possible to track the spike trains of each neuron and how they interact after exposure to various chemical manipulations. One of the most important benefits of QDs for this type of design is their high photostability and long lifetime in the aqueous environment of cells.

References

Alcor D, et al. 2009 Single-particle tracking methods for the study of membrane receptors dynamics. European Journal of Neuroscience. doi:10.1111/j.1460-9568.2009.06927.

Cao YW, et al. 1999 Synthesis and Characterization of InAs/InP and InAs/CdSe Core/Shell Nanocrystals.

Pathak, et al. 2009 Quantum Dot Labeling and Imaging of Glial Fibrillary Acidic Protein Intermediate Filaments and Gliosis in the Rat Neural Retina and Dissociated Astrocytes. doi:10.1166/jnn.2009.GR08

Single neuron resolution following post-mortem dissection

The NYT on the project to dissect and analyze H.M.’s brain:

“We’re going to get the kind of resolution, all the way down to the level of single cells, that we have not had widely available before,” said Donna Simmons… The thin whole-brain slicing “will allow much better opportunities to study the connection between cells, the circuits themselves, which we have so much more to learn about.”… “Ideally, anyone with the technology could do the same with their own specimens.”

Ho hum. What happened to the apparent controversy over the feasibility of this a few months ago? It seems that we will indeed soon have neuron-by-neuron maps. The question is, at which level do we achieve scale separation? Surely we will need to go lower than the level of the neuron to capture memories that are encoded in synaptic strength, for example via changes in NMDA receptor signaling. But how much further down?

Diffusion tensor imaging to reconstruct average brain connectivity

In anisotropic tissue, something interferes with the free diffusion of water molecules, for instance cell membranes or microtubules. This means that diffusion will be faster parallel to an axon and slower perpendicular to it. In DTI, a single scalar diffusion coefficient cannot capture these local effects, because the measured coefficient varies depending upon the orientation in which the tissue is probed. By measuring a given volume of tissue (i.e., a voxel) along 6 or more directions, you can fit a diffusion tensor whose principal eigenvector describes the average axon orientation in that voxel. Following some fancy math, you can determine white matter pathways between voxels as well as connectivity probabilities.
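Here is a minimal sketch of that single-voxel tensor fit (my own toy example, not any paper's pipeline): given signals measured along six or more gradient directions, the six unique tensor elements are recovered by least squares, and the principal eigenvector gives the dominant fiber orientation. The assumed signal model is S_i = S0 * exp(-b * g_i^T D g_i).

```python
import numpy as np

def fit_tensor(signals, s0, bvals, bvecs):
    """Least-squares fit of the 6 unique tensor elements from log-signal ratios."""
    y = -np.log(signals / s0) / bvals                  # equals g_i^T D g_i for each direction
    g = np.asarray(bvecs, dtype=float)
    # design matrix for [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]
    A = np.column_stack([
        g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
        2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2],
    ])
    d = np.linalg.lstsq(A, y, rcond=None)[0]
    return np.array([[d[0], d[3], d[4]],
                     [d[3], d[1], d[5]],
                     [d[4], d[5], d[2]]])

# toy voxel whose true diffusion is fastest along x, as if an axon ran along x
bvecs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
b = 1000.0                                             # s/mm^2
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])             # mm^2/s
signals = np.exp(-b * np.einsum('ij,jk,ik->i', bvecs, D_true, bvecs))

D_est = fit_tensor(signals, s0=1.0, bvals=np.full(len(bvecs), b), bvecs=bvecs)
eigvals, eigvecs = np.linalg.eigh(D_est)
print("estimated principal fiber direction:", np.round(eigvecs[:, np.argmax(eigvals)], 3))
```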

Gong et al recently performed diffusion tensor imaging on the whole brains of 80 right-handed young adults, acquiring 40 contiguous slices at 3-mm thickness (no gaps) along 6 diffusion directions with a b value of 1000 s/mm^2; higher b values yield stronger diffusion weighting and thus greater contrast. They then interpolated the diffusion-weighted images to 1-mm isotropic voxels. They partitioned the cerebral cortex into 78 cortical regions and restricted the trajectories of fiber bundles to white matter voxels in order to evaluate their connectivity with the adjacent cortical regions. They then counted the number of fiber bundles connecting each pair of regions and focused on the connections that were consistent across subjects, to account for the variability in brain anatomy between individuals.
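A hedged sketch of that last, group-level step (my own toy version, not the exact statistical procedure Gong et al used): per-subject fiber counts between each pair of regions are thresholded, and only connections present in a sufficiently large fraction of subjects are kept. The Poisson rates and the 75% consistency cutoff below are made-up illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_regions = 80, 78

# toy per-pair connection rates: a minority of strongly connected pairs, many weak ones
base_rate = rng.choice([0.05, 3.0], p=[0.9, 0.1], size=(n_regions, n_regions))
fiber_counts = rng.poisson(base_rate, size=(n_subjects, n_regions, n_regions))
fiber_counts = np.triu(fiber_counts, k=1)              # keep each region pair once (upper triangle)

present = fiber_counts > 0                             # does a subject show any bundle here?
consistency = present.mean(axis=0)                     # fraction of subjects with the connection
group_matrix = consistency >= 0.75                     # illustrative consistency cutoff

n_possible = n_regions * (n_regions - 1) // 2
kept = int(group_matrix.sum())
print(f"{kept} connections kept out of {n_possible} possible ({kept / n_possible:.1%})")
```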

The researchers found 329 statistically significant anatomical connections between cortical regions out of 3003 potential between-region connections, a “sparsity” of about 11%. They also identified the nodes and edges in their network with betweenness values more than 1 SD above the mean. Betweenness is a measure of how often a vertex occurs on the shortest paths between other vertices, and hence of its relative importance to the network. Kind of cool, because they can then evaluate whether those vertices have also been shown to be important in previous non-DTI studies.
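For readers unfamiliar with the measure, here is a small illustration of betweenness centrality using networkx on a toy small-world graph (not the authors' network): nodes whose betweenness exceeds the mean by more than one standard deviation are flagged as hubs, mirroring the 1 SD criterion above.

```python
import networkx as nx
import numpy as np

graph = nx.connected_watts_strogatz_graph(n=78, k=6, p=0.1, seed=1)  # toy small-world network
betweenness = nx.betweenness_centrality(graph)

values = np.array(list(betweenness.values()))
cutoff = values.mean() + values.std()
hubs = [node for node, value in betweenness.items() if value > cutoff]
print(f"{len(hubs)} hub nodes with betweenness above mean + 1 SD")
```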

DTI will be a big part of the forthcoming Human Connectome Project of the NIH. Resolution at the individual-neuron scale is considered unrealistic by many, as Gong et al noted in their paper, and DTI is a viable alternative. Next steps would be connecting structure more closely with function, determining how the anatomy changes as a result of neurodegenerative diseases, and fixing methodological snags. DTI of the brain is poised to be very useful in the coming decades.

Reference

Gong G, et al. 2009 Mapping anatomical connectivity patterns of human cerebral cortex using in vivo diffusion tensor imaging tractography. Cerebral Cortex 19:524-536.

Scanning Neural Circuitry

Current scientific consensus is that connections between neurons are specific and based on neuron type, rather than randomly distributed. Often this connectivity is inferred statistically, but that approach is haphazard, and it would be much better if specific neural circuits could be mapped directly. Briggman and Denk reviewed the extant attempts to make this happen back in 2006.

They note that since unmyelinated axons are about 100 nanometers in diameter and the thinnest parts of dendrites are about 50 nanometers, the scanning resolution must be extremely fine. The only imaging technology currently capable of such resolution is electron microscopy. Following staining of the region of interest with electron-dense atoms, here are three methods that they discuss:

1) Serial section transmission electron microscopy. This is the oldest method, in which blocks of tissue are fixed (e.g., with an aldehyde), embedded, and then cut into sections with a gem-quality diamond knife on an ultramicrotome. These ribbon-like sections are transferred to the microscope, which records contrast based on the higher elastic scattering of electrons in places where the electron-dense atoms (i.e., the heavy metal stain) are more highly concentrated. It is this method that was used to map the entire neural circuitry of C. elegans, all 302 neurons and 5000 (!) chemical synaptic connections, using around 8000 serial sections that were each 50 nm thick. The problems with the technique are uneven section thickness, a lack of automation, geometrical distortion, debris on the sections, and, perhaps most importantly, time constraints. The C. elegans study took 15 years to complete, and that wasn’t even a very large volume of tissue, so more ambitious projects, like in mice, would likely require a more automated method to be feasible.

2) Serial block-face scanning electron microscopy. This technique uses a custom microtome inside a low-vacuum scanning electron microscope chamber to automate the sectioning and imaging of blocks of tissue. The images are formed from electrons scattered back from the surface of the embedded tissue sample (i.e., the block face) before each section is cut away, achieving a lateral resolution of about 30 nm. The sections are then cut with an oscillating diamond knife. Hundreds of sequential sections can be imaged and cut this way at a thickness of only 30 nm each. Because the images are already aligned, it is easier to automate the data analysis using a machine learning algorithm. Computational storage may actually be an issue here (see the back-of-the-envelope estimate after this list), but in Moore’s Law we trust.

3) Serial section electron tomography. This technique uses multiple 2D projections taken at various tilt angles to reconstruct the 3D structure of the given tissue. Tissue blocks are sectioned relatively thickly (0.5-3 micrometers), and the sections are imaged using high-voltage transmission electron microscopy at 1-2° tilt increments. The advantage of this technique is that less of the tissue has to be cut and therefore damaged. The downside is that the tissue may be distorted and shrunk by the high-energy electrons, but this can be mitigated with energy-filtering techniques, which ultimately allow for a resolution of about 10 nm.
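On the storage worry raised in point 2, here is a back-of-the-envelope calculation of my own (not a figure from Briggman and Denk): the raw data needed to image one cubic millimeter of tissue at roughly 30 x 30 x 30 nm voxels with one byte per voxel.

```python
voxel_nm = 30
side_nm = 1_000_000                          # 1 mm expressed in nm
voxels_per_side = side_nm // voxel_nm        # about 33,000 voxels along each axis
total_voxels = voxels_per_side ** 3
bytes_total = total_voxels * 1               # 8-bit grayscale, 1 byte per voxel
print(f"{total_voxels:.2e} voxels -> roughly {bytes_total / 1e12:.0f} TB per cubic mm")
```

That works out to tens of terabytes per cubic millimeter before any compression, which is why the "in Moore's Law we trust" remark is only half a joke.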

Brain scanning at this fine a level of resolution will constrain theories and allow for an amazing interplay between genetics, behavior, and circuitry. Of course it is not the last step: people still study C. elegans in droves, and for good reason! But when I look around at all of the avenues of research in neuroscience today, this seems to be one of the most important, if not the most important.

Reference

Briggman KL, Denk W. 2006 Towards neural circuit reconstruction with volume electron microscopy techniques. Current Opinion in Neurobiology 16:562-570. doi:10.1016/j.conb.2006.08.010.

Correlation between path length of resting-state brain networks and IQ

Small-world network theory suggests that our neural networks combine two kinds of connectivity. First, there are neighborhood clusters of neurons with high levels of intraconnectivity that are capable of efficient local information processing. Second, there are a smaller number of long-distance connections between these neighborhoods, which provide for global communication and cross-modular integration.

Researchers van den Heuvel et al recently acquired resting-state fMRI from 19 subjects and gave them the Wechsler IQ test in order to determine whether the properties of the small-world network would vary along with IQ scores. IQ scores showed no correlation with the total number of connections in the subjects' brain networks, nor did IQ correlate with levels of local neighborhood clustering. However, the researchers did find a negative correlation between normalized path length and IQ, with r = -0.54 or r = -0.57 depending on the zero-lag correlation threshold the researchers set between the time series of two given voxels (p < 0.05 for both). A shorter inter-region path length suggests a more efficient connectivity pattern, which is a plausible explanation for enhanced cognitive performance.
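As a reference point for the "normalized path length" metric, here is a hedged sketch of the standard definition (this is not van den Heuvel et al's code, and their null model differed): the characteristic path length of a network divided by the mean path length of random graphs with the same number of nodes and edges.

```python
import networkx as nx
import numpy as np

def normalized_path_length(graph, n_random=20, seed=0):
    """Characteristic path length divided by that of size-matched random graphs."""
    L = nx.average_shortest_path_length(graph)
    n, m = graph.number_of_nodes(), graph.number_of_edges()
    random_lengths = []
    for i in range(n_random):
        random_graph = nx.gnm_random_graph(n, m, seed=seed + i)
        if nx.is_connected(random_graph):              # path length is defined only if connected
            random_lengths.append(nx.average_shortest_path_length(random_graph))
    return L / np.mean(random_lengths)

toy = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.2, seed=2)  # toy small-world network
print(f"normalized path length (lambda) = {normalized_path_length(toy):.2f}")
```

A value near 1 means the network's paths are about as short as those of a random graph; larger values mean less efficient global integration, which is the quantity that correlated negatively with IQ here.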

Previous neuroimaging studies have linked IQ to the structural organization of myelinated brain tissue on the micro scale, the total brain volume/focal brain structure, and the development/functional dynamics of specific higher-order regions. These results may or may not end up being useful to AI researchers, but of all neuroscience topics this seems to be one of the most relevant.

Reference

van den Heuvel MP, et al. 2009 Efficiency of functional brain networks and intellectual performance. The Journal of Neuroscience 29: 7619-7624. doi:10.1523/JNEUROSCI.1443-09.2009.

Mathematics representation in the brain

The left inferior frontal gyrus is activated during the processing of hierarchical structures in language, and Friedrich et al wondered whether a similar pattern would emerge when subjects processed mathematical expressions containing first-order logic. Indeed, using fMRI they found increased activation in the left inferior frontal gyrus (i.e., Broca’s area), but only when the subjects completed a hierarchical string as opposed to a simple list structure. Also of note, they found a significantly increased number of activation foci when the subjects gave incorrect answers rather than correct ones, suggesting that more uncertain answers are run through additional redundancy checks in the pursuit of error detection.

Reference

Friedrich R, Friederici AD. 2009 Mathematical Logic in the Human Brain: Syntax. PLoS ONE 4(5): e5599. doi:10.1371/journal.pone.0005599

Level set approach for brain tissue modeling

Brain atlas-based techniques have proven to be a successful way to map individual patients' magnetic resonance brain volumes, by exploiting a priori knowledge of the “average” human brain. However, the variance of individual brain regions has proven so prohibitive that manual interaction is required to segment and classify brain tissue as white matter, grey matter, or cerebrospinal fluid. Bourouis et al (2008) propose a technique to do so automatically.

The first step is to register each voxel in the atlas space to one in the patient's brain volume data, and then to calculate the posterior probability that each voxel belongs to a given tissue class based on the intensity of the MRI data. Once the algorithm converges, each voxel is assigned the tissue class with the maximum posterior probability. Their model then takes into account geometric properties of the data, such as the curvature and smoothness of each region, using the level set approach. The results would still be validated by experts, but such an algorithm would help automate a necessary yet relatively time-consuming step in identifying brain tumors.
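Here is a minimal sketch of just the per-voxel maximum a posteriori classification step described above (not Bourouis et al's full pipeline, which also includes the registration and level-set refinement): Gaussian intensity likelihoods for each tissue class are combined with atlas prior probabilities, and each voxel takes the class with the highest posterior. The class means, spreads, and priors below are invented for illustration.

```python
import numpy as np

CLASSES = ["white matter", "grey matter", "CSF"]
MEANS = np.array([0.80, 0.55, 0.15])     # assumed class intensity means (normalized MRI units)
SIGMAS = np.array([0.08, 0.08, 0.10])    # assumed class intensity standard deviations

def map_classify(intensities, atlas_priors):
    """intensities: (n_voxels,); atlas_priors: (n_voxels, n_classes). Returns labels, posteriors."""
    x = intensities[:, None]
    likelihood = np.exp(-0.5 * ((x - MEANS) / SIGMAS) ** 2) / (SIGMAS * np.sqrt(2 * np.pi))
    posterior = likelihood * atlas_priors
    posterior /= posterior.sum(axis=1, keepdims=True)
    return posterior.argmax(axis=1), posterior

intensities = np.array([0.78, 0.50, 0.20, 0.60])       # four toy voxels
priors = np.array([[0.6, 0.3, 0.1],                    # toy atlas priors per voxel
                   [0.2, 0.7, 0.1],
                   [0.1, 0.2, 0.7],
                   [0.4, 0.5, 0.1]])
labels, _ = map_classify(intensities, priors)
print([CLASSES[i] for i in labels])
```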

Reference

Bourouis S, Jamrouni K, Betrouni N. 2008 Automatic MRI brain segmentation with combined atlas-based classification and level-set approach. In: ICIAR 2008, Lecture Notes in Computer Science 5112, pp. 770-778.

Mapping the genetics of the brain

Fascinating article by Jonah Lehrer about the work being done at the Allen Institute for Brain Science in Seattle, where they are attempting to create a genetic map of the human brain on a voxel scale. Here is one interesting tidbit:

They remain excited by the idea of working on the frontier of science, by the possibility that their maps will allow others to make sense of this still inscrutable landscape. In other words, they are waiting for the future, for some scientist to invent an elegant theory that explains their enigmatic data. Jones likes to compare the current state of neuroscience to 19th-century chemistry. At the time, chemists were strict empiricists; they set substances on fire and then recorded the colors visible in the flames. Different chemicals produced different spectrums of light, but nobody could make sense of the spectrums. The data seemed completely random. But then, with the discovery of quantum mechanics, scientists were finally able to explain the colored light—the unique rainbows were actually side effects of subatomic structure. Such is the faith of scientists: Nature must always make sense.

The article touches on some other theoretical and practical considerations facing the scientists. I bet that Paul Allen, who put up the $100 million to found the institute, is satisfied with his investment, because this is awesome research.

Meta brain imaging studies with multilevel kernel density analysis

There is so much empirical data from brain imaging studies in which subjects perform a specific task that it is crucial to perform meta-analyses on the data to see how reliable the results are. Wager et al. (2009) review one approach for doing just this, the multilevel kernel density analysis (MKDA) approach, which recreates a map of significant regions from each study in order to analyze the consistency (whether the same voxels appear in other studies) and the specificity (whether only those voxels appear in other studies for that task) of each. One of the major challenges in brain imaging (and multivariate statistics in general) is keeping the familywise error rate down to 0.05 when many different regions are analyzed.
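To make the familywise error problem concrete, here is a quick illustration of my own (not from Wager et al): with many independent tests at alpha = 0.05, the chance of at least one false positive rises rapidly, and a simple Bonferroni correction brings it back near the nominal level.

```python
alpha = 0.05
for n_tests in (1, 10, 100, 1000):
    fwer_uncorrected = 1 - (1 - alpha) ** n_tests            # P(at least one false positive)
    fwer_bonferroni = 1 - (1 - alpha / n_tests) ** n_tests   # after Bonferroni correction
    print(f"{n_tests:5d} tests: uncorrected FWER = {fwer_uncorrected:.3f}, "
          f"Bonferroni-corrected = {fwer_bonferroni:.3f}")
```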

In addition to considering these issues in depth, they report some pretty interesting data. By defining a peak falling 10 mm outside of the “consensus region” in a given meta-analysis as a non-replication, they can determine a rough false positive rate for each domain. For working memory storage (26 studies, N=305), this rate is 40%; for executive working memory (60 studies, N=664), the rate is 20%; for emotion (163 studies, N=2010), the rate is 11%; for long-term memory (166 studies, N=1877), the rate is 10%. Their data reveal two interesting trends:

1) Studies with larger sample sizes have more statistical power and are thus less likely to fall victim to a false positive error, and

2) The false positive rate in general (especially for working memory storage) is large enough to make this an area of concern.

Reference

Wager TD, Lindquist MA, Nichols TE, Kober H, Van Snellenberg VX. 2009 Evaluating the consistency and specificity of neuroimaging data using meta-analysis. Neuroimage 45:S210-S221.  doi:10.1016/j.neuroimage.2008.10.061.