Archive for the ‘Connectomics’ Category

Connectomics of zebrafish larvae

A nice study by Hildebrand et al. was published earlier this week, looking at the connectome of zebrafish larvae. As a reminder, here is what zebrafish larvae look like under the scanning electron microscope (this is one of my favorite images ever):


Image of postnatal day 2 zebrafish larvae by Jurgen Berger and Mahendra Sonawane of the Max Planck Institute for Developmental Biology 

In this study, they performed brute-force serial sectioning of a day 5 zebrafish larva, collecting the ultrathin sections onto silicon wafers for scanning electron microscopy:


Hildebrand et al 2017

They then were able to use the serial EM images to reconstruct myelinated axons and create some beautiful images:


Hildebrand et al 2017

They found that the paired myelinated axons across hemispheres were more symmetrical than expected.

This means that their positions are likely pre-specified by the zebrafish genome/epigenome, rather than shifting due to developmental/electrical activity, as is thought to occur in the development of most mammalian axons.

While that is an interesting finding, clearly the main advance of this article is a technical one: being able to generate serial EM data sets like this on a faster and more routine basis may soon revolutionize the study of neuroscience. This is a tremendous achievement.

Read Full Post »

The history of neuroscience in general, and myelination in particular, is replete with comparisons between brains and computers.

For example, the earliest suggested function of myelin, proposed in the 1850s, was as an electrical insulator, by analogy to the insulated electric wires that had only recently been developed.

In today’s high performance computers (“supercomputers”), one of the big bottlenecks in computer processing speed is communication between processors and memory units.

For example, one measure of computer communication speed is traversed edges per second (TEPS). This quantifies the speed with which data can be transferred between nodes of a computer graph.

A standard measure of TEPS is Graph500, which quantifies computer performance in a breadth-first search task on a large graph, and can require up to 1.1 PB of RAM. As of June 2016, these are the known supercomputers with the most TEPS:

[Table: the top-ranked supercomputers on the Graph500 list as of June 2016]
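To make the metric concrete, here is a minimal sketch of how a Graph500-style measurement works: run a breadth-first search over a graph, count the edges examined, and divide by the elapsed wall-clock time. The graph generator and sizes below are my own toy stand-ins, nowhere near Graph500 scale.

```python
# Toy illustration of TEPS: run a breadth-first search over a graph,
# count every edge examined, and divide by the elapsed wall-clock time.
import random
import time
from collections import deque

def random_graph(n_nodes, n_edges, seed=0):
    """Build an undirected graph as an adjacency list."""
    rng = random.Random(seed)
    adj = {v: [] for v in range(n_nodes)}
    for _ in range(n_edges):
        a, b = rng.randrange(n_nodes), rng.randrange(n_nodes)
        adj[a].append(b)
        adj[b].append(a)
    return adj

def bfs_teps(adj, source=0):
    """Breadth-first search from source; return (visited nodes, TEPS)."""
    start = time.perf_counter()
    visited = {source}
    frontier = deque([source])
    edges_traversed = 0
    while frontier:
        v = frontier.popleft()
        for w in adj[v]:
            edges_traversed += 1       # every examined edge counts
            if w not in visited:
                visited.add(w)
                frontier.append(w)
    elapsed = max(time.perf_counter() - start, 1e-9)
    return visited, edges_traversed / elapsed

adj = random_graph(10_000, 50_000)
visited, teps = bfs_teps(adj)
print(f"visited {len(visited)} nodes at {teps:,.0f} TEPS")
```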

I’m pointing all of this out to give some concrete context about TEPS. Here’s the link to neuroscience: as AI Impacts discussed a couple of years ago, it seems that TEPS is a good way to quantify how fast brains can operate.

The best evidence for this is the growing body of data that memory and cognition require recurrent communication loops both within and between brain regions. For example, stereotyped electrical patterns with functional correlates can be seen in hippocampal-cortical and cortical-hippocampal-cortical circuits.

Here’s my point: we know that myelin is critical for regulating the speed of communication between brain regions. So, what we have learned about the importance of communication between processors in computers suggests that the degree of myelination is probably more important to human cognition than is commonly recognized. This in turn suggests:

  1. An explanation for why human cognition appears to be better in some ways than that of other primates: human myelination is much more delayed, allowing for more plasticity during development. Personally, I expect that this explains more of the human-primate differences in cognition than differences in neurons do (granted, I’m not an expert in this field!).
  2. Part of an explanation for why de- and dys-myelinating deficits, even when they are small, can affect cognition in profound ways.
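To put rough numbers on why conduction speed matters, here is a back-of-the-envelope comparison of signal delays across a 10 cm path between brain regions. The velocities are approximate textbook figures (on the order of 1 m/s for unmyelinated axons and 50 m/s for well-myelinated ones), not measurements from any specific study.

```python
# Rough conduction delays over a 10 cm path between brain regions, using
# approximate textbook velocities. Exact velocities vary widely with axon
# diameter; the point is the order-of-magnitude gap myelination buys.
def conduction_delay_ms(path_length_m, velocity_m_per_s):
    return 1000.0 * path_length_m / velocity_m_per_s

path = 0.10                                       # 10 cm
unmyelinated = conduction_delay_ms(path, 1.0)     # 100 ms
myelinated = conduction_delay_ms(path, 50.0)      # 2 ms
print(f"unmyelinated: {unmyelinated:.0f} ms, myelinated: {myelinated:.0f} ms")
```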


Read Full Post »

It has been well established for over a decade that synaptic vesicle release farther away from a particular receptor cluster is associated with a decreased probability of receptor opening, and therefore a decreased postsynaptic current (at least at glutamatergic synapses).


Franks et al 2003; PMC2944019
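As a cartoon of that distance effect: the peak transmitter concentration seen by a receptor cluster falls off with release distance, lowering open probability and hence the current. The exponential form, length constant, and receptor numbers below are invented for illustration, not values taken from Franks et al.

```python
# Cartoon of the distance effect: transmitter concentration seen by a
# receptor cluster falls off with release distance, lowering open
# probability and hence the postsynaptic current. All parameter values
# here are illustrative assumptions, not fitted to the cited work.
import math

def epsc_pa(distance_nm, n_receptors=20, p_open_max=0.5,
            unitary_pa=2.0, length_const_nm=300.0):
    """Expected current (pA) for release distance_nm from the cluster."""
    p_open = p_open_max * math.exp(-distance_nm / length_const_nm)
    return n_receptors * p_open * unitary_pa

for d in (0, 150, 300, 600):
    print(f"release {d:>3} nm away -> ~{epsc_pa(d):.1f} pA")
```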

A few months ago Tang et al published an article in which they reported live imaging of cultured rat hippocampal neurons to investigate this.

They showed that critical vesicle priming and fusion proteins are preferentially found near to one another within presynaptic active zones. Moreover, these regions were associated with higher levels of postsynaptic receptors and scaffolding proteins.

On this basis, the authors suggest that there are trans-synaptic columns, which they call "nanocolumns" (I employ scare quotes here quite intentionally because I don’t prefix any word with nano- until I am absolutely forced to).

They have a nice YouTube video visualizing this arrangement at a synapse:

They propose that this arrangement allows the densest presynaptic active zones to match the densest postsynaptic receptor densities, maximizing the efficiency, and therefore strength, of the synapse.

In their most elegant validation experiment of this model, they inhibited synapses by activating postsynaptic NMDA receptors and found that this led to a decreased correspondence between synaptic active zones and postsynaptic densities (PSDs).


Tang et al 2016; doi:10.1038/nature19058

As you can see, the time-scale of the effect of NMDA receptor activation was pretty fast, at only 5 mins. My guess is that this effect is so fast because active positive regulation maintains the column organization, and without it, proteins rapidly diffuse away.

It is almost certain that synaptic cleft adhesion systems or retrograde signaling mechanisms regulate synaptic column organization, and the race is on to identify them and precisely how they work.

In the meantime, Tang et al’s work is a great example of synaptic strength variability that is dependent on protein localization, and should inform our models of how the brain works.

Read Full Post »

There are three types of experiments one can perform in neuroscience: lesion, stimulation, and recording. Obviously, a particular study can use more than one of them.


The most basic natural experiment one can harness in neuroscience is the study of lesions arising from problems in development, disease, and/or trauma.

Of these, perhaps the most striking lesions come from patients with severe hydrocephalus. Hydrocephalus is the accumulation of cerebrospinal fluid in the brain that causes ventricles to enlarge and compress the surrounding brain tissue.

A 2007 case study by Feuillet et al. of a 44-year-old man with an IQ of 75 and a civil-servant career is probably the most famous, since it provides a nice set of brain scans of the person:


LV = lateral ventricle; III = third ventricle; IV = fourth ventricle; image from Feuillet et al. 2007

A 1980 paper is also famous for its report of a person with an IQ of 126 and an impressive educational record who also had extensive hydrocephalus. But no image, so not quite as famous.

The 2007 case has been cited as evidence to a) question dogma about the role of the brain in consciousness, b) speculate on how two minds might coalesce following mind uploading, and c) — of course — postulate the existence of extracorporeal information storage. There are also some great comments about this topic at Neuroskeptic.

As far as I can tell, volume loss in moderate hydrocephalus is initially and primarily due to compression of white matter just adjacent to ventricles. On the other hand, in severe hydrocephalus such as the above, the grey matter and associated neuropil also must be compressed.

Most of the cases with normal cognition appear to be due to congenital or developmental hydrocephalus, causing a slow change in brain structure. On the other hand, rapid changes in brain structure due to acute hydrocephalus, such as following trauma, are more likely to lead to more dramatic changes in cognition.

What can we take away from this? A couple of things:

  1. This is yet another example of the remarkable long-term plasticity of both the white matter and the grey matter of the brain. Note that this plasticity is not always a good thing, but yes, it exists and can be profound.
  2. It is evidence for hypotheses holding that the relative positions of neurons and other brain cell types, rather than their absolute positions in space within the brain, are the critical component of maintaining cognition and continuity of consciousness. An example of a theory in this supported class is Seung’s "you are your connectome" theory.
  3. Might it not make the extracellular space theories of memory a little less plausible?

Read Full Post »

A nice, basic study looks at how altering the location of inhibition onto a pyramidal cell neurite affects its spiking properties. Their inhibition is meant to mimic the effects of cortical interneurons (e.g., basket cells, Martinotti cells), each type of which projects onto pyramidal cells with its own stereotyped spatial distribution.

Elucidating these basic structure-function relationships will make synapse-level connectomics data more useful to determine the function of interneuron types.

Here’s just one of many examples in their extensive report. When they applied inhibition (GABA) to pyramidal cell dendrites further from the soma than their excitatory signal (laser-based glutamate uncaging), which they called “distal inhibition”, it led to an increased threshold required for a spike to occur. But, it didn’t change the intensity of that spike when it did occur.

In contrast, when they applied inhibition to pyramidal cell dendrites between the excitatory signal and the soma, which they called "on-the-path inhibition", it both slightly increased the depolarization threshold and reduced the spike heights when spikes did occur. You can see all of this below.

As an example of how this could be used, let’s say that, on the basis of connectomics data, you discover that a certain set of cells send projections to pyramidal cells which are systematically distal to the projections from a different set of cells.

What you can then say is that the former class of cells is acting to increase the depolarization threshold which the latter set of cells needs to exceed in order to induce those pyramidal cells to spike. Pretty cool.
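The qualitative rule can be caricatured in a few lines of code. The threshold shifts and shunting factor below are invented numbers, not parameters from Jadi et al.’s biophysical model:

```python
# Toy caricature of the location effect: distal inhibition raises the
# threshold for a dendritic spike but leaves its amplitude alone, while
# on-the-path inhibition slightly raises the threshold and also shunts
# (scales down) the spike that does occur. All numbers are invented;
# this is not the authors' biophysical model.
def dendritic_spike(excitation, inhibition, location,
                    base_threshold=1.0, base_amplitude=1.0):
    """Return spike amplitude (0.0 means no spike occurred)."""
    if location == "distal":
        threshold = base_threshold + 0.5 * inhibition
        amplitude = base_amplitude                       # full-size spike
    else:  # "on-path"
        threshold = base_threshold + 0.2 * inhibition
        amplitude = base_amplitude / (1.0 + inhibition)  # shunted spike
    return amplitude if excitation >= threshold else 0.0

print(dendritic_spike(2.0, 1.0, "distal"))    # 1.0: spikes at full height
print(dendritic_spike(2.0, 1.0, "on-path"))   # 0.5: spikes, but reduced
print(dendritic_spike(1.3, 1.0, "distal"))    # 0.0: below raised threshold
```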


Jadi M, Polsky A, Schiller J, Mel BW (2012) Location-Dependent Effects of Inhibition on Local Spiking in Pyramidal Neuron Dendrites. PLoS Comput Biol 8(6): e1002550. doi:10.1371/journal.pcbi.1002550

Read Full Post »

These were Movshon’s main arguments in his debate with Seung over the value of connectomics:

1) Scale mismatch between the synapse-by-synapse level and the kind of description you want to acquire about the nervous system for a particular goal. He argues that the interesting neural computation may happen at the mesoscale. Knowing the statistics of how nerve cells connect at the synapse level, rather than the full wiring diagram, might be enough if you want to predict behavior.

2) Structure-function relationships are elusive in the nervous system. It’s harder to understand the information being propagated through the nervous system because its purpose is so much more nebulous than that of a typical organ, like a kidney.

3) Computation-substrate relationships are elusive in general. The structure of an information-processing machine doesn’t tell you what processing it performs. For example, you can analyze the microprocessor structure of a computer in the finest detail, and while that will constrain the possible ways it can act, it won’t tell you which operating system it is running.

Here is a link to the video of Movshon’s opening remarks. He also mentions the good-if-true point that the set of connections of C. elegans is known, but our understanding of its physiology hasn’t “been materially enhanced” by having that connectome.

The rest of the debate was entertaining but not all that vitriolic. Movshon and Seung do not appear to disagree on all that much.

I personally lean towards Seung’s side. This is not so much due to the specifics (many of which can be successfully quibbled with), but rather due to the reference class of genomics, a set of technologies and methods which have proven to be very fruitful and well worth the investment.

Read Full Post »

During development, one axon typically comes to dominate each set of synaptic sites at a neuromuscular junction. This means that just one neuron controls each muscle fiber, allowing for specificity of motor function.

A nice application of laser irradiation allows researchers to intervene in the formation of axonal branches in developing mice to study this.

What they found was that irradiating the axon currently occupying the site spurred a sequence of events (presumably involving molecular signaling) that led nearby axons (often smaller ones) to take it over.

A 67-second, soundless video of one 1,000-step simulation of this process demonstrates the concepts behind this finding.

In the simulation, each circle represents a synaptic site, and each color an innervating axon. There are originally six colors.

At each of the 1,000 time steps, one axon is withdrawn from a randomly chosen site, and an adjacent one (possibly of the same color) takes it over.

The territory controlled by one axon increases (with negative acceleration) until it comes to dominate all the sites.
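The rules described above amount to a voter-model-style competition, which can be sketched in a few lines. The ring topology, site count, and takeover rule are my own reading of the simulation, not the authors’ published code:

```python
# Minimal re-creation of the competition rules described above: a ring of
# synaptic sites, each initially innervated by one of six axons (colors).
# Each step, one randomly chosen site is vacated and taken over by the
# axon occupying an adjacent site (possibly the same color), so territory
# tends to consolidate over time.
import random

def simulate(n_sites=30, n_axons=6, n_steps=1000, seed=1):
    rng = random.Random(seed)
    sites = [i % n_axons for i in range(n_sites)]   # six colors to start
    for _ in range(n_steps):
        i = rng.randrange(n_sites)                  # this axon withdraws
        neighbor = (i + rng.choice((-1, 1))) % n_sites
        sites[i] = sites[neighbor]                  # adjacent axon takes over
    return sites

final = simulate()
print(f"{len(set(final))} axon(s) remain after 1,000 steps")
```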

Although it is possible that a qualitatively different process occurs for axonal inputs to nerve cells, odds are that a similar sort of evolution via competition helps drive CNS phenomena such as memory. (Because evolution tends to re-use useful processes.)


Turney SG, Lichtman JW (2012) Reversing the Outcome of Synapse Elimination at Developing Neuromuscular Junctions In Vivo: Evidence for Synaptic Competition and Its Mechanism. PLoS Biol 10(6): e1001352. doi:10.1371/journal.pbio.1001352

Read Full Post »
