
Archive for the ‘Connectomics’ Category

It has been well established for over a decade that, at least at glutamatergic synapses, synaptic vesicle release farther from a given receptor cluster is associated with a lower probability that those receptors open, and therefore a smaller postsynaptic current.

[Figure: Franks et al. 2003; PMC2944019]
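
To get an intuition for why distance matters, here is a toy calculation, a cartoon of my own rather than the Monte Carlo model Franks et al. actually used. Glutamate released from a point in a thin cleft reaches a peak concentration that falls off with lateral distance, and channel opening saturates with concentration, so receptor clusters farther from the release site see less transmitter and open less often. All of the constants are rough, assumed ballpark values.

```python
# Toy cartoon of the distance effect (assumed ballpark numbers, not Franks et al.'s model):
# peak glutamate concentration from a point release in a thin cleft falls off as ~1/r^2,
# and receptor opening is a saturating (Hill-type) function of that peak concentration.
import numpy as np

N_GLU = 4000        # glutamate molecules per vesicle (rough literature ballpark)
CLEFT_H = 0.02      # synaptic cleft height in um (assumed)
EC50_MM = 1.0       # half-activating glutamate concentration in mM (assumed)
HILL = 1.5          # Hill coefficient (assumed)
AVOGADRO = 6.022e23

def peak_concentration_mM(r_um):
    """Peak concentration at lateral distance r for 2D diffusion in a thin cleft.
    Maximizing C(r,t) = (N/h)/(4*pi*D*t) * exp(-r^2/(4*D*t)) over t gives
    C_peak = N/(e * pi * h * r^2), which notably does not depend on D."""
    molecules_per_um3 = N_GLU / (np.e * np.pi * CLEFT_H * r_um**2)
    moles_per_liter = molecules_per_um3 / AVOGADRO / 1e-15   # 1 um^3 = 1e-15 L
    return moles_per_liter * 1e3                              # mol/L -> mM

def open_probability(conc_mM):
    """Saturating dose-response curve for channel opening (toy)."""
    return conc_mM**HILL / (conc_mM**HILL + EC50_MM**HILL)

for r in [0.05, 0.1, 0.2, 0.4]:   # lateral distance from the release site, in um
    c = peak_concentration_mM(r)
    print(f"r = {r:.2f} um: peak ~ {c:.2f} mM, open probability ~ {open_probability(c):.2f}")
```

The exact numbers are not to be trusted; the point is just the monotonic fall-off of the postsynaptic response with distance.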

A few months ago, Tang et al. published an article reporting live imaging of cultured rat hippocampal neurons to investigate this.

They showed that key vesicle priming and fusion proteins are preferentially found near one another within presynaptic active zones. Moreover, these regions were associated with higher levels of postsynaptic receptors and scaffolding proteins.

On this basis, the authors suggest that there are trans-synaptic columns, which they call “nanocolumns” (I employ scare quotes here quite intentionally because I don’t prefix any word with nano- until I am absolutely forced to).

They have a nice YouTube video visualizing this arrangement at a synapse:

They propose that this arrangement aligns the densest parts of the presynaptic active zone with the densest clusters of postsynaptic receptors, maximizing the efficiency, and therefore the strength, of the synapse.

In their most elegant test of this model, they depressed synapses by activating postsynaptic NMDA receptors and found that this decreased the correspondence between presynaptic active zones and postsynaptic densities (PSDs).

[Figure: Tang et al. 2016; doi:10.1038/nature19058]

As you can see, the effect of NMDA receptor activation set in quickly, within only 5 minutes. My guess is that it is so fast because active positive regulation maintains the column organization, and without that regulation the proteins rapidly diffuse away.
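
To get a feel for what “correspondence” means here, one toy way to score it (a sketch of my own, not the protein enrichment analysis Tang et al. actually used) is to bin the localizations of presynaptic release machinery and postsynaptic receptors within a single synapse into density maps and correlate them:

```python
# Toy sketch: quantify pre/post "correspondence" as the correlation between binned
# 2D density maps of presynaptic release-machinery and postsynaptic receptor positions.
import numpy as np

rng = np.random.default_rng(0)

def density_map(points, bins=20, extent=1.0):
    """Bin 2D point positions (within a synapse-sized patch) into a density map."""
    hist, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                bins=bins, range=[[0, extent], [0, extent]])
    return hist

def alignment(pre_points, post_points):
    """Pearson correlation between pre- and postsynaptic density maps."""
    a = density_map(pre_points).ravel()
    b = density_map(post_points).ravel()
    return np.corrcoef(a, b)[0, 1]

# Aligned case: postsynaptic receptors cluster at the same hotspot as release sites.
hotspot = rng.uniform(0.2, 0.8, size=2)
pre = rng.normal(hotspot, 0.05, size=(200, 2))
post_aligned = rng.normal(hotspot, 0.05, size=(200, 2))
post_shuffled = rng.uniform(0, 1, size=(200, 2))  # misaligned control

print("aligned:  ", alignment(pre, post_aligned))   # high
print("shuffled: ", alignment(pre, post_shuffled))  # near zero
```

In this cartoon, NMDA receptor activation would correspond to the “aligned” score drifting toward the “shuffled” one.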

It is almost certain that synaptic cleft adhesion systems or retrograde signaling mechanisms regulate synaptic column organization, and the race is on to identify them and precisely how they work.

In the meantime, Tang et al.’s work is a great example of synaptic strength variability that depends on protein localization, and it should inform our models of how the brain works.

Read Full Post »

There are three types of experiments one can perform in neuroscience: lesion, stimulation, and recording. Obviously, a particular study can use more than one of them.


The most basic natural experiment one can harness in neuroscience is the lesion, arising from problems in development, disease, or trauma.

Of these, perhaps the most striking lesions come from patients with severe hydrocephalus: an accumulation of cerebrospinal fluid in the brain that causes the ventricles to enlarge and compress the surrounding brain tissue.

A 2007 case study by Feuillet et al. of a 44-year-old man with an IQ of 75 and a civil-servant career is probably the most famous, since it provides a nice set of brain scans of the person:

[Figure: LV = lateral ventricle; III = third ventricle; IV = fourth ventricle. Image from Feuillet et al. 2007]

A 1980 paper is also famous for its report of a person with an IQ of 126 and an impressive educational record who also had extensive hydrocephalus. But no image, so not quite as famous.

The 2007 case has been cited as evidence to a) question dogma about the role of the brain in consciousness, b) speculate on how two minds might coalesce following mind uploading, and c) — of course — postulate the existence of extracorporeal information storage. There are also some great comments about this topic at Neuroskeptic.

As far as I can tell, volume loss in moderate hydrocephalus is initially and primarily due to compression of the white matter just adjacent to the ventricles. In severe hydrocephalus such as the case above, however, the grey matter and its associated neuropil must also be compressed.

Most of the cases with normal cognition appear to be due to congenital or developmental hydrocephalus, causing a slow change in brain structure. On the other hand, rapid changes in brain structure due to acute hydrocephalus, such as following trauma, are more likely to produce dramatic changes in cognition.

What can we take away from this? A couple of things:

  1. This is yet another example of the remarkable long-term plasticity of both the white matter and the grey matter of the brain. Note that this plasticity is not always a good thing, but yes, it exists and can be profound.
  2. It is evidence for hypotheses that hold that the relative positions of neurons and other brain cell types are the critical component of maintaining cognition and continuity of consciousness, as opposed to their absolute positions in space within the brain. An example of a theory in this supported class is Seung’s “you are your connectome” theory.
  3. Might it not make the extracellular space theories of memory a little less plausible?

Read Full Post »

A nice, basic study looks at how altering the location of inhibition onto a pyramidal cell neurite affects its spiking properties. The inhibition is meant to mimic the effects of cortical interneurons (e.g., basket cells, Martinotti cells), each of which projects onto pyramidal cells with its own stereotyped spatial distribution.

Elucidating these basic structure-function relationships will make synapse-level connectomics data more useful for determining the function of interneuron types.

Here’s just one of many examples from their extensive report. When they applied inhibition (GABA) to pyramidal cell dendrites farther from the soma than their excitatory signal (laser-based glutamate uncaging), which they called “distal inhibition”, it increased the threshold required for a spike to occur. But it didn’t change the amplitude of that spike when it did occur.

In contrast, when they applied inhibition to pyramidal cell dendrites between the excitatory signal and the soma, which they called “on-the-path inhibition”, it both slightly increased the depolarization threshold and reduced spike heights when spikes did occur. You can see all of this below.

As an example of how this could be used, let’s say that, on the basis of connectomics data, you discover that one set of cells sends projections to pyramidal cells that are systematically distal to the projections from a different set of cells.

What you can then say is that the former class of cells is acting to increase the depolarization threshold which the latter set of cells needs to exceed in order to induce those pyramidal cells to spike. Pretty cool.
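
To make that concrete, here is a minimal sketch, using hypothetical data structures of my own rather than anything from the paper, of how you might label inhibitory contacts from connectomics data, given the path distance of each synapse from the soma along the dendrite and the location of the excitatory input:

```python
# Minimal sketch (hypothetical data structures, not from Jadi et al.): classify each
# inhibitory contact relative to an excitatory input as "distal" (farther out than the
# excitation) or "on-the-path" (between the excitation and the soma).
from dataclasses import dataclass

@dataclass
class Synapse:
    cell_type: str        # presynaptic cell class, e.g. "Martinotti" or "basket"
    path_dist_um: float   # path distance from the soma along the dendrite

def classify_inhibition(inhibitory: list[Synapse], excit_dist_um: float) -> dict:
    """Split inhibitory contacts by position relative to an excitatory input."""
    labels = {}
    for syn in inhibitory:
        if syn.path_dist_um > excit_dist_um:
            labels.setdefault("distal", []).append(syn.cell_type)       # raises spike threshold
        else:
            labels.setdefault("on_the_path", []).append(syn.cell_type)  # raises threshold and shrinks spikes
    return labels

# Hypothetical example: excitation lands 120 um from the soma on one branch.
contacts = [Synapse("Martinotti", 180.0), Synapse("basket", 40.0), Synapse("basket", 90.0)]
print(classify_inhibition(contacts, excit_dist_um=120.0))
# {'distal': ['Martinotti'], 'on_the_path': ['basket', 'basket']}
```

In real data you would of course do this per dendritic branch, since “on the path” only makes sense relative to a particular route back to the soma.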

Reference

Jadi M, Polsky A, Schiller J, Mel BW (2012) Location-Dependent Effects of Inhibition on Local Spiking in Pyramidal Neuron Dendrites. PLoS Comput Biol 8(6): e1002550. doi:10.1371/journal.pcbi.1002550

Read Full Post »

1) Scale mismatch between the synapse-by-synapse level and the kind of description of the nervous system you actually want for a particular goal. He argues that the level at which the interesting neural computation happens might be the mesoscale. If the goal is to predict behavior, it might be enough to know the statistics of how nerve cells behave at the synapse level, rather than the full synapse-by-synapse wiring diagram.

2) Structure-function relationships are elusive in the nervous system. It’s harder to understand the information being propagated through the nervous system because its purpose is so much more nebulous than that of a typical organ, like the kidney.

3) Computation-substrate relationships are elusive in general. The structure of an information-processing machine doesn’t tell you what processing it performs. For example, you can analyze the structure of a computer’s microprocessor in the finest detail, and that will constrain the ways it can possibly act, but it won’t tell you which operating system it is actually running.

Here is a link to the video of Movshon’s opening remarks. He also mentions the good-if-true point that the set of connections of C. elegans is known, but our understanding of its physiology hasn’t “been materially enhanced” by having that connectome.

The rest of the debate was entertaining but not all that vitriolic. Movshon and Seung do not appear to disagree on all that much.

I personally lean towards Seung’s side. This is not so much due to the specifics (many of which can be successfully quibbled with), but rather due to the reference class of genomics, a set of technologies and methods which have proven to be very fruitful and well worth the investment.

Read Full Post »

During development, one axon typically comes to dominate each set of synaptic sites at a neuromuscular junction. This means that just one neuron controls each muscle fiber, allowing for specificity of motor function.

A nice application of laser irradiation allows researchers to intervene in this branch-elimination process in developing mice and study it directly.

What they found was that irradiating the axon currently occupying a site spurred a sequence of events (presumably involving molecular signaling) that led nearby axons (often smaller ones) to take over that site.

A 67-second, soundless video of one 1,000-step simulation of this process demonstrates the concepts behind this finding.

In the simulation, each circle represents a synaptic site, and each color an innervating axon. There are originally six colors.

At each of the 1,000 time steps, one axon is withdrawn from a randomly chosen site, and an adjacent one (possibly of the same color) takes it over.

The territory controlled by one axon increases (with negative acceleration) until it comes to dominate all the sites.
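
If you would rather read the rule than watch the video, here is a minimal re-implementation of that kind of takeover dynamics, my own sketch on a toy grid of sites rather than the authors’ code or the real NMJ geometry:

```python
# Toy re-implementation of the described simulation: synaptic sites on a grid, each
# initially owned by one of six axons; at every step a randomly chosen site loses its
# axon and is taken over by the axon occupying a randomly chosen neighboring site.
import random

def simulate(width=10, height=10, n_axons=6, steps=1000, seed=1):
    random.seed(seed)
    # each site starts out innervated by a randomly assigned axon ("color")
    grid = [[random.randrange(n_axons) for _ in range(width)] for _ in range(height)]
    for _ in range(steps):
        r, c = random.randrange(height), random.randrange(width)
        neighbors = [(r + dr, c + dc) for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                     if 0 <= r + dr < height and 0 <= c + dc < width]
        nr, nc = random.choice(neighbors)
        grid[r][c] = grid[nr][nc]   # an adjacent axon (possibly the same one) takes over
    counts = {}
    for row in grid:
        for axon in row:
            counts[axon] = counts.get(axon, 0) + 1
    return counts

print(simulate())  # after enough steps, one axon tends to own most or all of the sites
```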

Although it is possible that a qualitatively different process occurs for axonal inputs to nerve cells, odds are that a similar sort of evolution via competition helps drive CNS phenomena such as memory. (Because evolution tends to re-use useful processes.)

Reference

Turney SG, Lichtman JW (2012) Reversing the Outcome of Synapse Elimination at Developing Neuromuscular Junctions In Vivo: Evidence for Synaptic Competition and Its Mechanism. PLoS Biol 10(6): e1001352. doi:10.1371/journal.pbio.1001352

Read Full Post »

Trends in neurodevelopment are, at least to me, a bit counterintuitive. It is surprising that humans would have the most synaptic connections at ~8 months after birth rather than at, say, 18 years. But following the logic of synaptic pruning, this is the world we live in.

Using light and electron microscopy, a new study sheds some light on these processes. The authors provide quantitative measurements of the trade-off that comes with the large number of synapses in newborn mice: each individual axon and synapse is smaller.

They study the motor axons of neuromuscular junctions, but presumably the same patterns of redistribution generalize elsewhere in the nervous system. Some of their findings:

  • At birth, the main branch of the motor axons entering muscles had an average diameter of 1.48 ±0.03 μm, compared to 4.08 ±0.07 μm at 2 weeks old
  • In the cleidomastoid, at birth each motor axon innervated an average of 221 ±6.1 different muscle fibers, compared to 18.8 ±3.0 at 2 weeks old
  • At embryonic day 18, each terminal axon branch covered an average of 14.2 ±11.4% of the muscle’s acetylcholine receptors, compared to ~100% by single axons in adults

These results and others in the paper show that although there are fewer total synapses in later stages of development, each axon/synapse is bigger and more specific.

Reference

Tapia JC, et al. (2012) Pervasive synaptic branch removal in the mammalian neuromuscular system at birth. Neuron. PMID: 22681687.

Read Full Post »

In order to make serial section electron microscopy neurite reconstruction truly high-throughput, it will be essential to find a way to automate the image recognition component. Unfortunately, as I’ve written before, it’s quite difficult to segment and recognize patterns in electron microscopy images.

Inspired by other citizen science approaches, Sebastian Seung & co have come up with the possibly ingenious idea of enlisting the help of the everyman in this task. Their website is called Eyewire, and it challenges users to reconstruct retinal ganglion cells from electron microscopy images.

The cell membranes in the images are stained with a dye to create contrast. In theory, this contrast allows machines and humans to distinguish precisely where a neurite travels. In practice, the dye can leak into organelles, creating noise, or it can stain the cell membrane incompletely, creating artifacts.

Or the machine learning algorithm might simply miss part of the neurite because of some sort of bias, like overlooking boundaries that fall outside its field of view. This is where you come in. Your task is to move from slide to slide and pick out the regions that the algorithm misses.

I just opened up the game and in the first section I was assigned, I came upon this error. Here’s the first slide, which, as you can see, is completely filled in within its stained cell membrane boundaries:

And here’s the next image up in the stack:

As you can see (but for whatever reason the ML algorithm cannot), there is a hole in the second image that should be filled in. Eyewire allows you to do this yourself, by filling in the hole with light teal.
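
Conceptually, the correction is simple. Here is a toy version of it (my own sketch, not Eyewire’s actual code): given a binary mask of one slice, fill any region that is completely enclosed by pixels already assigned to the neurite.

```python
# Toy sketch of the kind of correction a player makes: fill interior holes in a
# binary segmentation mask of one slice.
import numpy as np
from scipy.ndimage import binary_fill_holes

# hypothetical 2D slice: 1 = pixels the algorithm assigned to the neurite, 0 = background
slice_mask = np.array([
    [0, 1, 1, 1, 0],
    [1, 1, 0, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 1, 0],
], dtype=bool)

filled = binary_fill_holes(slice_mask)   # the enclosed 0s become part of the neurite
print(filled.astype(int))
```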

Sometimes the missing holes are more consequential: filling one in can reveal whole undiscovered branches of a neurite.

In a very nice feature, the algorithm automatically propagates your changes to the rest of the image stacks, so that you don’t have to do so manually.

When you have enough people doing this, the results can be pretty interesting. For example, here is the current reconstructed version of cell #6:

How would you go about quantifying the branching neurites of this neuron, and what can you learn from its structure about how it works? These are the kinds of questions we’ll be able to address as we collect more of these reconstructions.
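
As one concrete answer to the first question, here is a sketch, on a hypothetical skeleton format rather than Eyewire’s actual data model, of two standard quantifications: counting branch points, and computing a Sholl profile, i.e. how many skeleton segments cross concentric spheres centered on the soma.

```python
# Sketch of two simple branching metrics on a parent-pointer neuron skeleton
# (hypothetical format): branch-point count and a Sholl profile.
import numpy as np

def branch_points(parents):
    """Nodes with more than one child in a parent-pointer skeleton tree."""
    child_counts = {}
    for node, parent in enumerate(parents):
        if parent >= 0:
            child_counts[parent] = child_counts.get(parent, 0) + 1
    return [n for n, c in child_counts.items() if c > 1]

def sholl(positions, parents, radii):
    """Number of parent-child segments crossing each sphere around node 0 (the soma)."""
    soma = positions[0]
    dists = np.linalg.norm(positions - soma, axis=1)
    crossings = []
    for r in radii:
        n = sum(1 for node, parent in enumerate(parents) if parent >= 0
                and min(dists[node], dists[parent]) <= r < max(dists[node], dists[parent]))
        crossings.append(n)
    return crossings

# Tiny hypothetical skeleton: soma (node 0) with one bifurcating neurite.
positions = np.array([[0, 0, 0], [10, 0, 0], [20, 5, 0], [20, -5, 0]], dtype=float)
parents = [-1, 0, 1, 1]          # node 1 branches into nodes 2 and 3
print(branch_points(parents))    # [1]
print(sholl(positions, parents, radii=[5, 15]))  # [1, 2]
```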

Sebastian Seung calls the game “meditative.” In the hours I’ve played so far (my account name is porejide), I have found it quite fun when it’s working fast and I can zoom through the stacks.

On the other hand, at times the internet connection at my house couldn’t really keep up, leading to some lag, which caused me to experience a sensation that I would not call meditative. But perhaps that’s just the fault of my internet connection.

One angle that I especially appreciate is the friendly competition between users. After you fill in a set of image stacks, the game rewards you with a number of points that is meant to be proportional to what you accomplished.

I have no small amount of pride in reporting that yesterday I played well enough (and for long enough) to reach #2 in points for the day, with 981 points, although xo3k was way ahead of me with 3450. As I was playing I could see user vienna717 was gaining ground on me quickly, which gave me the competitive juices I needed to go faster.

This is a great piece of infrastructure, and it has the potential to become even more fun if they gamify it further. For example, perhaps users could join teams and play for a glory greater than the self.

This all sounds dandy, but what if you don’t care about retinal ganglion cells? Frankly, I don’t care that much myself. To the best of my understanding, the main thrust of the game is not to build 3D maps of these ganglion cells, although those will be informative.

Rather, the idea is to provide a huge training set for machine learning algorithms, so that they can learn to better incorporate the insights of humans. This will scale much better than having humans do it, and will in theory allow us to reconstruct neural connections on much larger scales.

This, in turn, will allow us to rigorously test some of the most fundamental questions in neuroscience.

There is no guarantee that Seung & co’s approach will actually get us there, and even if it does, it will take a lot of time and effort. In the meantime, I’ll see you on the leaderboard!

Read Full Post »
