The history of neuroscience in general, and myelination in particular, is replete with comparisons between brains and computers.

For example, the first suggested function of myelin, proposed in the 1850s, was as an electrical insulator, by analogy to the insulated electric wires that had only recently been developed.

In today’s high performance computers (“supercomputers”), one of the big bottlenecks in computer processing speed is communication between processors and memory units.

For example, one measure of computer communication speed is traversed edges per second (TEPS), which quantifies how quickly data can be transferred between the nodes of a graph.

The standard benchmark for TEPS is Graph500, which measures computer performance on a breadth-first search of a large graph and can require up to 1.1 PB of RAM. As of June 2016, these were the known supercomputers with the most TEPS:

[Table: top-ranked supercomputers by TEPS, Graph500 list, June 2016]

I’m pointing all of this out to give some concrete context about TEPS. Here’s the link to neuroscience: as AI Impacts discussed a couple of years ago, it seems that TEPS is a good way to quantify how fast brains can operate.
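To make the metric itself concrete, here's a minimal sketch of how TEPS could be computed for a toy breadth-first search in Python. The graph, the function name, and the edge-counting convention are my own illustrative choices, not the Graph500 reference implementation:

```python
import time
from collections import deque

def bfs_teps(adj, source):
    """Breadth-first search from `source` over an adjacency-list graph `adj`.
    Returns (edges_traversed, elapsed_seconds, TEPS)."""
    visited = {source}
    queue = deque([source])
    edges = 0
    start = time.perf_counter()
    while queue:
        node = queue.popleft()
        for neighbor in adj[node]:
            edges += 1  # every adjacency entry inspected counts as a traversal
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    elapsed = time.perf_counter() - start
    return edges, elapsed, edges / elapsed

# Toy undirected graph: a ring of 6 nodes plus one chord (2-5)
adj = {0: [1, 5], 1: [0, 2], 2: [1, 3, 5], 3: [2, 4], 4: [3, 5], 5: [4, 0, 2]}
edges, elapsed, teps = bfs_teps(adj, 0)
print(edges)  # 14: each of the 7 undirected edges is inspected from both ends
```

Graph500 does essentially this, but on graphs with billions of edges, which is why inter-node communication bandwidth, rather than raw arithmetic, dominates the score.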

The best evidence for this is the growing body of data that memory and cognition require recurrent communication loops both within and between brain regions. For example, stereotyped electrical patterns with functional correlates can be seen in hippocampal-cortical and cortical-hippocampal-cortical circuits.

Here’s my point: we know that myelin is critical for regulating the speed at which between-brain region communication occurs. So, what we have learned about the importance of communication between processors in computers suggests that the degree of myelination is probably more important to human cognition than is commonly recognized. This in turn suggests:

  1. An explanation for why human cognition appears to be better in some ways than that of other primates: human myelination is much more delayed, allowing for more plasticity during development. Personally, I expect that this explains more of the human-primate differences in cognition than differences in neurons do (granted, I'm not an expert in this field!).
  2. Part of an explanation for why de- and dys-myelinating deficits, even when they are small, can affect cognition in profound ways.


A key question in the treatment of depression is: what is the probability that a given treatment will lead to a sustained remission of symptoms?

One of the largest, most famous studies to address this is called the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial.

From what I understand, the researchers designed the trial to mimic what might happen in a realistic clinical setting.

A patient diagnosed with major depressive disorder (MDD) would first be started on a first-line drug (citalopram). If that didn't work (because the side effects were intolerable, or the symptoms persisted), then another drug would be chosen, and so on. Here is their algorithm:

[Figure: STAR*D sequenced treatment algorithm]

They used a unique randomization strategy, as participants in Level 2 could choose to opt out of the randomization blocks that entailed a) switching off of citalopram, b) augmenting citalopram with a different drug, and/or c) using cognitive therapy.

From the numbers above, you can see that the most common option was for participants to opt out of cognitive therapy. This probably reflects, in part, a selection bias: people willing to enter a citalopram-based trial at the first level were presumably already inclined toward medication.

One of the main outcomes was the remission rates from depression (defined as QIDS-SR16 score of <= 5) at the various stages:

  • For step 1, the remission rate for those not treated for their current episode was 43%, vs 36% for those already treated for their current episodes
  • For step 2, the remission rate was 30%
  • For step 3, the remission rate was 14%
  • For step 4, the remission rate was 13%
  • Assuming that none of the participants exited the study and all stayed in treatment, the theoretical remission rate after a maximum of four treatment steps was 67%
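The 67% figure is just the per-step rates compounded. Here's a quick sketch, assuming approximate per-step QIDS-SR16 remission rates of ~37%, 31%, 14%, and 13% (the exact step-1 figure differs slightly by prior treatment status, as noted above):

```python
# Approximate per-step QIDS-SR16 remission rates (illustrative values)
step_rates = [0.37, 0.31, 0.14, 0.13]

not_remitted = 1.0
for rate in step_rates:
    not_remitted *= (1 - rate)  # fraction still depressed after this step

cumulative = 1 - not_remitted
print(round(cumulative, 2))  # ~0.67
```

That is, the theoretical cumulative rate assumes everyone who fails a step goes on to attempt the next one, which is exactly why real-world dropout makes it an upper bound.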

They also looked at the 12-month follow up of these same participants. Of those, the proportion without a relapse (defined as QIDS-SR16 >= 11) was ~50% in the participants who had a remission of symptoms following step 1, and ~ 33% in the participants who had a remission of symptoms following step 2.

This data set has been analyzed in many other ways. For example, after unsuccessful treatment with the SSRI citalopram, there was no difference in the remission rates for bupropion, sertraline, and venlafaxine. On the other hand, augmenting citalopram with bupropion led to a greater reduction in symptoms and fewer side effects compared with augmenting with buspirone.


What is the mechanism by which dendritic spines can change structure over a rapid time course? Though this may seem esoteric, it is probably how memories form and is thus utterly essential to neuroscience. Two new papers present some relevant data.


Two-photon imaging data of dendritic spines, from Wikipedia User:Tmhoogland

First, as has been shown several times before, Harward et al show that glutamate uncaging at single dendritic spines leads to a rapid increase in spine volume within ~1 minute, which then decays over the following several minutes:


Harward et al; doi:10.1038/nature19766

Along the same time course as the dendritic spine volume increase, these authors also detected TrkB activation (using their amazing new FRET sensor), which was largely in the activated spine but also traveled to nearby spines and the dendrite itself:


Harward et al; doi:10.1038/nature19766


In what is to me probably their most compelling experiment, they show that hippocampal slices without BDNF have highly impaired volume changes in response to glutamate, and that this can be rescued by the addition of BDNF:


Harward et al; doi:10.1038/nature19766

They also present several lines of evidence that this is an autocrine mechanism, with BDNF released from spines by exosomes and binding to TrkB receptors on the same spine.

In a separate article, to which most of the same authors contributed, they show that another protein, Rac1, is activated (i.e., GTP-bound, leading to fluorescence) very quickly following glutamate uncaging at single spines:



Hedrick et al; doi:10.1038/nature19784

They also show that a similar rapid course of activation following glutamate uncaging occurs for the other Rho GTPases Cdc42 and RhoA.

Interestingly, they also show that these proteins mediate synaptic crosstalk, whereby the activation of one dendritic spine causes nearby dendritic spines to increase in strength. After several more experiments, here is their diagram explaining this mechanism:


Hedrick et al; doi:10.1038/nature19784

Overall I find their data trustworthy and important. The most interesting subsequent question for me is whether endogenous amounts of CaMKII, BDNF, TrkB, and Rho GTPase signaling components (e.g., Cdc42, RhoA, Rac1) vary across dendritic spines, and whether this helps mediate variability in spine-specific and spine neighbor-specific degrees of plasticity. My guess is that they do, but AFAICT it remains to be shown.

If it is true that spines, dendrites, and neurons vary in the expression and distribution of these proteins, then any attempt to build models of the brain, as well as models of individual brains with any sort of dynamic component, probably needs to measure and model the local densities of these protein mediators of plasticity.

CSF- and serum-borne autoantibodies against brain proteins are known to cause a wide range of cognitive sequelae due to autoimmune attack. For example, when antibodies are raised against the protein LGI1, which is thought to modulate voltage-gated K+ channels, a common result is encephalopathy.

As a result, LGI1 is often included in autoimmune panels, along with several other proteins including CASPR2, NMDA and AMPA receptor subunits, GABA-B receptors, GAD65, CRMP-5, ANNA-1, and ANNA-2.

Recently, Ariño et al presented a summary of 76 patients with LGI1-associated cognitive deterioration, 13% of whom had forms of cognitive deterioration distinct from limbic encephalitis. At 2 years, their major outcomes were:

  • 35% fully recovered
  • 35% regained independence but not to baseline levels
  • 23% required assistance due to cognitive defects
  • 6% died

In mice, LGI1 is primarily expressed (at the RNA level) in neurons, while in humans it is expressed in both mature astrocytes and neurons (data from here and here); e.g., in the Darmanis et al 2015 human data set it is actually expressed more highly in astrocytes:


It might be interesting to see whether encephalopathies are generally only caused by autoantibodies against proteins expressed in neurons, or whether antibodies against proteins expressed in other cell types can also lead to a similar clinical outcome.


It has been well established for over a decade that synaptic vesicle release farther away from a particular receptor cluster is associated with a decreased probability of receptors reaching the open state, and therefore a decreased postsynaptic current (at least at glutamatergic synapses).


Franks et al 2003; PMC2944019
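To illustrate the distance effect quantitatively, here's a toy model of my own (the Gaussian falloff, the length constant, and the peak open probability are illustrative assumptions, not values from Franks et al): peak glutamate concentration, and hence receptor opening probability, drops off with lateral distance between the fusion site and the receptor.

```python
import math

def relative_open_probability(distance_nm, sigma_nm=75.0, p_max=0.42):
    """Toy model: peak glutamate concentration is assumed to fall off as a
    Gaussian with lateral distance (nm) from the vesicle fusion site, and
    receptor opening probability is taken as proportional to it.
    sigma_nm and p_max are illustrative assumptions, not fitted values."""
    return p_max * math.exp(-(distance_nm ** 2) / (2 * sigma_nm ** 2))

# Opening probability falls steeply with release-site/receptor misalignment
for d in (0, 50, 100, 200):
    print(d, round(relative_open_probability(d), 3))
```

Even under these simplified assumptions, a misalignment of a couple hundred nanometers nearly abolishes the response, which is what makes the nanoscale alignment described below so consequential for synaptic strength.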

A few months ago Tang et al published an article in which they reported live imaging of cultured rat hippocampal neurons to investigate this.

They showed that critical vesicle priming and fusion proteins are preferentially found near to one another within presynaptic active zones. Moreover, these regions were associated with higher levels of postsynaptic receptors and scaffolding proteins.

On this basis, the authors suggest that there are trans-synaptic columns, which they call "nanocolumns" (I employ scare quotes here quite intentionally, because I don't prefix any word with nano- until I am absolutely forced to).

They have a nice YouTube video visualizing this arrangement at a synapse:

They propose that this arrangement allows the densest presynaptic active zones to match the densest postsynaptic receptor densities, maximizing the efficiency, and therefore strength, of the synapse.

In their most elegant validation experiment of this model, they inhibited synapses by activating postsynaptic NMDA receptors and found that this led to a decreased correspondence between synaptic active zones and postsynaptic densities (PSDs).


Tang et al 2016; doi:10.1038/nature19058

As you can see, the time-scale of the effect of NMDA receptor activation was pretty fast, at only 5 mins. My guess is that this effect is so fast because active positive regulation maintains the column organization, and without it, proteins rapidly diffuse away.

It is almost certain that synaptic cleft adhesion systems or retrograde signaling mechanisms regulate synaptic column organization, and the race is on to identify them and precisely how they work.

In the meantime, Tang et al’s work is a great example of synaptic strength variability that is dependent on protein localization, and should inform our models of how the brain works.

Interesting article from Gizowski et al last week on the role of the suprachiasmatic nucleus in regulating pre-sleep anticipatory thirst via the OVLT in mice.

In med school we had to memorize the major role of both of these regions, and my mnemonics were that the suprachiasmatic nucleus makes you charismatic (because you are well-rested), while the OVLT controls how much ovaltine you should drink.

Rodents are known to increase their fluid intake 1-2 hours before sleeping, which is called anticipatory thirst because the rodents want to make sure that they will have enough water in them to make it through the night (presumably without a dry mouth!).

Gizowski et al’s definitive experiment to show the interplay between these brain regions involved expressing two types of channelrhodopsins in vasopressin-expressing neurons in the suprachiasmatic nucleus in two groups of mice.

This manipulation allowed them to shine blue light in some mice to cause activation of the SCN -> OVLT pathway (G below), and yellow light in others to inhibit this pathway (H below). Here’s what they found:



As you can see, blue light in the blue-responsive mice caused increased water intake PRIOR to the normal anticipatory thirst at levels ~ 3x above baseline.

On the other hand, yellow light in the yellow-responsive mice caused decreased water intake at the expected anticipatory thirst to an insane degree. Basically, responsive mice didn't drink at all when the researchers shined yellow light and inactivated the SCN -> OVLT pathway.

This effect of yellow light seems too strong to me. Shouldn’t it just be stopping the increase that is typically seen pre-sleep? Instead, it seems to be completely eliminating drinking behavior entirely.

One way to explain this finding is that the SCN -> OVLT pathway might be active at times other than pre-sleep, and is just MORE active in the hour or two before sleep.

Despite a pretty extensive search, I can’t figure out whether humans have also been shown to have increased thirst prior to sleep.

On one hand, there are plenty of anecdotal reports of this, while on the other hand, rodents and humans have pretty different life histories. Plus, the authors probably would’ve mentioned it if there was good evidence of pre-sleep anticipatory thirst in humans.

Even if humans don’t have pre-sleep anticipatory thirst, this is still quite an interesting study, as this system is likely a good model of how suprachiasmatic nucleus axons project (with vasopressin-producing neurons?) to several other brain regions to control activities that are regulated by the perceived time of day.

There are three types of experiments one can perform in neuroscience: lesions, stimulations, and recordings. Obviously, a particular study can use more than one of them.


The most basic natural experiment that one can harness in neuroscience is to study lesions, due to problems in development, disease, and/or trauma.

Of these, perhaps the most striking lesions come from patients with severe hydrocephalus. Hydrocephalus is the accumulation of cerebrospinal fluid in the brain that causes ventricles to enlarge and compress the surrounding brain tissue.

A 2007 case study by Feuillet et al. of a 44-year-old man with an IQ of 75 and a civil-servant career is probably the most famous, since they provide a nice set of brain scans of the person:


LV = lateral ventricle; III = third ventricle; IV = fourth ventricle; image from Feuillet et al. 2007

A 1980 paper is also famous for its report of a person with an IQ of 126 and an impressive educational record who also had extensive hydrocephalus. But no image, so not quite as famous.

The 2007 case has been cited as evidence to a) question dogma about the role of the brain in consciousness, b) speculate on how two minds might coalesce following mind uploading, and c) — of course — postulate the existence of extracorporeal information storage. There are also some great comments about this topic at Neuroskeptic.

As far as I can tell, volume loss in moderate hydrocephalus is initially and primarily due to compression of white matter just adjacent to ventricles. On the other hand, in severe hydrocephalus such as the above, the grey matter and associated neuropil also must be compressed.

Most of the cases with normal cognition appear to be due to congenital or developmental hydrocephalus, causing a slow change in brain structure. On the other hand, rapid changes in brain structure due to acute hydrocephalus, such as following trauma, are more likely to lead to more dramatic changes in cognition.

What can we take away from this? A couple of things:

  1. This is yet another example of the remarkable long-term plasticity of both the white matter and the grey matter of the brain. Note that this plasticity is not always a good thing, but yes, it exists and can be profound.
  2. It is evidence for hypotheses that the relative positions of neurons and other brain cell types, rather than their absolute positions in space within the brain, are the critical component of maintaining cognition and continuity of consciousness. An example of a theory in the supported class is Seung's "you are your connectome" theory.
  3. Might it not make the extracellular space theories of memory a little less plausible?