Archive for the ‘Theoretical Neuroscience’ Category

As one of my manifestations of intellectual contrarianism, I like to collect historical examples of times when a largish group of scientists thought that a complicated theory was the best way to explain a set of facts, but then a simpler explanation turned out to be much better.

I especially like examples of this in neuroscience, where people are wont to postulate complicated theories about the way that we think.

There is perhaps no better example than the debate between the reticular theory of the nervous system and the neuron doctrine.

The reticular theory postulated a form of exceptionalism in the nervous system: that axons and dendrites seen on light microscopy were not attached to cells but were in fact a separate, non-cellular entity, forming their own protoplasmic network.

The neuron doctrine is, at least in hindsight, much simpler, postulating that axons and dendrites are extensions of cells, just as in the rest of biology.


Cajal’s drawing of neurons in the chick cerebellum, from Wikipedia

The reticular theory had many proponents, including Camillo Golgi and Franz Nissl, and held sway from roughly 1840 to 1935. It’s easy to dismiss it now, but it was a reasonable idea at the time.

Now, though, it’s a good example of how theories that postulate that the brain is extremely complicated and different from the rest of biology do not have a good track record.

Read Full Post »

In investigating a crime, to pinpoint the culprit, the saying goes, “follow the money.” In science, the saying is (or at least, should be), “follow the ATP.”

A six-month-old paper acts as a nice review on this topic. The authors stratify tissue types based on the degree of myelination (none, developing, and adult), and break down where the ATP goes:


  • action potentials: Na+/K+-ATPase activity restoring the gradients run down by voltage-gated Na+ and K+ currents
  • synapses: postsynaptic membrane currents, presynaptic calcium entry, and neurotransmitter/vesicle cycling
  • oligodendrocyte resting potentials: continuous Na+/K+ pumping
  • housekeeping: protein/lipid synthesis and intracellular trafficking of molecules/organelles

That’s way more than I would have expected on housekeeping. But by far their most surprising finding is that the cost of maintaining the resting potentials in oligodendrocytes is so large that myelination doesn’t necessarily save energy on net: whether it does depends on the firing rate of the neuron. That’s a heterodox bomb.
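The break-even logic here is easy to sketch. All the numbers below are invented purely for illustration; the paper's actual estimates depend on axon geometry, pump densities, and more:

```python
def net_myelination_saving(rate_hz, saving_per_spike=1.0, resting_cost_per_s=50.0):
    """Back-of-envelope sketch of the paper's point (all numbers invented):
    myelination saves some ATP per action potential but pays a continuous
    resting-potential cost for the oligodendrocyte, so the net saving is
    positive only above a break-even firing rate."""
    return rate_hz * saving_per_spike - resting_cost_per_s
```

With these made-up numbers the break-even rate is 50 Hz: below it, the always-on oligodendrocyte cost exceeds the per-spike savings and myelination costs more ATP than it saves.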

I suppose that myelination not leading to energy savings is weak evidence in favor of it doing something else besides speeding up spikes, like allowing for plasticity.


Harris JJ, Attwell D (2012). The Energetics of CNS White Matter. Journal of Neuroscience. doi:10.1523/JNEUROSCI.3430-11.2012

Read Full Post »

During development, one axon typically comes to dominate each set of synaptic sites at a neuromuscular junction. This means that just one neuron controls each muscle fiber, allowing for specificity of motor function.

A nice application of laser irradiation allows researchers to intervene in the formation of axonal branches in developing mice to study this.

What they found was that irradiating the axon currently occupying the site spurred a sequence of events (presumably involving molecular signaling) that led nearby axons (often smaller ones) to take it over.

A 67-second, soundless video of one 1,000-step simulation of this process demonstrates the concepts behind this finding.

In the simulation, each circle represents a synaptic site, and each color an innervating axon. There are originally six colors.

At each of the 1,000 time steps, one axon is withdrawn from a randomly chosen site, and an adjacent one (possibly of the same color) takes it over.

The territory controlled by one axon increases (with negative acceleration) until it comes to dominate all the sites.
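The simulation rule described above is essentially a voter model, and a minimal version is easy to write. This is my reconstruction from the description, not the authors' code; the grid layout and 4-neighborhood are assumptions:

```python
import random

def simulate_competition(width=10, height=10, n_axons=6, steps=1000, seed=0):
    """Voter-model sketch of axonal competition at synaptic sites.
    Each grid cell is a synaptic site; its value is the id ("color")
    of the axon currently occupying it."""
    rng = random.Random(seed)
    grid = [[rng.randrange(n_axons) for _ in range(width)] for _ in range(height)]
    for _ in range(steps):
        # withdraw the axon from a randomly chosen site...
        r, c = rng.randrange(height), rng.randrange(width)
        # ...and let the axon at a random adjacent site (possibly the
        # same color) take it over
        neighbors = [(r + dr, c + dc)
                     for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= r + dr < height and 0 <= c + dc < width]
        nr, nc = rng.choice(neighbors)
        grid[r][c] = grid[nr][nc]
    return grid
```

Because takeovers only copy colors that already exist, the number of surviving axons can only shrink; run long enough, one axon comes to dominate every site, mirroring the monotonic territory growth seen in the video.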

Although it is possible that a qualitatively different process occurs for axonal inputs to nerve cells, odds are that a similar sort of evolution via competition helps drive CNS phenomena such as memory. (Because evolution tends to re-use useful processes.)


Turney SG, Lichtman JW (2012) Reversing the Outcome of Synapse Elimination at Developing Neuromuscular Junctions In Vivo: Evidence for Synaptic Competition and Its Mechanism. PLoS Biol 10(6): e1001352. doi:10.1371/journal.pbio.1001352

Read Full Post »

It is quite common in biology (and neuroscience, as a special case) for researchers to employ differential gene expression analysis, which produces lists of up- and down-regulated genes between a given set of conditions. And as Ideker and Krogan point out in their Jan ’12 paper, this principle has already been extended to differential protein expression and post-translational modifications.

The authors go on to discuss how this approach has also been applied, with less fanfare, to differential interaction network analysis. In this paradigm, if an interaction between nodes (e.g., protein concentrations) in the network is present above noise in one condition, but not another, then they would call that a differential interaction.
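A minimal sketch of this paradigm (my illustration, not the authors' pipeline; the fixed noise threshold stands in for a real significance test):

```python
import numpy as np

def differential_map(net1, net2, noise=0.1):
    """Sketch of a differential interaction map. An edge 'exists' in a
    condition if its weight exceeds the noise floor; edges present in
    exactly one condition are the differential interactions, reported
    with their signed change."""
    present1 = np.abs(net1) > noise
    present2 = np.abs(net2) > noise
    differential = present1 ^ present2  # XOR: in one condition but not the other
    return (net2 - net1) * differential
```

Note that an interaction present in both conditions cancels to zero here, which is the point the figure caption makes about persistent 'housekeeping' interactions being removed.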

"Static genetic interaction maps are measured in each of two conditions (left)... Condition 1 is subtracted from condition 2 to create a differential interaction map (right)... In the differential map, weak but dynamic interactions (dotted edges) are magnified and persistent ‘housekeeping’ interactions are removed (bottom right)." ; doi:10.1038/msb.2011.99

Very similar ideas can be applied to the study of neuronal network function. If we can say that an interaction between neuronal “nodes” (which could be, depending upon the scale, neurons, cortical columns, or brain regions) is differentially present between healthy and disordered states, then it suggests that that interaction is somehow involved with the disorder.

This is not a perfect paradigm, in part because the network “connections” can be less representative of the physical reality than we’d like, but I anticipate that we have much to mine from it about the operations of the nervous system.


Ideker T, Krogan NJ (2012). Differential network biology. Molecular Systems Biology. doi:10.1038/msb.2011.99

Read Full Post »

…[C]onsider the example … regarding the significant resources and time being put into deciphering the structural connectome of the brain. This massive amount of accumulating data is qualitative, and although everyone agrees it is important and necessary to have it in order to ultimately understand the dynamics of the brain that emerges from the structural substrate represented by the connectome, it is not at all clear at present how to achieve this. Although there have been some initial attempts at using this data in quantitative analyses they are essentially mostly descriptive and offer little insights into how the brain actually works. A reductionist’s approach to studying the brain, no matter how much we learn and how much we know about the parts that make it up at any scale, will by itself never provide an understanding of the dynamics of brain function, which necessarily requires a quantitative, i.e., mathematical and physical, context.

That’s Gabriel Silva, more here, interesting throughout.


Silva GA (2011) The need for the emergence of mathematical neuroscience: beyond computation and simulation. Front. Comput. Neurosci. 5:51. doi: 10.3389/fncom.2011.00051

Read Full Post »

Certain visual inputs can be consistently interpreted in more than one way. One classic example of this is the young-woman/old-woman puzzle:

"Boring figure", via Wikipedia user Bryan Derksen

An important finding related to these types of illusions is that we don’t perceive both possibilities at once, but rather switch spontaneously between them.

Buesing et al.’s recent study formalized a network model of spiking neurons, equivalent to sampling from a probability distribution, and used it on a quantifiable model of such visual ambiguity, binocular rivalry.

This allowed them to show how spontaneous switches between perceptual states can be caused by a sampling process which produces successively correlated samples.

In particular, they constructed a computational model with 217 neurons, and assigned each neuron a tuning curve with a preferred orientation such that the full set of orientations covered the entire 180° interval.

They then ran a simulation of these neurons according to their rules for spiking and refractoriness, computed the joint probability distribution, projected it into 2-D, and drew the endpoints of the projections as dots, shown below. They took samples every millisecond for 20 seconds of biological time.

the "prior distribution"; each colored dot is a sampled network state; the relative orientation of each dot corresponds to the primary orientation of the perception at that time point; a dot's distance from the origin encodes the perception's "strength"; doi:10.1371/journal.pcbi.1002211.g004 part d

Note that there is a fairly homogeneous distribution across the whole orientation spectrum, indicating a lack of preference for any one direction. You might think of the above as resting-state activity, as there was nothing to mimic external input to the system.

In order to add this input, the authors did another simulation in which they specified the states of a few of the neurons, “clamping” them to one value. In particular, they clamped two neurons with orientation preference ~45° to 1 (“firing”), two neurons with preference ~135° to 1, and four cells with preference ~90° to 0 (“not firing”).

Since the neurons set to firing are at opposite sides of the semicircle, this set-up mimics an ambiguous visual state. They then ran a simulation with the remaining 209 neurons as above, with the results shown below.

the "posterior distribution"; the black line shows the evolution of the network states z for 500 ms during a switch in perceptual state; doi:10.1371/journal.pcbi.1002211.g004 part e

As you can see, in this case the network samples preferentially from states that correspond to the clamped positions at either ~45° or ~135°. The black trace indicates that the network tends to remain in one high-probability state for a while and then shift rapidly to the other.

As compared to the above “prior” distribution, this “posterior” distribution has greatly reduced variance.
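A toy version of the clamping experiment can be sketched as a Gibbs sampler over binary neurons. This is entirely my construction and far simpler than the paper's model; the weights, bias, and clamping radius are invented, and how cleanly the switching emerges depends on those parameters:

```python
import math
import random

def gibbs_bistable(n=16, steps=2000, seed=1):
    """Binary neurons with preferred orientations spread over 180 degrees;
    similarly tuned neurons excite each other and orthogonally tuned ones
    inhibit, so clamping units near 45 and 135 degrees sets up two
    competing high-probability states."""
    rng = random.Random(seed)
    prefs = [i * 180.0 / n for i in range(n)]

    def w(i, j):
        # cosine similarity with period 180 degrees: +2 for identical
        # orientations, -2 for orthogonal ones
        return 2.0 * math.cos(math.radians(2.0 * (prefs[i] - prefs[j])))

    # mimic the paper's clamping: units near 45/135 deg on, near 90 deg off
    clamped = {}
    for i in range(n):
        if min(abs(prefs[i] - 45), abs(prefs[i] - 135)) < 10:
            clamped[i] = 1
        elif abs(prefs[i] - 90) < 10:
            clamped[i] = 0

    state = [clamped.get(i, 0) for i in range(n)]
    samples = []
    for _ in range(steps):
        i = rng.randrange(n)
        if i in clamped:
            continue  # clamped units never change
        field = sum(w(i, j) * state[j] for j in range(n) if j != i) - 1.0
        p_on = 1.0 / (1.0 + math.exp(-field))
        state[i] = 1 if rng.random() < p_on else 0
        samples.append(list(state))
    return prefs, clamped, samples
```

Because each Gibbs update flips at most one neuron, successive samples are strongly correlated, which is exactly the property the paper uses to explain why the network lingers in one perceptual state before switching.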

Although the ability of their network to explain perceptual bistability is fascinating, it is perhaps most interesting due to its broader implications for how cortical regions might be able to switch between cognitive states via sampling.


Buesing L, Bill J, Nessler B, Maass W (2011) Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons. PLoS Comput Biol 7(11): e1002211. doi:10.1371/journal.pcbi.1002211

Read Full Post »

Kim et al. were interested in simulating the compartmentalization of signaling molecules involved in PKA-dependent LTP in the hippocampus. They wanted to know: does PKA need to be anchored near its target molecules, or near a source of activator molecules? They varied the location of PKA and one of its activator molecules (adenylyl cyclase) to try to determine this. It turns out that placing PKA near its activator molecules (i.e., a source of cAMP) leads to more downstream activity than placing it near its target molecules.

Also of interest is a very cool model of a dendrite with one spine that they used for their stochastic simulations, which is below and could help you visualize these structures:

dotted lines are subvolumes used in the simulation; Ca influx is via voltage dependent calcium channels in the dendrite, and via NMDA receptors in the spine's post-synaptic density (PSD); ignore "C"; doi:10.1371/journal.pcbi.1002084

Since molecular simulation is computationally expensive, they only allow diffusion in 2-D in the dendrite (notice the lack of vertical dotted lines in the cross section) and 1-D in the spine. Further improvements to molecular simulation or computational power should one day make this sort of simplification unnecessary.
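The computational saving from dropping dimensions is easy to see (illustrative numbers only, not the paper's actual discretization):

```python
def n_subvolumes(length_um, dx_um, dims):
    """A d-dimensional discretization of an L-wide domain at resolution dx
    needs (L/dx)**d subvolumes, so restricting diffusion to fewer
    dimensions shrinks the state space a stochastic solver must track."""
    return int(round(length_um / dx_um)) ** dims
```

For a hypothetical 10 µm domain at 0.1 µm resolution, 1-D needs 100 subvolumes, 2-D needs 10,000, and 3-D needs 1,000,000; each dropped dimension is a 100-fold saving here.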


Kim M, Park AJ, Havekes R, Chay A, Guercio LA, et al. (2011) Colocalization of Protein Kinase A with Adenylyl Cyclase Enhances Protein Kinase A Activity during Induction of Long-Lasting Long-Term-Potentiation. PLoS Comput Biol 7(6): e1002084. doi:10.1371/journal.pcbi.1002084

Read Full Post »

Older Posts »