Archive for the ‘Theoretical Neuroscience’ Category

As one of my manifestations of intellectual contrarianism, I like to collect historical examples of times when a largish group of scientists thought that a complicated theory was the best way to explain a set of facts, but then a simpler explanation turned out to be much better.

I especially like examples of this in neuroscience, where people are wont to postulate complicated theories about the way that we think.

There is perhaps no better example than the debate between the reticular theory of the nervous system and the neuron doctrine.

The reticular theory postulated a form of exceptionalism in the nervous system: that axons and dendrites seen on light microscopy were not attached to cells but were in fact a separate, non-cellular entity, forming their own protoplasmic network.

The neuron doctrine is, at least in hindsight, much simpler, postulating that axons and dendrites are extensions of cells, as occurs in other types of biology.


Cajal’s drawing of neurons in the chick cerebellum, from Wikipedia

The reticular theory had many proponents, including Camillo Golgi and Franz Nissl, and lasted from roughly 1840 to 1935. It’s easy to dismiss it now, but it was a reasonable idea at the time.

Now, though, it’s a good example of how theories that postulate that the brain is extremely complicated and different from the rest of biology do not have a good track record.

Read Full Post »

In investigating a crime, to pinpoint the culprit, the saying goes, “follow the money.” In science, the saying is (or at least, should be), “follow the ATP.”

A six-month-old paper acts as a nice review on this topic. The authors stratify tissue types based on the degree of myelination (none, developing, and adult), and break down where the brain’s ATP goes:


  • action potential costs come from the Na+/K+-ATPases that restore ion gradients after voltage-gated currents
  • synaptic costs come from postsynaptic membrane currents, presynaptic calcium entry, and neurotransmitter/vesicle cycling
  • oligodendrocyte resting potentials are maintained by continuously running Na+/K+ pumps
  • housekeeping costs come from protein/lipid synthesis and intracellular trafficking of molecules/organelles

That’s way more than I would have expected on housekeeping. But by far their most surprising finding is that the cost of maintaining the resting potentials in oligodendrocytes is so large that myelination doesn’t usually save energy on net–it depends on the firing rate of the neuron. That’s a heterodox bomb.

I suppose that myelination not leading to energy saving is weak evidence in favor of it doing something else, aside from speeding up spikes. Like, allowing for plasticity.


Harris JJ, Attwell D (2012) The Energetics of CNS White Matter. Journal of Neuroscience. doi:10.1523/JNEUROSCI.3430-11.2012

Read Full Post »

During development, one axon typically comes to dominate each set of synaptic sites at a neuromuscular junction. This means that just one neuron controls each muscle fiber, allowing for specificity of motor function.

A nice application of laser irradiation allows researchers to intervene in the formation of axonal branches in developing mice to study this.

What they found was that irradiating the axon currently occupying the site spurred a sequence of events (presumably involving molecular signaling) that led nearby axons (often smaller ones) to take it over.

A 67-second, soundless video of one 1,000-step simulation of this process demonstrates the concepts behind this finding.

In the simulation, each circle represents a synaptic site, and each color an innervating axon. There are originally six colors.

At each of the 1,000 time steps, one axon is withdrawn from a randomly chosen site, and an adjacent one (possibly of the same color) takes it over.

The territory controlled by one axon increases (with negative acceleration) until it comes to dominate all the sites.
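The takeover rule is simple enough that a toy version fits in a few lines of Python. This is my own voter-model-style sketch, not the authors’ code: I assume the synaptic sites sit on a ring and that the invading axon always comes from an adjacent site.

```python
import random

def simulate_competition(n_sites=100, n_axons=6, n_steps=1000, seed=1):
    """Toy synapse-elimination simulation: each site on a ring is
    innervated by one of n_axons axons (the 'colors'). At each step,
    one randomly chosen site loses its axon and is taken over by the
    axon occupying an adjacent site (possibly the same color)."""
    rng = random.Random(seed)
    sites = [rng.randrange(n_axons) for _ in range(n_sites)]
    history = [sites.copy()]
    for _ in range(n_steps):
        i = rng.randrange(n_sites)
        neighbor = (i + rng.choice([-1, 1])) % n_sites  # adjacent site
        sites[i] = sites[neighbor]                      # takeover
        history.append(sites.copy())
    return history

history = simulate_competition()
print("axons remaining:", len(set(history[-1])))
```

Once an axon loses its last site, its color can never reappear, so with enough steps the simulation fixates: a single axon comes to control every site, mirroring the dominance described above.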

Although it is possible that a qualitatively different process occurs for axonal inputs to nerve cells, odds are that a similar sort of evolution via competition helps drive CNS phenomena such as memory. (Because evolution tends to re-use useful processes.)


Turney SG, Lichtman JW (2012) Reversing the Outcome of Synapse Elimination at Developing Neuromuscular Junctions In Vivo: Evidence for Synaptic Competition and Its Mechanism. PLoS Biol 10(6): e1001352. doi:10.1371/journal.pbio.1001352

Read Full Post »

It is quite common in biology (and neuroscience, as a special case) for researchers to employ differential gene expression analysis, which produces lists of up- and down-regulated genes between a given set of conditions. And as Ideker and Krogan point out in their Jan ’12 paper, this principle has already been extended to differential protein expression and post-translational modifications.

The authors go on to discuss how this approach has also been applied, with less fanfare, to differential interaction network analysis. In this paradigm, if an interaction between nodes (e.g., protein concentrations) in the network is present above noise in one condition, but not another, then they would call that a differential interaction.

"Static genetic interaction maps are measured in each of two conditions (left)... Condition 1 is subtracted from condition 2 to create a differential interaction map (right)... In the differential map, weak but dynamic interactions (dotted edges) are magnified and persistent ‘housekeeping’ interactions are removed (bottom right)." ; doi:10.1038/msb.2011.99
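To make the subtraction step concrete, here is a toy sketch (mine, not the authors’; the noise floor and interaction strengths are made up), treating an interaction as “present” in a condition when it exceeds a noise floor:

```python
def differential_map(cond1, cond2, noise_floor=0.3):
    """Keep an interaction in the differential map only when it is
    present above the noise floor in one condition but not the other;
    persistent 'housekeeping' interactions drop out."""
    edges = set(cond1) | set(cond2)
    diff = {}
    for e in edges:
        in1 = cond1.get(e, 0.0) > noise_floor
        in2 = cond2.get(e, 0.0) > noise_floor
        if in1 != in2:
            diff[e] = "gained" if in2 else "lost"
    return diff

# hypothetical interaction strengths between nodes in two conditions
cond1 = {("A", "B"): 0.9, ("B", "C"): 0.1, ("A", "C"): 0.8}
cond2 = {("A", "B"): 0.85, ("B", "C"): 0.7}
result = differential_map(cond1, cond2)
print(sorted(result.items()))  # [(('A', 'C'), 'lost'), (('B', 'C'), 'gained')]
```

Note that the strong, persistent A–B interaction drops out entirely, just as the housekeeping interactions do in the figure.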

Very similar ideas can be applied to the study of neuronal network function. If we can say that an interaction between neuronal “nodes” (which could be, depending upon the scale, neurons, cortical columns, or brain regions) is differentially present between healthy and disordered states, then it suggests that that interaction is somehow involved with the disorder.

This is not a perfect paradigm, in part because the network “connections” can be less representative of the physical reality than we’d like, but I anticipate that we have much to mine from it about the operations of the nervous system.


Ideker T, Krogan NJ. 2012 Differential network biology. doi:10.1038/msb.2011.99

Read Full Post »

…[C]onsider the example … regarding the significant resources and time being put into deciphering the structural connectome of the brain. This massive amount of accumulating data is qualitative, and although everyone agrees it is important and necessary to have it in order to ultimately understand the dynamics of the brain that emerges from the structural substrate represented by the connectome, it is not at all clear at present how to achieve this. Although there have been some initial attempts at using this data in quantitative analyses they are essentially mostly descriptive and offer little insights into how the brain actually works. A reductionist’s approach to studying the brain, no matter how much we learn and how much we know about the parts that make it up at any scale, will by itself never provide an understanding of the dynamics of brain function, which necessarily requires a quantitative, i.e., mathematical and physical, context.

That’s Gabriel Silva, more here, interesting throughout.


Silva GA (2011) The need for the emergence of mathematical neuroscience: beyond computation and simulation. Front. Comput. Neurosci. 5:51. doi: 10.3389/fncom.2011.00051

Read Full Post »

Certain visual inputs can be consistently interpreted in more than one way. One classic example of this is the young-woman/old-woman puzzle:

"Boring figure", via Wikipedia user Bryan Derksen

An important finding related to these types of illusions is that we don’t perceive both possibilities at once, but rather switch spontaneously between them.

Buesing et al.’s recent study formalized a network model of spiking neurons, equivalent to sampling from a probability distribution, and used it on a quantifiable model of such visual ambiguity, binocular rivalry.

This allowed them to show how spontaneous switches between perceptual states can be caused by a sampling process which produces successively correlated samples.

In particular, they constructed a computational model with 217 neurons, and assigned each neuron a tuning curve with a preferred orientation such that the full set of orientations covered the entire 180° interval.

They then ran a simulation of these neurons according to their rules for spiking and refraction, computed the joint probability distribution, projected it into 2-D, and drew the endpoints of the projections as dots, shown below. They took samples every millisecond for 20 seconds of biological time.

the "prior distribution"; each colored dot is a sampled network state; the relative orientation of each dot corresponds to the primary orientation of the perception at that time point; a dot's distance from the origin encodes the perception's "strength"; doi:10.1371/journal.pcbi.1002211.g004 part d

Note that there is a fairly homogeneous distribution across the whole orientation spectrum, indicating a lack of preference for one direction. You might think of the above as resting-state activity, as there was nothing to mimic external input to the system.

In order to add this input, the authors did another simulation in which they specified the states of a few of the neurons, “clamping” them to one value. In particular, they clamped two neurons with orientation preference ~45° to 1 (“firing”), two neurons with preference ~135° to 1, and four cells with preference ~90° to 0 (“not firing”).

Since the neurons set to firing are at opposite sides of the semicircle, this set-up mimics an ambiguous visual state. They then ran a simulation with the remaining 209 neurons as above, with the results shown below.

the "posterior distribution"; the black line shows the evolution of the network states z for 500 ms during a switch in perceptual state; doi:10.1371/journal.pcbi.1002211.g004 part e

As you can see, in this case the network samples preferentially from states that correspond to the clamped positions at either ~45° or ~135°. The black trace indicates that the network tends to remain in one high-probability state for a while and then shift rapidly to the other.

As compared to the above “prior” distribution, this “posterior” distribution has greatly reduced variance.
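Buesing et al.’s actual network is considerably more sophisticated, but a loose sketch of the basic recipe (Gibbs sampling over binary neuron states, with a few units clamped to mimic input, and states projected into 2-D via doubled orientation angles) might look like this; every parameter here is made up for illustration:

```python
import math, random

def sample_states(n=16, n_sweeps=200, J=0.5, bias=-1.0, clamp=None, seed=0):
    """Gibbs sampler over binary neuron states z. Neuron i has a
    preferred orientation theta_i (covering 180 degrees); similarly
    tuned neurons excite each other. Entries in 'clamp' are held
    fixed, mimicking external input."""
    rng = random.Random(seed)
    theta = [math.pi * i / n for i in range(n)]
    # recurrent weights favor co-activation of similarly tuned neurons
    W = [[J * math.cos(2 * (theta[i] - theta[j])) if i != j else 0.0
          for j in range(n)] for i in range(n)]
    clamp = clamp or {}
    z = [clamp.get(i, 0) for i in range(n)]
    samples = []
    for _ in range(n_sweeps):
        for i in range(n):
            if i in clamp:
                continue  # clamped neurons never update
            u = bias + sum(W[i][j] * z[j] for j in range(n))
            z[i] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-u)) else 0
        samples.append(z.copy())
    return theta, samples

def project(theta, z):
    """2-D projection of a state: vector sum, over active neurons, of
    unit vectors at twice the preferred orientation (so 0 and 180
    degrees map to the same direction)."""
    x = sum(math.cos(2 * t) for t, zi in zip(theta, z) if zi)
    y = sum(math.sin(2 * t) for t, zi in zip(theta, z) if zi)
    return x, y

# clamp a ~45-degree neuron on, mimicking an oriented stimulus
theta, samples = sample_states(clamp={4: 1})
x, y = project(theta, samples[-1])
```

Because successive samples differ by only a few flipped units, the projected trajectory wanders within one mode for a while before crossing to another, which is the correlated-sampling account of perceptual switching.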

Although the ability of their network to explain perceptual bistability is fascinating, it is perhaps most interesting due to its broader implications for how cortical regions might be able to switch between cognitive states via sampling.


Buesing L, Bill J, Nessler B, Maass W (2011) Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons. PLoS Comput Biol 7(11): e1002211. doi:10.1371/journal.pcbi.1002211

Read Full Post »

Kim et al were interested in simulating the compartmentalization of signaling molecules involved in PKA-dependent LTP in the hippocampus. They wanted to know: does PKA need to be anchored near its target molecules, or near a source of activator molecules? They varied the location of PKA and one of its activator molecules (adenylyl cyclase) to try to determine this. It turns out that placing PKA near its activator molecules (i.e., source of cAMP) leads to more downstream activity than placing it near its target molecules.

Also of interest is a very cool model of a dendrite with one spine that they used for their stochastic simulations, which is below and could help you visualize these structures:

dotted lines are subvolumes used in the simulation; Ca influx is via voltage dependent calcium channels in the dendrite, and via NMDA receptors in the spine's post-synaptic density (PSD); ignore "C"; doi:10.1371/journal.pcbi.1002084

Since molecular simulation is computationally expensive, they only allow diffusion in 2-D in the dendrite (notice the lack of vertical dotted lines in the cross section) and 1-D in the spine. Further improvements to molecular simulation or computational power should one day make this sort of simplification unnecessary.
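The subvolume scheme amounts to molecules hopping stochastically between adjacent compartments. Here is a minimal 1-D sketch of that idea (my own toy, not their simulator; the hop probability and counts are made up):

```python
import random

def diffuse_1d(counts, n_steps=1000, p_hop=0.1, seed=0):
    """Minimal stochastic 1-D diffusion between subvolumes: at each
    step, each molecule hops to an adjacent subvolume with probability
    p_hop, with reflecting boundaries (as at a sealed dendrite end)."""
    rng = random.Random(seed)
    counts = counts.copy()
    n = len(counts)
    for _ in range(n_steps):
        moves = [0] * n
        for i, c in enumerate(counts):
            for _ in range(c):
                if rng.random() < p_hop:
                    j = i + rng.choice([-1, 1])
                    if 0 <= j < n:        # reflecting boundary
                        moves[i] -= 1
                        moves[j] += 1
        counts = [c + m for c, m in zip(counts, moves)]
    return counts

# 100 molecules (think cAMP) start in one subvolume and spread out
final = diffuse_1d([100, 0, 0, 0, 0])
print(final, "total:", sum(final))  # the total is conserved
```

Whether PKA sits near the source subvolume or a distant one then determines how much activator it actually sees, which is the intuition behind the colocalization result above.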


Kim M, Park AJ, Havekes R, Chay A, Guercio LA, et al. (2011) Colocalization of Protein Kinase A with Adenylyl Cyclase Enhances Protein Kinase A Activity during Induction of Long-Lasting Long-Term-Potentiation. PLoS Comput Biol 7(6): e1002084. doi:10.1371/journal.pcbi.1002084

Read Full Post »

In their lucid and educational ’09 paper, Nordlie et al attempt to create standards for the description of neural network models in the academic lit. This is a great idea–gains from standardization are huge–and also a great paper to learn about what a neural network model actually entails. Since this is in PLoS comp bio and, bless its editors, it is OA/CC, I will quote liberally. First, they have the following working definition of a model:

A neuronal network model is an explicit and specific hypothesis about the structure and microscopic dynamics of (a part of) the nervous system.


  • The model must be explicit, i.e., all aspects of the model must be specified.
  • The model must be specific, i.e., all aspects must be defined in enough detail that they can be implemented unequivocally.
  • The model specifies the structure (placement and type of network elements; source, target and type of connections) and dynamics of components (ion channels, membrane potential, spike generation and propagation).
  • The model does not describe the dynamics of the network as a whole, which is an emergent property of the model.

Here is their full description of what a model must entail:

A complete model description must cover at least the following three components: (i) The network architecture, i.e., the composition of the network from areas, layers, or neuronal sub-populations. (ii) The network connectivity, describing how neurons are connected among each other in the network. In most cases, connectivity will be given as a set of rules for generating the connections. (iii) The neuron and synapse models used in the network model, usually given by differential equations for the membrane potential and synaptic currents or conductances, rules for spike generation and post-spike reset. Model descriptions should also contain information about (iv) the input (stimuli) applied to the model and (v) the data recorded from the model, just as papers in experimental neuroscience do, since a reproduction of the simulations would otherwise become impossible.

The above is essential to a neural network model, while below are some of the useful steps for describing your model:

1) Include output data for each individual neuron type in response to test stimuli, as opposed to responses only from the whole network. This will avoid the scenario under which:

[R]esearchers who attempt to re-implement a model and find themselves unable to reproduce the results from a paper, will not be able to find out whether problems arise from neuron model implementations or from a wrong network setup.

2) Keep the description of your model and the explanation for why you chose your model separate, for the sake of clarity.

3) Describe the topology of the network in your model unambiguously. It may be best to describe this topology on the basis of how the regions connect to one another. Or, if your network is of the human brain and is at a high enough level, you could use a publicly available, standard space, such as the one that the Human Connectome Project should soon release.

4) In defining the connections between your neurons (i.e., how they are probabilistically generated), pay special attention to these three details:

  • May neurons connect to themselves?
  • May there be multiple connections between any pair of neurons?
  • Are connection targets chosen at random for a fixed sender neuron (divergent connection), senders chosen at random for fixed target (convergent connection), or are sender and receiver chosen at random for each connection?
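For illustration, here is a toy generator (mine, not from the paper) implementing the third option, with the first two questions exposed as explicit flags rather than left implicit:

```python
import random

def random_connections(n_neurons, n_conns, allow_self=False,
                       allow_multi=False, seed=0):
    """Sender and receiver drawn at random for each connection, with
    self-connections and multiple connections per pair controlled by
    explicit flags, as an unambiguous description requires."""
    rng = random.Random(seed)
    conns = [] if allow_multi else set()
    add = conns.append if allow_multi else conns.add
    while len(conns) < n_conns:
        pre, post = rng.randrange(n_neurons), rng.randrange(n_neurons)
        if not allow_self and pre == post:
            continue  # no self-connections unless explicitly allowed
        add((pre, post))
    return list(conns)

conns = random_connections(50, 200)
```

Divergent or convergent schemes would instead fix the sender (or the target) and draw only the other end at random; the point is that a reader should be able to tell which of these you did.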

One benefit of connectomics research is that it would allow neural networks to be run on real, validated data sets instead of on probabilistic connections, simplifying these descriptions.

5) Figures should be informative but not overwhelming. Nordlie et al draw a model of the thalamocortical pathway using diagram styles from three of the papers they surveyed, here:


The middle diagram is the most informative, as it has parameters (weights and probabilities) shown next to its connection lines, and line widths proportional to the product of weight and probability. Really, what would be ideal here is some sort of standardization, like in physics diagrams. (A little physics envy isn’t always a bad thing!) In particular, these are their suggestions:

  • Unordered populations are shown as circles;
  • Populations with spatial structure are shown as rectangles;
  • Pointed arrowheads represent excitatory, round ones inhibitory connections;
  • Arrows beginning/ending outside a population indicate that the arrows represent a set of connections with source/target neurons selected from the population;
  • Probabilistic connection patterns are shown as cut-off masks filled with connection probability as grayscale gradient; the pertaining arrows end on the outside of the mask.

6) Describe the equations for membrane potential, spike generation, spike detection, reset and refractory behavior using math as well as prose.
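As a toy illustration of point 6, here is a leaky integrate-and-fire neuron pinned down in code as well as the usual prose and math; the parameters are generic textbook values, not taken from any surveyed paper:

```python
def lif(I, dt=0.1, tau=10.0, v_rest=-65.0, v_thresh=-50.0,
        v_reset=-65.0, t_ref=2.0):
    """Leaky integrate-and-fire: tau * dV/dt = -(V - v_rest) + I(t).
    Forward-Euler integration; when V crosses v_thresh the neuron
    spikes, V is reset, and updates pause for t_ref ms (refractory
    period). I is a list of input values sampled every dt ms."""
    v, spikes, ref_until = v_rest, [], -1.0
    for step, i_t in enumerate(I):
        t = step * dt
        if t < ref_until:
            continue  # absolute refractory period
        v += dt / tau * (-(v - v_rest) + i_t)
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
            ref_until = t + t_ref
    return spikes

spikes = lif([20.0] * 1000)  # 100 ms of constant suprathreshold drive
print(len(spikes), "spikes")
```

An unambiguous reference implementation like this removes any doubt about integration scheme, reset behavior, or refractory handling that prose alone can leave.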

Anecdotally, it seems to me that systems biologists tend to use R while neuroscientists are more into MATLAB. This jibes with the engineering feel of the neuro community, and I certainly don’t mean to start a programming language flame war, but I do wonder if moving toward the open-source languages R or Python might be useful.

I truly learned a lot from this paper, and in case the authors ever read this post, I’d like to thank them for putting effort into writing it so carefully and clearly, and apologize for any mistakes I may have made in summarizing it.


Nordlie E, Gewaltig M-O, Plesser HE (2009) Towards Reproducible Descriptions of Neuronal Network Models. PLoS Comput Biol 5(8): e1000456. doi:10.1371/journal.pcbi.1000456

Read Full Post »

Cahalane et al studied the hamster during its first 10 days of development, using a fluorescent dye to trace the growth of its axons.

One thing they noticed was a bias towards movement along the medial/lateral as opposed to the anterior/posterior axis of the flattened cortical hemisphere. They quantify this bias as anisotropy, using two delta functions defined on a circle. (Anisotropy is big in diffusion tensor imaging, too.) When they compare the anisotropy in the development of grey matter vs white matter, they find that white matter is more anisotropic:

grey matter / cortex = gray dots; white matter = red triangles; doi:10.1371/journal.pone.0016113
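The paper’s measure is built from two delta functions on a circle; a simpler, related axial statistic (my own illustrative stand-in, not their exact measure) is the length of the mean resultant vector of the doubled growth angles:

```python
import math

def anisotropy_index(angles_deg):
    """Axial anisotropy of a set of growth directions: the length of
    the mean resultant vector of the doubled angles (doubling makes 0
    and 180 degrees equivalent). 0 = isotropic, 1 = all on one axis."""
    n = len(angles_deg)
    x = sum(math.cos(2 * math.radians(a)) for a in angles_deg) / n
    y = sum(math.sin(2 * math.radians(a)) for a in angles_deg) / n
    return math.hypot(x, y)

print(anisotropy_index([0, 45, 90, 135]))    # evenly spread axes: ~0
print(anisotropy_index([90, 90, 270, 270]))  # a single axis: ~1
```

Angle doubling is the standard trick for axial data, since an axon growing “medially” and one growing “laterally” lie on the same axis.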

By analyzing simulated networks they show the effects of anisotropy on growing axon connections to other nodes:

Each point is avg of 10 networks, each with 2500 nodes, 10 axons, and 1 mm avg axon length; doi:10.1371/journal.pone.0016113

They also consider the modularity of their networks. Formally, modules are non-overlapping communities delineated by their location. If the modules are chosen well, there are more within-community and fewer between-community edges than expected by chance. The authors find good evidence for modularity in their axon traces, mainly because there are so many short connections, which become more numerous when axons are more anisotropic.
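The standard way to score a candidate division into modules is Newman’s modularity Q; a small self-contained version (my sketch, not the authors’ pipeline) makes the “more within- than between-community edges than chance” idea exact:

```python
def modularity(edges, communities):
    """Newman modularity Q = sum over communities of
    [L_c / m - (d_c / 2m)^2]: the fraction of edges inside each
    community minus the fraction expected by chance, given degrees."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for comm in communities:
        comm = set(comm)
        l_c = sum(1 for u, v in edges if u in comm and v in comm)
        d_c = sum(degree[node] for node in comm)
        q += l_c / m - (d_c / (2 * m)) ** 2
    return q

# two triangles joined by a single bridge edge: clearly modular
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(modularity(edges, [{0, 1, 2}, {3, 4, 5}]))  # 5/14, about 0.357
```

Q near 0 means the division is no better than chance; well-separated communities like the two triangles here score substantially above it.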

This is a great way to quantify networks, and it would be nice to see this type of structural data correlated with function. For example, how do more modular networks act? One suggestion is that modular structures might lead to more specialization in sub-problems, allowing more rapid adaptation to a specified goal. More modular tasks may take less effort, whereas more global tasks like working memory would take more effort.

This makes sense, but what’s the trade-off or downside to modularity? If modularity is so good, why isn’t the brain more modular? Possibly because given finite resources, specialization is antagonistic to plasticity.


Cahalane DJ, Clancy B, Kingsbury MA, Graf E, Sporns O, et al. (2011) Network Structure Implied by Initial Axon Outgrowth in Rodent Cortex: Empirical Measurement and Models. PLoS ONE 6(1): e16113. doi:10.1371/journal.pone.0016113

Meunier D, et al. 2010 Modular and hierarchically modular organization of brain networks. Frontiers in Neuroscience.

Read Full Post »

Generally, components of a system can deviate from optimality at different rates. To visualize this, think of a two-component system, with x1 and x2. Imagine that x1 has a higher probability of being in a non-optimal state, or in other words, has a more slowly decreasing objective function:

on the left, the region of high probability is wider for x1 because the objective function decreases more slowly; on the right are contour plots, where lines connect points of equal value; doi: 10.1073/pnas.0905336106

Perez-Escudero et al (’09) were interested in the deviations from the minimum wiring configuration in the current connectome of C. elegans. Their assumption for optimality is that neurons should be in positions that minimize the cost of the “edge” between them. This is their objective function.

First they calculate the deviation of each neuron’s position from its position in the theoretical minimum-wiring configuration. Then they show that neurons with fewer wires or “connections” to other neurons tend to have larger deviations. This makes sense because the cost of their deviation from optimality is lower.

A = neuron positions on the line indicate no deviation from optimality, B = blue line is an inverse quadratic fit, indicating that deviations from optimality have a parabolic cost w/r/t number of connections, C = random redistribution of the deviations of neuron positions from optimum, note only 0.033% of permutations have a lower cost; doi: 10.1073/pnas.0905336106


They say that ~15% of C. elegans neurons have significant deviations from optimality. Additional analysis reveals that some of the neurons deviate from optimality due to local minima in the cost of wiring, which is a common tendency in evolved systems. This analysis is very interesting, and one reason it is possible at all is that the connectome of C. elegans has been (partially) solved.
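It’s easy to check numerically why fewer connections make deviating cheaper. Assuming a quadratic wiring cost (my simplification; the paper’s cost is based on wiring length, but the quadratic case makes the parabola exact), displacing a neuron by delta from its optimum costs exactly delta squared times its number of connections:

```python
def wiring_cost(pos, edges):
    """Total wiring cost, taken here to be the sum of squared edge
    lengths (a simplification of the paper's wiring-length cost)."""
    return sum((pos[i] - pos[j]) ** 2 for i, j in edges)

def extra_cost(pos, edges, neuron, delta):
    """Cost increase when one neuron is displaced by delta from a
    configuration in which it sits at its optimum (the mean of its
    neighbors, for a quadratic cost)."""
    moved = dict(pos)
    moved[neuron] += delta
    return wiring_cost(moved, edges) - wiring_cost(pos, edges)

# star graphs: neuron 0 sits at the mean of its k neighbors (its optimum)
for k in (2, 4, 8):
    edges = [(0, j) for j in range(1, k + 1)]
    pos = {0: 0.0}
    pos.update({j: (-1.0) ** j for j in range(1, k + 1)})
    print(k, extra_cost(pos, edges, 0, 0.5))  # increase = k * 0.5**2
```

The linear term cancels at the optimum, so only the degree-weighted quadratic term survives: highly connected neurons pay proportionally more for the same positional drift, which is why they hug their optimal positions.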


Perez-Escudero A, et al. 2009 Structure of deviations from optimality in biological systems, PNAS, doi: 10.1073/pnas.0905336106.

Read Full Post »

Older Posts »