Developmental axon anisotropy of the hamster

Cahalane et al studied the hamster during its first 10 days of development, using a fluorescent dye to trace the growth of its axons.

One thing they noticed was a bias toward growth along the medial/lateral rather than the anterior/posterior axis of the flattened cortical hemisphere. They quantify this bias as anisotropy, using two delta functions defined on a circle. (Anisotropy is big in diffusion tensor imaging, too.) When they compare the anisotropy in the development of grey matter vs. white matter, they find that white matter is more anisotropic:

grey matter / cortex = gray dots; white matter = red triangles; doi:10.1371/journal.pone.0016113
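
To make the anisotropy measure concrete, here's a minimal Python sketch using my own toy index, the axial order parameter of the growth angles (not the authors' delta-function fit): it returns 0 for isotropic outgrowth and 1 when every axon grows along a single axis.

```python
import math

def orientation_anisotropy(angles):
    """Axial order parameter |<exp(2i*theta)>| over a set of growth directions.

    Doubling the angle makes theta and theta + pi equivalent, so axons growing
    in opposite directions along the same axis count as the same orientation.
    """
    c = sum(math.cos(2 * a) for a in angles) / len(angles)
    s = sum(math.sin(2 * a) for a in angles) / len(angles)
    return math.hypot(c, s)

# All axons along the medial/lateral axis (theta = 0 or pi): fully anisotropic
aligned = [0.0, math.pi] * 50
# Directions spread evenly around the circle: isotropic
uniform = [2 * math.pi * k / 100 for k in range(100)]

print(orientation_anisotropy(aligned))  # ~1.0
print(orientation_anisotropy(uniform))  # ~0.0
```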

By analyzing simulated networks they show the effects of anisotropy on growing axon connections to other nodes:

Each point is avg of 10 networks, each with 2500 nodes, 10 axons, and 1 mm avg axon length; doi:10.1371/journal.pone.0016113

They also consider the modularity of their networks. Formally, modules are non-overlapping communities of nodes delineated by their location. If the modules are chosen well, there should be more within-community edges and fewer between-community edges than expected by chance. The authors find good evidence for modularity in their axon traces, mainly because there are so many short connections, which become more numerous as axons become more anisotropic.
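
The within- vs. between-community comparison can be made concrete with Newman's modularity score Q, which is 0 when the partition does no better than chance. Here's a minimal pure-Python sketch on a toy graph (my own example, not the authors' networks):

```python
def modularity(edges, community):
    """Newman's Q: fraction of within-community edges minus the chance expectation."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = 0.0
    for c in set(community.values()):
        nodes = {n for n, cc in community.items() if cc == c}
        e_c = sum(1 for u, v in edges if u in nodes and v in nodes)  # internal edges
        d_c = sum(deg[n] for n in nodes)                             # total degree
        q += e_c / m - (d_c / (2 * m)) ** 2
    return q

# Two triangles joined by a single bridge edge: clearly modular
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
community = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(modularity(edges, community))  # ~0.357
```

Lumping all six nodes into one community drives Q back to zero, which is the "no better than chance" baseline.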

This is a great way to quantify networks, and it would be nice to see this type of structural data correlated with function. For example, how do more modular networks behave? One suggestion is that modular structures might lead to more specialization on sub-problems, enabling more rapid adaptation to a specific goal. More modular tasks may take less effort, whereas more global tasks, like working memory, would take more effort.

This makes sense, but what’s the trade-off or downside to modularity? If modularity is so good, why isn’t the brain more modular? Possibly because given finite resources, specialization is antagonistic to plasticity.

Reference

Cahalane DJ, Clancy B, Kingsbury MA, Graf E, Sporns O, et al. (2011) Network Structure Implied by Initial Axon Outgrowth in Rodent Cortex: Empirical Measurement and Models. PLoS ONE 6(1): e16113. doi:10.1371/journal.pone.0016113

Meunier D, et al. 2010 Modular and hierarchically modular organization of brain networks. Frontiers in Neuroscience.

Non-optimality in C. elegans connectome

Generally, components of a system can deviate from optimality at different rates. To visualize this, think of a two-component system with components x1 and x2, and imagine that x1 has a higher probability of being in a non-optimal state, or in other words, that its objective function decreases more slowly around the optimum:

on the left, the region of high probability is wider for x1 because its objective function decreases more slowly; on the right are contour plots, in which the lines connect points of equal value; doi: 10.1073/pnas.0905336106

Perez-Escudero et al (’09) were interested in the deviations from the minimum wiring configuration in the current connectome of C. elegans. Their assumption for optimality is that neurons should be in positions that minimize the cost of the “edge” between them. This is their objective function.

First they calculate the deviation of each neuron’s position from its position in the theoretical minimum-wiring configuration. Then they show that neurons with fewer wires, or “connections,” to other neurons tend to show larger deviations. This makes sense because the cost of their deviating from optimality is lower.
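
This intuition is easy to sketch: with a quadratic wiring cost, the extra cost of displacing a neuron from its optimum scales with its number of connections, so sparsely connected neurons pay little for large deviations. A toy one-dimensional version (my own, not the paper's actual objective function):

```python
def wiring_cost(x, partners):
    """Quadratic wiring cost of placing a neuron at x, given its partners' positions."""
    return sum((x - p) ** 2 for p in partners)

def excess_cost(deviation, partners):
    """Extra cost of deviating from the optimum (the partners' mean position)."""
    opt = sum(partners) / len(partners)
    return wiring_cost(opt + deviation, partners) - wiring_cost(opt, partners)

# The penalty for the same deviation scales linearly with the number of
# connections, so highly connected neurons are held closer to their optima.
print(excess_cost(0.1, [0.0]))       # ~0.01
print(excess_cost(0.1, [0.0] * 10))  # ~0.1
```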

A = neuron positions on the line indicate no deviation from optimality; B = the blue line is an inverse-quadratic fit, indicating that deviations from optimality have a parabolic cost with respect to the number of connections; C = random redistributions of the deviations of neuron positions from optimum (note that only 0.033% of permutations have a lower cost); doi: 10.1073/pnas.0905336106

They say that ~15% of C. elegans neurons have significant deviations from optimality. Additional analysis reveals that some of the neurons deviate from optimality due to local minima in the cost of wiring, a common tendency in evolved systems. This analysis is very interesting, and one reason it is possible at all is that the connectome of C. elegans has been partially mapped.
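
The local-minima point can be illustrated with a toy cost landscape (invented for illustration): gradient descent, standing in for an incremental evolutionary search, settles into whichever basin it starts in, and from the wrong side it never reaches the global optimum.

```python
def cost(x):
    """Toy wiring-cost landscape: a double well with unequal minima."""
    return (x * x - 1) ** 2 + 0.3 * x

def grad(x):
    return 4 * x * (x * x - 1) + 0.3

def descend(x, lr=0.01, steps=5000):
    """Plain gradient descent: only ever moves downhill from its start point."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x_local = descend(2.0)    # trapped in the shallower well near x = +1
x_global = descend(-2.0)  # reaches the deeper well near x = -1
print(cost(x_local) > cost(x_global))  # True
```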

Reference

Perez-Escudero A, et al. 2009 Structure of deviations from optimality in biological systems, PNAS, doi: 10.1073/pnas.0905336106.

Uses of noise in neuron-neuron communications

This is a hot topic, and in the past week two papers have brought new perspectives:

1) Cafaro and Rieke (here) discuss how correlated noise between inhibitory and excitatory inputs is necessary for neurons to effectively integrate those signals. They show this by recording the activity of retinal neurons in response to light.

As a control, they find cross-correlations in signal activity (using MATLAB’s xcov) when the two neurons are recorded simultaneously, but not non-simultaneously. Then they estimate variability in synaptic responses by subtracting the average synaptic input from each individual trial. The peak correlation of these residuals ranges from 0.15 to 0.5 when the signals are recorded simultaneously. Noise correlations in non-simultaneous recordings were much smaller, but nonzero, which they attribute to a slow drift in the stimulus response.
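
Here's a toy version of that residual analysis (invented signal and noise levels, not the retinal data): two simulated cells share a common noise source on simultaneous trials, and correlating the mean-subtracted residuals recovers the shared noise, while mismatched, "non-simultaneous" trials do not.

```python
import math
import random

random.seed(1)
n_trials, T = 50, 100
signal = [math.sin(2 * math.pi * t / T) for t in range(T)]

def record(shared):
    """One trial: stimulus-driven signal + shared noise + private noise."""
    return [s + sh + random.gauss(0, 0.3) for s, sh in zip(signal, shared)]

cell_a, cell_b = [], []
for _ in range(n_trials):
    shared = [random.gauss(0, 0.6) for _ in range(T)]  # common noise source
    cell_a.append(record(shared))
    cell_b.append(record(shared))

def residuals(trials):
    """Subtract the across-trial average response from each trial, then flatten."""
    mean = [sum(col) / len(col) for col in zip(*trials)]
    return [x - m for tr in trials for x, m in zip(tr, mean)]

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

ra, rb = residuals(cell_a), residuals(cell_b)
# "Non-simultaneous" comparison: pair cell A's trials with cell B's shifted by one
rb_shifted = residuals(cell_b[1:] + cell_b[:1])
print(corr(ra, rb))          # ~0.8: shared noise shows up in the residuals
print(corr(ra, rb_shifted))  # ~0.0: no correlation across non-simultaneous trials
```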

Finally, when they eliminate this correlated noise, it decreases the accuracy of the neuron’s spiking responses to the light input.

2) Schwalger et al (here) study the impact of noise on neural interspike interval statistics. In particular, they distinguish between two types of noise: 1) fast fluctuation noise, which comes mainly from ion channel noise, due largely to the speed of ion conductance at the synapse, and 2) slow adaptation noise, which could come from calcium fluctuations in calcium-gated potassium currents, such as BK channels.

As you can see below, simulations show that these types of noise produce different interspike interval histograms. In particular, the model that includes slow adaptation noise (plotted linearly in B and log-log in D) cannot be fit by the inverse Gaussian distribution.

determ. adap. = dominated by noise type 1, fast fluctuations; channel model = dominated by noise type 2, slow stochastic adaptation; theory = fit by their "colored noise approximation", which takes into account the slow adaptation noise

As you can see, simulations dominated by slow adaptation noise yielded distributions with higher skewness and kurtosis. The authors suggest that this might allow interspike interval recordings to delineate the main contributor to noise in a given neuron or class of neurons.
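
A toy simulation makes the point: ISIs from a perfect integrate-and-fire unit with only fast input noise are inverse-Gaussian-like, while letting the drift wander slowly between spikes, a crude stand-in for slow adaptation noise (all parameters invented), fattens the tail and raises the skewness.

```python
import math
import random

random.seed(2)

def isi(mu, sigma=0.5, theta=1.0, dt=0.01):
    """First-passage time to threshold for a leak-free integrate-and-fire unit."""
    v, t = 0.0, 0.0
    while v < theta:
        v += mu * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
        t += dt
    return t

def skewness(xs):
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 3 for x in xs) / n / s2 ** 1.5

# Fast fluctuations only: fixed drift, ISIs are inverse-Gaussian-like.
fast = [isi(1.0) for _ in range(2000)]
# Slow adaptation noise: the drift wanders between ISIs, fattening the tail.
slow = [isi(random.choice([0.5, 1.5])) for _ in range(2000)]

print(skewness(fast), skewness(slow))  # slow adaptation raises the skew
```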

Noise is inevitable in biological systems, and this is especially the case in neurons. The first paper shows one way organisms are able to not only cope with noise but actually make use of it. The second paper suggests that some neurons, depending on the sources of their noise, might have qualitatively different rates of spiking.

References

Cafaro J, et al. 2010 Noise correlations improve response fidelity and stimulus encoding. Nature. doi:10.1038/nature09570

Schwalger T, Fisch K, Benda J, Lindner B (2010) How Noisy Adaptation of Neurons Shapes Interspike Interval Histograms and Correlations. PLoS Comput Biol 6(12): e1001026. doi:10.1371/journal.pcbi.1001026

Measuring causality in neural connections

Distinguishing correlation from causation in signal activity can be very useful in describing how brain circuits work. The reasons are fairly obvious. In particular, it allows you to say whether a given connection is “upstream” or “downstream” in a particular network, and thus understand how the signal propagates.

Singh and Lesica have developed a new model for describing the relationship between pairs of neurons based on dependencies in their spike activity. They call their model “incremental mutual information.”

The algorithm is meant to work on time series data, and it follows these fairly intuitive steps:

1) At a given time point, measure the entropy of neuron X after conditioning on a) the past / future values of X, and b) the past / future values of neuron Y, relative to some defined delay of interest.

2) Measure the entropy of neuron X, conditioning on the same values as in (1), but also conditioning on the value of Y at the delay of interest.

3) Subtract (2) from (1) to find the reduction in entropy that occurs from considering the value of Y at the delay of interest.

4) Normalize (3) as a fraction of its maximum possible value.
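
The delay-scanning idea behind these steps can be sketched with plain delayed mutual information, a simpler cousin of IMI that skips the conditioning in steps 1 and 2. With a toy binary spike train where Y drives X at a 4-sample delay (construction and numbers invented), the dependence peaks at the true delay:

```python
import math
import random

random.seed(3)
T, true_delay = 20000, 4

# Y is a random spike train; X echoes Y four steps later, with some unreliability.
y = [random.random() < 0.3 for _ in range(T)]
x = [False] * true_delay + [
    (y[t] if random.random() < 0.8 else random.random() < 0.3)
    for t in range(T - true_delay)
]

def mutual_info(a, b):
    """Plug-in mutual information (bits) between two binary sequences."""
    n = len(a)
    mi = 0.0
    for va in (False, True):
        for vb in (False, True):
            pab = sum(1 for s, t in zip(a, b) if s == va and t == vb) / n
            if pab == 0:
                continue
            pa = sum(1 for s in a if s == va) / n
            pb = sum(1 for t in b if t == vb) / n
            mi += pab * math.log2(pab / (pa * pb))
    return mi

# Scan candidate delays; the dependence peaks at the true conduction delay.
mis = [mutual_info(x[d:], y[:len(y) - d]) for d in range(10)]
print(mis.index(max(mis)))  # 4
```

The full IMI additionally conditions on the past and future of both trains, which is what sharpens the peak relative to correlation-based measures.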

Here is one such two-neuron model, in which Y drives X with a strong static connection and a delay of 4 discretized time units:

delta = delay between Y's action and X's response, n= time point of interest

They simulated 1,000,000+ data points with the above model, varying the delay time at which the statistical dependencies were calculated. As you can see below, the incremental mutual information model shows a much sharper peak at delay = 4 samples, where it should peak once you account for the added Gaussian noise:

x axis = various delay amounts; IMI = incremental mutual information; corr coeff. = the cross-correlation function calculation, the “standard” method against which they compare

Although their approach shows improvements over the standard cross correlation function, it might have been nice to see comparisons to the other possible approaches the authors mention in the introduction, namely Granger causality and transfer entropy. The other downside to these model-free approaches is that they can’t easily be applied to large populations. Nevertheless, improvements in this field will be very important for modeling circuits given physiological data, and we’ll continue to track progress here.

Reference

Singh A, Lesica NA (2010) Incremental Mutual Information: A New Method for Characterizing the Strength and Dynamics of Connections in Neuronal Circuits. PLoS Comput Biol 6(12): e1001035. doi:10.1371/journal.pcbi.1001035

Variability between synaptic boutons as modulated by structural differences

The neurotransmitter release dynamics of the CA3-CA1 hippocampal synapse represent a commonly studied system. Nadkarni et al recently published a paper on the dynamics of en passant axon boutons that is pretty computationally intensive. Of interest are the bouton parameters they included in their model, which were:

  • voltage-dependent calcium channels (VDCC), numbering between 1 and 208 units, with the max radius of the cluster = 66 nm
  • plasma membrane calcium ATPase pumps (keep the calcium level at 100 nM), surface density is 180 μm^-2
  • calbindin (the 28,000 dalton form), the mobile intracellular calcium buffer that modifies the calcium diffusion rate (~ 50 μm^2/s)
  • an active zone with (7) docked vesicles, each with its own synaptotagmin calcium sensor,
  • the distance between the active zone and the voltage-dependent calcium channels, marked as lc, and varying in length between 10 and 400 nm
  • dimensions of the en passant synapse = 0.5 μm high/wide and 4 μm long

In their model, when an action potential arrives at the synapse, the voltage-dependent calcium channels open with some probability. Here, the amount of calcium that enters the bouton depends on the time course of the action potential, the number of VDCCs present on the membrane, the calcium conductance of open channels, and the total time each of the channels remains open. Calcium in the bouton can bind to calbindin, calcium pumps, or calcium sensors. If enough binds to the calcium sensors, the synaptotagmin sensor transitions to an active state and the vesicle containing the neurotransmitter is released. Moreover, each of these components has association and dissociation rates that can be tweaked.
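
Here's a drastically simplified sketch of the sensor's cooperativity: a Hill function standing in for their stochastic model, with a made-up 1/lc falloff of calcium with distance. The half-activation constant, Hill coefficient, and calcium values are all invented for illustration.

```python
def release_probability(ca, k_half=20e-6, hill=4):
    """Hill-type cooperative activation of the synaptotagmin Ca2+ sensor.

    A toy stand-in for the paper's stochastic model; k_half (M) and the
    Hill coefficient are illustrative, not fitted values.
    """
    return ca ** hill / (ca ** hill + k_half ** hill)

def ca_at_sensor(lc_nm, ca_at_10nm=40e-6):
    """Peak Ca2+ seen by the sensor, falling off with distance lc (toy 1/lc decay)."""
    return ca_at_10nm * 10.0 / lc_nm

# Release probability drops steeply as the VDCC cluster moves away
# from the docked vesicles:
for lc in (10, 50, 200, 400):
    print(lc, round(release_probability(ca_at_sensor(lc)), 4))
```

The Hill nonlinearity is why small structural changes in lc translate into large changes in release probability.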

Here is the visual display of their model with the above parameters included:

arXiv #1004.2009v1; VDCCs = voltage-dependent calcium channels; PMCA = plasma membrane calcium ATPase pumps; lc = distance between the VDCC cluster and the active zone vesicles

Now, variability in the parameters (number, length, distance between) of these components ought to affect the release probability and kinetics of vesicles. They found this to be the case. For example, in their model, decreasing the distance between the active zone and the voltage-dependent calcium channels, marked as lc, decreases the sensitivity of neurotransmitter release probability to calcium concentration:

arXiv, 1004.2009v1, calcium sensitivity of neurotransmitter release response for a range of distances, lc, between the calcium sensor and the voltage-dependent calcium channels

So, in order to fully simulate the function of actual synapses, you’d probably need to get some of this structural data! However, some of the parameters could probably be assumed to be the same between synaptic boutons. For example, the quantity and kinetics of calbindin could probably be assumed to be such that the diffusion constant of calcium remains ~50 μm^2/s.

Reference

Nadkarni S, et al. 2010 Spatial and Temporal Correlates of Vesicular Release at Hippocampal Synapses. arXiv:1004.2009v1 [q-bio.NC].

Late phase synaptic plasticity tagged by increase in puncta sites

Our models of synaptic action are still very incomplete and will benefit from more empirical research. In this direction, Antonova et al (here) cultured relatively young (2–3 week old) hippocampal neurons from one-day-old rats. They placed the neurons in either glutamate in Mg2+-free bath solution or control solution. They then tagged synaptic puncta with fluorescent markers and visualized them with a laser confocal scanning system coupled to an inverted microscope. Synaptic puncta were identified as between 0.5 and 5 micrometers in diameter.

As expected, applying the glutamate led to evoked EPSCs in the cultured neurons. There was a corresponding increase in the average quantity of synaptic puncta:

doi:10.1371/journal.pone.0007690.g008

Note that inhibiting protein translation via anisomycin caused a downregulation in the quantity of synaptic puncta mainly at 60 and 180 mins, but also a slight decrease at 10 mins. Here is the model they use to explain these changes:

squares = more stable sites, green = filled sites, glu = glutamate neurotransmission, aniso = protein translation inhibitor; doi:10.1371/journal.pone.0007690.g004

The rapid (< 10 min) outgrowth is probably due to the activation of dormant appositions, as it is actin polymerization-dependent but not protein translation-dependent. On the other hand, the gradual (3 hr) increase might be due to an increase in actual appositions between pre- and postsynaptic neurons that is protein translation-dependent. In their conclusion they draw an analogy between this process and Hebbian learning at the individual puncta level:

In addition to the increase in sites, some existing presynaptic puncta and structures were stabilized and stopped disassembling and reassembling. These two processes are reminiscent of two stages of synaptogenesis during late stage development: exuberant growth of new synapses followed by activity-dependent stabilization of some and pruning of others. Stabilization of existing puncta and structures during potentiation suggests a Hebb-type activity-dependent learning rule, in which puncta and structures that are present during the induction of potentiation (and therefore might contribute to it) are made more permanent.

Reference

Antonova I, Lu F-M, Zablow L, Udo H, Hawkins RD (2009) Rapid and Long-Lasting Increase in Sites for Synapse Assembly during Late-Phase Potentiation in Rat Hippocampal Neurons. PLoS ONE 4(11): e7690. doi:10.1371/journal.pone.0007690

Cooperative synapses mediated by competition at dendritic branch

In the cortex, synapses occur in places where a postsynaptic dendritic branch is within a dendritic spine’s reach of a presynaptic axon terminal. The average spine length is around 2 μm: 1.8 in CA1/CA3 and 2.6 in V1 (see here). Regions in which this distance is small enough to conceivably make a connection are called “potential synapses.”

Synaptic connectivity between nearby neurons is somewhat rare. Thomson and Lamy call this connectivity the “hit rate” and estimated it in a bunch of different neuron types (see their awesome table here). They estimate hit rates ranging from ~ 0.01 to ~ 0.7 depending on neuron class, with a rough average of about 0.25.

However, most neuron pairs that have any synapses have several synapses, which we wouldn’t expect from such a low hit rate if synapse formation were independent. And indeed, this aberrantly high proportion of multiple synapses holds true even after adjusting for synaptic compatibility.

Fares and Stepanyants propose a model of cooperativity. Connections whose number of synapses exceeds an adjustable critical number are stabilized, whereas connections with fewer synapses are degraded. Here is evidence that their model (green) can fit experimental data (red bars) of synapses from rat barrel cortex:

doi: 10.1073/pnas.0813265106

The authors speculate that the cooperativity may be mediated by competition among axons for connections to a given dendritic branch. The total number of connections along a dendritic branch could be regulated homeostatically, such that if one connection forms, another must be eliminated. Along with their critical-number-of-synapses threshold, this would lead to positive feedback effects following the formation of one synapse.
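
The critical-number rule is easy to simulate (site count, formation probability, and threshold are my own illustrative values): form synapses independently at each potential site, then let connections below the threshold degrade; among the surviving connected pairs, multi-synapse connections dominate, as observed.

```python
import random

random.seed(4)
n_pairs, n_potential, p_form, s_crit = 100_000, 5, 0.25, 2

# Synapses form independently at each potential synaptic site of each pair.
independent = [sum(random.random() < p_form for _ in range(n_potential))
               for _ in range(n_pairs)]
# Cooperative rule: connections below the critical synapse number degrade away.
cooperative = [s if s >= s_crit else 0 for s in independent]

def mean_given_connected(counts):
    """Average synapse count among pairs that remain connected at all."""
    connected = [c for c in counts if c > 0]
    return sum(connected) / len(connected)

print(mean_given_connected(independent))  # ~1.64 synapses per connected pair
print(mean_given_connected(cooperative))  # ~2.33: multi-synapse connections dominate
```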

One note: Fares’s study only looked at synaptic connections of less than 50 μm, while most synapses are actually the result of longer-range connections.

References

Fares T, et al. 2009 Cooperative synapse formation in the neocortex. PNAS doi: 10.1073/pnas.0813265106

Thomson AM, et al. 2008 Functional Maps of Neocortical Local Circuitry. doi: 10.3389/neuro.01.1.1.002.2007

AMPA receptor diffusion trap following LTP induction

There are two ways that AMPA receptors can move in and out of the postsynaptic density found immediately opposite the presynaptic terminal bouton. One is by lateral diffusion within the membrane, and the other is by endo-/exocytosis from intracellular vesicles.

One of the phenomena that any model of receptor trafficking has to account for is the AMPA-receptor-mediated induction of LTP, which can take place in just 10 seconds. Some protein (it is not known which) is believed to confine the AMPA receptor’s diffusion to the post-synaptic density during LTP.

In order to account for LTP, Tolle and Le Novère model the AMPA receptors as remaining in the postsynaptic density due to their affinity for some scaffold binding molecule. They speculate that these scaffold molecules are already present in the postsynaptic density but must be activated by the Ca2+ influx through NMDA receptors upon glutamate release. This speculation is consistent with the finding that Ca2+ uncaging prevents AMPA receptors from diffusing (see here). Here are two examples of AMPA receptor diffusion pathways, where maroon is free diffusion and green is diffusion confined to an area such as the postsynaptic density:

And here is a schematic diagram of their model:

yellow circle is postsynaptic density, AMPA receptors are orange cylinders, blue cone is presynaptic glutamate release site

This activation-dependent “diffusion trap” is a useful way to think about receptor diffusion and a good way of showing how rapidly postsynaptic effects can be realistically induced.
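
Here's a toy version of the diffusion trap (all parameters invented): a 1-D random walker stands in for an AMPA receptor, and "activated" scaffold binding slows its effective diffusion inside the PSD, lengthening its dwell time there.

```python
import random

random.seed(5)

def escape_time(trap_strength, radius=10, max_steps=5000):
    """Steps for a 1-D random walker to leave the PSD region [-radius, radius].

    Inside the trap, a step is only taken with probability 1/trap_strength,
    mimicking transient binding to activated scaffold molecules.
    """
    x, t = 0, 0
    while abs(x) <= radius and t < max_steps:
        if random.random() < 1.0 / trap_strength:
            x += random.choice((-1, 1))
        t += 1
    return t

free = [escape_time(1) for _ in range(300)]     # no scaffold activation
trapped = [escape_time(5) for _ in range(300)]  # Ca2+-activated scaffold binding

# Binding roughly multiplies the dwell time in the PSD by the trap strength.
print(sum(free) / 300, sum(trapped) / 300)
```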

Reference

Tolle DP, Le Novère N. 2010 Brownian diffusion of AMPA receptors is sufficient to explain fast onset of LTP. BMC Systems Biology 4:25. doi:10.1186/1752-0509-4-25.

Rapid synaptogenesis following long term potentiation in adult hippocampus

Long term potentiation (LTP) is considered a likely cellular mechanism for long term memory and is (justifiably) widely researched, with 9,889 PubMed hits. Using an innovative technique, Bourne and Harris have demonstrated how LTP correlates with the structural plasticity of dendrites. Briefly, this was their procedure:

First, they stimulated the axons of rat hippocampus slices in CA3 with concentric bipolar stimulating electrodes. These electrodes produce action potentials in all axons beneath them but can induce LTP selectively, allowing the researchers to compare control dendrites to those receiving LTP and so isolate the effects of LTP. The electrodes emitted theta burst stimulation, which entails 8 “trains,” delivered 30 seconds apart, of 10 bursts over 2 seconds, each burst consisting of 4 pulses at 100 Hz.* This frequency apparently resembles natural firing patterns in the hippocampus more closely than the alternatives do, and allows for the back-propagation of action potentials, which is important for structural plasticity.

Second, they determined which slices had the proper physiological characteristics of LTP, which are: 1) a dose-dependent increase of output current as the intensity of the input stimulating electrode is increased, 2) a lack of LTP at the control site, and 3) an induction of LTP at the correct site, as indicated by an increase in the slope of the field excitatory postsynaptic potential. They split their tissues into three groups characterized by the amount of time in the experiment after the induction of LTP: 5 mins, 30 mins, and 120 mins. Hippocampal slices matching all three characteristics were “fixed” by immersion in mixed aldehydes within seconds of the end of these time frames.

Third, they vibra-sliced the tissues across their width at 70 micrometer thickness and determined which tissues were preserved to high quality, as indicated by the clear identification of the proper intracellular components (i.e., microtubules of dendrites, cristae of mitochondria). Tissues that met these anatomical criteria were sectioned at ~38 to 60 nm. The researchers visualized the CA1 region of these tissues with transmission electron microscopy (TEM).

The time-dependent differences in anatomy following LTP that they found were fascinating. Dendrites from tissues 5 or 30 mins after LTP generally had more of the transitional structures associated with synaptogenesis, including asymmetric shaft synapses, stubby spines, and nonsynaptic filopodia. But these changes were gone in the dendrites of the 120 min samples. Instead, the dendrites of the samples 120 mins after LTP had a significant increase in postsynaptic density area, i.e., in one subset, an increase in average area from ~17 ± 1.5 square micrometers in the non-LTP sample to ~22 ± 1.5 square micrometers in the LTP sample. This is evidence for the theory that the initial changes in synaptic strength following LTP are supported by transient mechanisms, such as synapse unsilencing and AMPA receptor trafficking, while the later changes are mediated by an increase in synaptic area.

Another time-dependent measure that the researchers were able to look at was the location and magnitude of polyribosome upregulation following LTP. Ribosomes are ~18–25 nm in diameter and are connected into polyribosomes by a grey fuzz that can be detected by TEM. At 5 mins after LTP, in the heads and necks of dendritic spines, polyribosomes were upregulated from ~0.09 ± 0.02 polyribosomes per micrometer in the control group to 0.20 ± 0.04 in the LTP group. But at 120 mins after LTP, in the heads and necks, polyribosomes were downregulated from ~0.34 ± 0.07 polyribosomes per micrometer in the control group to 0.10 ± 0.02 in the LTP group. The upregulation in the controls can be explained by the need to support the ongoing formation of small dendritic spines. And although the LTP group showed an overall downregulation by 120 mins, this makes sense in terms of localizing the polyribosomes to the dendritic spines that have enlarged postsynaptic densities. So, ribosomes can be transported around CA1 dendrites quickly.

Overall, the model of synaptic plasticity emerging from their data set and analysis is that dendrite segments can act as functional units. If an inhibitory or excitatory spine is lost, the remaining ones should increase in size to compensate by ~120 mins. Moreover, it’s possible that polyribosomes or other intracellular organelles in the dendrite could be responsible for the regulation of dendritic homeostasis independently of the cell’s nucleus. This study also shows the utility of the time-dependent serial-section TEM (ssTEM) technique, one that will surely begin to receive more use in the next decade as machine learning algorithms continue to make the data analysis easier.

* I might as well admit that I don’t really get this part of their experiment that well.

Reference

Bourne JN, Harris KM. 2010 Coordination of size and number of excitatory and inhibitory synapses results in a balanced structural plasticity along mature hippocampal CA1 dendrites during LTP. Hippocampus (Early View). doi:10.1002/hipo.20768.

In vitro neuroscience competition to test input-output models?

Bosco Ho describes the most important findings from the past decade in computational structural biology and imparts this fascinating endeavor of which I was quite unaware:

Every 2 years, a whole bunch of computational structural biology labs effectively shut down for business, and throw every man, woman and workstation together to attempt to crack a set of problems. This same set of problems is simultaneously being attempted in labs all around the world, as researchers race against a clock to predict the 3 dimensional atomic structure of protein sequences published at the CASP protein-folding competition website.

We often say that science is a competition, but it is astonishing to me how the computational structural biology community has embraced formally organized competitions such as CASP. Here, we have pure naked competition, complete with a scoring system, judges, and rankings that determine winners and losers. It has all the drama that you’d expect from a reality TV show: recriminations, anger and tears. And it has taken the field of protein folding much farther than anyone would have imagined 10 years ago…

The field of protein folding had been drifting along in some kind of crappy fitness valley and it wasn’t until CASP came along, that we could even define what protein folding was, in a concrete definitive way. In terms of protein folding, the targets of CASP could be used to define a good fitness function, which was enough to spur the field to scramble out of the valley and up a fitness peak.

Here’s a similar competition that one might design for in vitro anatomical neuroscience. Have some central organization, analogous to CASP, decide on a set of cultured neurons to analyze.* The org would then perform a few tests on the cultured neurons to determine a few variables, including chemical analysis to determine the types of neurotransmitters used, western blotting / optical density readings to determine the relevant protein densities at each synapse, some sort of microscopy from various angles to determine cell shape and type, etc. The org would then deliver various electrical inputs to each of the neurons in controlled time sequences and measure the outputs, but, crucially, would not make this output data public. Competitors would be given the anatomical data and the magnitude of the electrical inputs and attempt to predict the output and activity of each neuron on this basis. This could help test and refine neuron models. What say ye? Doable?

* Probably start small in terms of numbers and in terms of neural complexity. They wouldn’t want too many endogenous oscillator neurons in the first few iterations!