
Archive for the ‘Theoretical Neuroscience’ Category

In their lucid and educational ’09 paper, Nordlie et al attempt to create standards for the description of neural network models in the academic lit. This is a great idea–gains from standardization are huge–and also a great paper to learn about what a neural network model actually entails. Since this is in PLoS comp bio and, bless its editors, it is OA/CC, I will quote liberally. First, they have the following working definition of a model:

A neuronal network model is an explicit and specific hypothesis about the structure and microscopic dynamics of (a part of) the nervous system.

Where:

  • The model must be explicit, i.e., all aspects of the model must be specified.
  • The model must be specific, i.e., all aspects must be defined in enough detail that they can be implemented unequivocally.
  • The model specifies the structure (placement and type of network elements; source, target and type of connections) and dynamics of components (ion channels, membrane potential, spike generation and propagation).
  • The model does not describe the dynamics of the model as a whole, which is an emergent property of the model.

Here is their full description of what a model must entail:

A complete model description must cover at least the following three components: (i) The network architecture, i.e., the composition of the network from areas, layers, or neuronal sub-populations. (ii) The network connectivity, describing how neurons are connected among each other in the network. In most cases, connectivity will be given as a set of rules for generating the connections. (iii) The neuron and synapse models used in the network model, usually given by differential equations for the membrane potential and synaptic currents or conductances, rules for spike generation and post-spike reset. Model descriptions should also contain information about (iv) the input (stimuli) applied to the model and (v) the data recorded from the model, just as papers in experimental neuroscience do, since a reproduction of the simulations would otherwise become impossible.

The above covers what is essential to a neural network model; below are some of their useful suggestions for describing your model:

1) Include output data for each individual neuron type in response to test stimuli, as opposed to responses from the whole network only. This avoids the scenario in which:

[R]esearchers who attempt to re-implement a model and find themselves unable to reproduce the results from a paper, will not be able to find out whether problems arise from neuron model implementations or from a wrong network setup.

2) Keep the description of your model and the explanation for why you chose your model separate, for the sake of clarity.

3) Describe the topology of the network in your model unambiguously. It may be best to describe this topology on the basis of how the regions connect to one another. Or, if your network is of the human brain and is at a high enough level, you could use a publicly available standard space, such as the one that the Human Connectome Project should soon release.

4) In defining the connections between your neurons (i.e., how they are probabilistically generated), pay special attention to these three details (a short sketch follows this list):

  • May neurons connect to themselves?
  • May there be multiple connections between any pair of neurons?
  • Are connection targets chosen at random for a fixed sender neuron (divergent connection), senders chosen at random for fixed target (convergent connection), or are sender and receiver chosen at random for each connection?
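
To make those choices concrete, here is a minimal Python sketch (my own toy, not code from the paper) of a divergent rule with flags for the first two questions; a convergent rule would fix the target and draw senders instead:

    import numpy as np

    rng = np.random.default_rng(42)
    n_neurons, out_degree = 100, 10
    allow_autapses = False      # may neurons connect to themselves?
    allow_multapses = False     # may a pair of neurons be connected more than once?

    connections = []
    for source in range(n_neurons):
        candidates = np.arange(n_neurons)
        if not allow_autapses:
            candidates = candidates[candidates != source]
        # Divergent rule: targets drawn at random for a fixed sender.
        targets = rng.choice(candidates, size=out_degree, replace=allow_multapses)
        connections.extend((source, int(t)) for t in targets)

    print(len(connections), "connections")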

One benefit of connectomics research is that it would allow neural networks to be run on real, validated data sets instead of on probabilistic connections, simplifying these descriptions.

5) Figures should be informative but not overwhelming. Nordlie et al draw a model of the thalamocortical pathway using diagram styles from three of the papers they surveyed, here:

[Figure: the thalamocortical model drawn in three of the surveyed diagram styles; doi:10.1371/journal.pcbi.1000456]

The middle diagram is the most informative, as it has parameters (weights and probabilities) shown next to its connection lines, and line widths proportional to the product of weight and probability. Really, what would be ideal here is some sort of standardization, as in physics diagrams. (A little physics envy isn’t always a bad thing!) In particular, these are their suggestions:

  • Unordered populations are shown as circles;
  • Populations with spatial structure are shown as rectangles;
  • Pointed arrowheads represent excitatory, round ones inhibitory connections;
  • Arrows beginning/ending outside a population indicate that the arrows represent a set of connections with source/target neurons selected from the population;
  • Probabilistic connection patterns are shown as cut-off masks filled with connection probability as grayscale gradient; the pertaining arrows end on the outside of the mask.

6) Describe the equations for membrane potential, spike generation, spike detection, reset and refractory behavior using math as well as prose.
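
As an illustration of the level of detail they ask for, here is a minimal sketch of a leaky integrate-and-fire neuron in Python, with placeholder parameter values that are mine, not from any surveyed model: the subthreshold equation C dV/dt = -g_L (V - E_L) + I(t), a threshold/reset rule, and a refractory period.

    import numpy as np

    # Illustrative leaky integrate-and-fire parameters (placeholders):
    C, g_L, E_L = 250.0, 25.0, -70.0          # pF, nS, mV
    V_th, V_reset, t_ref = -55.0, -70.0, 2.0  # mV, mV, ms
    dt, T, I_ext = 0.1, 200.0, 400.0          # ms, ms, pA

    V, refractory, spikes = E_L, 0.0, []
    for step in range(int(T / dt)):
        t = step * dt
        if refractory > 0:                    # hold the neuron during the refractory period
            refractory -= dt
            continue
        V += dt * (-g_L * (V - E_L) + I_ext) / C   # Euler step of C dV/dt = -g_L (V - E_L) + I
        if V >= V_th:                         # threshold crossing: record spike and reset
            spikes.append(t)
            V, refractory = V_reset, t_ref
    print(f"{len(spikes)} spikes, first at {spikes[0]:.1f} ms" if spikes else "no spikes")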

Anecdotally, it seems to me that systems biologists tend to use R while neuroscientists are more into MATLAB. This jibes with the engineering feel of the neuro community, and I certainly don’t mean to start a programming language flame war, but I do wonder whether moving towards the open-source languages R or Python might be useful.

I truly learned a lot from this paper, and in case the authors ever read this post, I’d like to thank them for putting effort into writing it so carefully and clearly, and apologize for any mistakes I may have made in summarizing it.

Reference

Nordlie E, Gewaltig M-O, Plesser HE (2009) Towards Reproducible Descriptions of Neuronal Network Models. PLoS Comput Biol 5(8): e1000456. doi:10.1371/journal.pcbi.1000456



Cahalane et al studied the hamster during its first 10 days of development, using a fluorescent dye to trace the growth of its axons.

One thing they noticed was a bias towards growth along the medial/lateral rather than the anterior/posterior axis of the flattened cortical hemisphere. They quantify this bias as anisotropy, using two delta functions defined on a circle. (Anisotropy is big in diffusion tensor imaging, too.) When they compare the anisotropy in the development of grey matter vs. white matter, they find that white matter is more anisotropic:

[Figure: grey matter / cortex = gray dots; white matter = red triangles; doi:10.1371/journal.pone.0016113]
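
For a sense of how such a bias can be quantified, here is one simple stand-in measure (not the authors' two-delta-function fit): treat each axon segment's direction as axial data, double the angles, and take the length of the mean resultant vector, which runs from 0 (isotropic) to 1 (all segments share one axis).

    import numpy as np

    def axial_anisotropy(angles_rad):
        """Mean resultant length of doubled angles: 0 = isotropic, 1 = one shared axis."""
        doubled = np.exp(2j * np.asarray(angles_rad))
        return abs(doubled.mean())

    rng = np.random.default_rng(1)
    iso = rng.uniform(0, np.pi, 1000)                         # directions spread evenly
    biased = rng.normal(loc=np.pi / 2, scale=0.3, size=1000)  # mostly along one axis
    print(axial_anisotropy(iso), axial_anisotropy(biased))    # low value, then high value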

By analyzing simulated networks they show the effects of anisotropy on growing axon connections to other nodes:

Each point is avg of 10 networks, each with 2500 nodes, 10 axons, and 1 mm avg axon length; doi:10.1371/journal.pone.0016113

They also consider the modularity of their networks. Formally, modules are non-overlapping communities of nodes, here delineated by their location; if the partition is chosen well, each module contains more within-community edges, and fewer between-community edges, than expected by chance. The authors find good evidence for modularity in their axon traces, mainly because there are so many short connections, which become even more numerous when axons are more anisotropic.
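
As a toy illustration of that criterion, a spatial network built from mostly short-range links scores a high Newman modularity Q; this is just a sketch with networkx on made-up geometry, not the authors' pipeline:

    import networkx as nx
    from networkx.algorithms import community

    # Toy spatial network: nodes scattered in the unit square, edges only between
    # nearby nodes, loosely in the spirit of short-range axon growth (not their data).
    G = nx.random_geometric_graph(200, radius=0.15, seed=0)

    modules = community.greedy_modularity_communities(G)
    Q = community.modularity(G, modules)
    print(f"{len(modules)} modules, Q = {Q:.2f}")  # Q > 0: more within-module edges than chance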

This is a great way to quantify networks, and it would be nice to see this type of structural data correlated with function. For example, how do more modular networks behave? One suggestion is that modular structures might lead to more specialization in sub-problems, enabling more rapid adaptation to a specific goal. More modular tasks may take less effort, whereas more global tasks like working memory would take more effort.

This makes sense, but what’s the trade-off or downside to modularity? If modularity is so good, why isn’t the brain more modular? Possibly because given finite resources, specialization is antagonistic to plasticity.

Reference

Cahalane DJ, Clancy B, Kingsbury MA, Graf E, Sporns O, et al. (2011) Network Structure Implied by Initial Axon Outgrowth in Rodent Cortex: Empirical Measurement and Models. PLoS ONE 6(1): e16113. doi:10.1371/journal.pone.0016113

Meunier D, et al. (2010) Modular and hierarchically modular organization of brain networks. Frontiers in Neuroscience.


Generally, components of a system can deviate from optimality to different degrees. To visualize this, think of a two-component system, with components x1 and x2. Imagine that x1 has a higher probability of being in a non-optimal state, or in other words, has a more slowly decreasing objective function:

[Figure: on the left, the region of high probability is wider for x1 because its objective function decreases more slowly; on the right are contour plots, so each line connects points of equal value; doi:10.1073/pnas.0905336106]

Perez-Escudero et al (’09) were interested in the deviations from the minimum wiring configuration in the current connectome of C. elegans. Their assumption for optimality is that neurons should be in positions that minimize the cost of the “edges” between them; this cost is their objective function.

First they calculate the deviation of each neuron’s position from its position in the theoretical minimum-wiring configuration. Then they show that neurons with fewer wires or “connections” to other neurons tend to show larger deviations. This makes sense because the cost of their deviating from the optimum is lower.

[Figure: A = neuron positions on the line indicate no deviation from optimality; B = the blue line is an inverse quadratic fit, indicating that deviations from optimality have a parabolic cost with respect to the number of connections; C = random redistribution of the deviations of neuron positions from the optimum, where only 0.033% of permutations have a lower cost; doi:10.1073/pnas.0905336106]
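
Here is a minimal sketch of the general recipe, assuming a wiring cost quadratic in wire length on a one-dimensional body axis (one common choice in the wiring-optimization literature, not necessarily their exact cost). With that cost, each unpinned neuron's optimal position is the mean of its partners' positions, so simple relaxation recovers the predicted layout, and the deviations are the distances between measured and predicted positions. Toy data only, not the worm:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 30
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < 0.1]
    pinned = {0: 0.0, n - 1: 1.0}      # fixed endpoints standing in for sensory/muscle attachments
    measured = rng.random(n)           # toy "measured" positions along the body axis

    predicted = measured.copy()
    for _ in range(500):               # Gauss-Seidel relaxation of the quadratic wiring cost
        for i in range(n):
            if i in pinned:
                predicted[i] = pinned[i]
                continue
            nbrs = [j for (a, j) in edges if a == i] + [a for (a, j) in edges if j == i]
            if nbrs:
                predicted[i] = np.mean(predicted[nbrs])

    deviation = np.abs(measured - predicted)
    degree = np.array([sum(i in e for e in edges) for i in range(n)])
    # With random toy positions this correlation is just noise; the point is only the
    # recipe: predicted layout from the cost function, then deviations per neuron.
    print(np.corrcoef(degree, deviation)[0, 1])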

 

They say that ~15% of C. elegans neurons have significant deviations from optimality. Additional analysis reveals that some of the neurons deviate from optimality due to local minima in the cost of wiring, which is a common tendency in evolved systems. This analysis is very interesting, and one reason it is possible at all is that the connectome of C. elegans has been at least partially mapped.

Reference

Perez-Escudero A, et al. (2009) Structure of deviations from optimality in biological systems. PNAS. doi:10.1073/pnas.0905336106


Noise in neural systems is a hot topic, and in the past week two papers have brought new perspectives:

1) Cafaro and Rieke (here) discuss how correlated noise between inhibitory and excitatory inputs is necessary for neurons to effectively integrate those signals. They show this by recording the activity of retinal neurons in response to light.

As a control, they find cross-correlations in signal activity (using MATLAB’s xcov) when the two neurons are recorded simultaneously, but not when they are recorded non-simultaneously. Then, they estimate variability in synaptic responses by subtracting the average synaptic input from each individual trial. The peak correlation of these residuals ranges from 0.15 to 0.5 when the signals are recorded simultaneously. Noise correlations in non-simultaneous recordings were much smaller, but nonzero, which they attribute to a slow drift in the stimulus response.
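
Here is a rough sketch of that residual analysis on made-up data (their recordings and MATLAB's xcov are the real thing; this only shows the subtract-the-trial-average-then-correlate step):

    import numpy as np

    rng = np.random.default_rng(7)
    n_trials, n_time = 50, 1000
    signal = np.sin(np.linspace(0, 6 * np.pi, n_time))       # shared stimulus-driven part
    common = 0.5 * rng.normal(size=(n_trials, n_time))       # noise common to both inputs
    exc = signal + common + rng.normal(size=(n_trials, n_time))
    inh = signal + common + rng.normal(size=(n_trials, n_time))

    res_e = exc - exc.mean(axis=0)     # residual = single trial minus trial average
    res_i = inh - inh.mean(axis=0)

    # Zero-lag correlation of the residuals, trial by trial (xcov would also scan lags).
    r = [np.corrcoef(res_e[k], res_i[k])[0, 1] for k in range(n_trials)]
    print(np.mean(r))                  # ~0.2 here; depends entirely on the noise mix chosen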

Finally, when they eliminate this correlated noise, it decreases the accuracy of the neuron’s spiking responses to the light input.

2) Schwalger et al (here) study the impact of noise on neural interspike interval statistics. In particular, they distinguish between two different types of noise: 1) fast fluctuation noise, which comes mainly from ion channel noise, due largely to the speed of ion conductance at the synapse, and 2) slow adaptation noise, which could come from calcium fluctuations in calcium-activated potassium currents, like BK channels.

As you can see below, simulations show that these types of noise produce different interspike interval histograms. In particular, the model which includes slow adaptation noise (shown on linear axes in B and log-log axes in D) cannot be fit by the inverse Gaussian distribution.

[Figure: ISI histograms; determ. adap. = dominated by noise type 1, fast fluctuations; channel model = dominated by noise type 2, slow stochastic adaptation; theory = fit by their “colored noise approximation,” which takes into account the slow adaptation noise]

As you can see, simulations dominated by slow adaptation noise yielded distributions with higher skewness and kurtosis. The authors suggest that this might allow interspike interval recordings to delineate the main contributor to noise in a given neuron or class of neurons.
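
Here is a sketch of the kind of check this suggests, on stand-in interspike intervals rather than their simulations: fit an inverse Gaussian and compare its shape statistics with the empirical ones.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    isis = rng.gamma(shape=2.0, scale=10.0, size=5000)   # stand-in ISI sample, in ms

    mu, loc, scale = stats.invgauss.fit(isis, floc=0)    # fit with location fixed at zero
    ig_skew = float(stats.invgauss.stats(mu, loc=loc, scale=scale, moments='s'))
    print("empirical skew/kurtosis:", stats.skew(isis), stats.kurtosis(isis))
    print("skew of fitted inverse Gaussian:", ig_skew)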

Noise is inevitable in biological systems, and this is especially the case in neurons. The first paper shows one way organisms are able to not only cope with noise but actually make use of it. The second paper suggests that some neurons, depending on the sources of their noise, might have qualitatively different spiking statistics.

References

Cafaro J, Rieke F (2010) Noise correlations improve response fidelity and stimulus encoding. Nature. doi:10.1038/nature09570

Schwalger T, Fisch K, Benda J, Lindner B (2010) How Noisy Adaptation of Neurons Shapes Interspike Interval Histograms and Correlations. PLoS Comput Biol 6(12): e1001026. doi:10.1371/journal.pcbi.1001026


Distinguishing correlation from causation in signal activity can be very useful in describing how brain circuits work. The reasons are fairly obvious. In particular, it allows you to say whether a given neuron is “upstream” or “downstream” of another in a particular network, and thus understand how the signal propagates.

Singh and Lesica have developed a new measure for describing the relationship between pairs of neurons based on dependencies in their spike activity. They call their measure “incremental mutual information.”

The algorithm is meant to work on time series data, and it follows these fairly intuitive steps:

1) At a given time point, measure the entropy of neuron X after conditioning on a) the past / future values of X, and b) the past / future values of neuron Y, excluding the value of Y at the delay of interest.

2) Measure the entropy of neuron X, conditioning on the same values as in (1), but also conditioning on the value of Y at the delay of interest.

3) Subtract (2) from (1) to find the reduction in entropy that occurs from considering the value of Y at the delay of interest.

4) Normalize (3) as a fraction of its maximum possible value.

Here is one such two-neuron model, in which Y drives X with a strong static connection and a delay of 4 discretized time units:

delta = delay between Y's action and X's response, n= time point of interest

They simulated 1,000,000+ data points with the above model, varying the delay at which the statistical dependencies were calculated. As you can see below, the incremental mutual information measure shows a much sharper peak at delay = 4 samples, where it should peak once you account for the added Gaussian noise:

[Figure: x axis = various delay amounts; IMI = incremental mutual information; corr. coeff. = the cross-correlation function calculation, the “standard” method they compare against]
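
Here is a drastically simplified sketch of the underlying idea on toy binary spike trains: condition away part of the history (only X's previous bin here, whereas the authors condition on much more of X's and Y's history), then ask how much information Y at one specific lag still carries about X. The values peak at the true delay of 4 bins. This is my own toy, not their estimator:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy binary spike trains: Y drives X with a fixed delay of 4 bins plus background noise.
    T, delay, p_y, p_noise = 100_000, 4, 0.1, 0.02
    y = (rng.random(T) < p_y).astype(int)
    x = np.zeros(T, dtype=int)
    x[delay:] = y[:-delay]
    x = np.maximum(x, (rng.random(T) < p_noise).astype(int))

    def entropy(counts):
        p = counts / counts.sum()
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    def cond_mi(x_now, y_lag, x_prev):
        """I(X_n ; Y_{n-d} | X_{n-1}) for binary series, by direct counting."""
        joint = np.zeros((2, 2, 2))
        for a, b, c in zip(x_now, y_lag, x_prev):
            joint[a, b, c] += 1
        # I(A;B|C) = H(A,C) + H(B,C) - H(A,B,C) - H(C)
        return (entropy(joint.sum(axis=1).ravel()) + entropy(joint.sum(axis=0).ravel())
                - entropy(joint.ravel()) - entropy(joint.sum(axis=(0, 1))))

    for d in range(1, 9):
        val = cond_mi(x[d + 1:], np.roll(y, d)[d + 1:], x[d:-1])
        print(f"lag {d}: {val:.4f} bits")   # should peak at lag 4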

Although their approach shows improvements over the standard cross-correlation function, it might have been nice to see comparisons to the other possible approaches the authors mention in the introduction, namely Granger causality and transfer entropy. Another downside to model-free approaches like these is that they can’t easily be applied to large populations. Nevertheless, improvements in this field will be very important for modeling circuits given physiological data, and we’ll continue to track progress here.

Reference

Singh A, Lesica NA (2010) Incremental Mutual Information: A New Method for Characterizing the Strength and Dynamics of Connections in Neuronal Circuits. PLoS Comput Biol 6(12): e1001035. doi:10.1371/journal.pcbi.1001035


The neurotransmitter release dynamics of the CA3-CA1 hippocampal synapse represent a commonly studied system. Nadkarni et al recently published a paper on the dynamics of en passant axonal boutons that is pretty computationally intensive. Of interest are the bouton parameters they included in their model, which were:

  • voltage-dependent calcium channels (VDCC), numbering between 1 and 208 units, with the max radius of the cluster = 66 nm
  • plasma membrane calcium ATPase pumps (which keep the resting calcium level at 100 nM), with a surface density of 180 μm^-2
  • calbindin (the 28,000 dalton form), the mobile intracellular calcium buffer that modifies the calcium diffusion rate (~ 50 μm^2/s)
  • an active zone with seven docked vesicles, each with its own synaptotagmin calcium sensor
  • the distance between the active zone and the voltage-dependent calcium channels, marked as lc, and varying in length between 10 and 400 nm
  • dimensions of the en passant synapse = 0.5 μm high/wide and 4 μm long

In their model, when an action potential arrives at the synapse, the voltage-dependent calcium channels open with some probability. Here, the amount of calcium that enters the bouton depends on the time course of the action potential, the number of VDCCs present on the membrane, the calcium conductance of open channels, and the total time each of the channels remains open. Calcium in the bouton can bind to calbindin, calcium pumps, or calcium sensors. If enough binds to the calcium sensors, the synaptotagmin sensor transitions to an active state and the vesicle containing the neurotransmitter is released. Moreover, each of these components has association and dissociation rates that can be tweaked.

Here is the visual display of their model with the above parameters included:

[Figure: model geometry; VDCCs = voltage-dependent calcium channels, PMCA = plasma membrane calcium ATPase pumps, lc = distance between the VDCC cluster and the active zone vesicles; arXiv:1004.2009v1]
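
To get a feel for how the pieces interact, here is a deliberately crude, well-mixed toy in Python; all rates and concentrations are illustrative placeholders, and the real model is a spatially resolved stochastic simulation, so this is only a cartoon: a brief calcium influx while the VDCCs are open, reversible buffering by a calbindin-like buffer, pump-like clearance back to 100 nM, and a Hill-type sensor with the classic cooperativity of roughly four calcium ions per vesicle.

    import numpy as np

    dt, T = 1e-5, 0.02                      # time step and duration (s)
    ca, ca_rest = 1e-7, 1e-7                # free calcium, resting level (M; 100 nM)
    buf_total, ca_buf = 4.5e-5, 0.0         # calbindin-like binding sites, bound amount (M)
    k_on, k_off, k_pump = 2e7, 20.0, 500.0  # placeholder rates: 1/(M s), 1/s, 1/s
    influx, t_open = 2e-2, 1e-3             # calcium influx (M/s) while VDCCs are open (s)

    peak = ca
    for step in range(int(T / dt)):
        t = step * dt
        j_in = influx if t < t_open else 0.0
        bind = k_on * ca * (buf_total - ca_buf) - k_off * ca_buf
        ca += dt * (j_in - bind - k_pump * (ca - ca_rest))   # pump drives Ca back to rest
        ca_buf += dt * bind
        peak = max(peak, ca)

    K, n_hill = 1e-5, 4                     # half-activation ~10 uM, cooperativity ~4
    p_release = peak**n_hill / (peak**n_hill + K**n_hill)
    print(f"peak free Ca ~ {peak * 1e6:.1f} uM, toy release probability ~ {p_release:.2f}")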

Now, variability in the parameters (number, length, distance between) of these components ought to affect the release probability and kinetics of vesicles. They found this to be the case. For example, in their model, decreasing the distance between the active zone and the voltage-dependent calcium channels, marked as lc, decreases the sensitivity of neurotransmitter release probability to calcium concentration:

[Figure: calcium sensitivity of the neurotransmitter release response for a range of distances, lc, between the calcium sensor and the voltage-dependent calcium channels; arXiv:1004.2009v1]

So, in order to fully simulate the function of actual synapses, you’d probably need to get some of this structural data! However, some of the parameters could probably be assumed to be the same between synaptic boutons. For example, the quantity and kinetics of calbindin could probably be assumed to be roughly such that the effective diffusion constant of calcium remains ~50 μm^2/s.

Reference

Nadkarni S, et al. (2010) Spatial and Temporal Correlates of Vesicular Release at Hippocampal Synapses. arXiv:1004.2009v1 [q-bio.NC]


Our models of synaptic action are still very incomplete and will benefit from more empirical research. In this direction, Antonova et al (here) cultured relatively young (2- to 3-week-old) hippocampal neurons from one-day-old rats. They bathed the neurons in either glutamate in Mg2+-free solution or a control solution. They then tagged synaptic puncta with fluorescent markers and visualized them with a laser confocal scanning system coupled to an inverted microscope. Synaptic puncta were identified as spots between 0.5 and 5 micrometers in diameter.
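
For readers who want to try something similar, here is a hypothetical sketch of that size gate in Python with scikit-image; the image, pixel size, and threshold choice are all stand-ins, not the authors' pipeline:

    import numpy as np
    from skimage import filters, measure

    pixel_um = 0.2                                   # assumed pixel size (um per pixel)
    rng = np.random.default_rng(5)
    image = rng.random((512, 512))                   # stand-in for one confocal frame

    mask = image > filters.threshold_otsu(image)     # global threshold
    labels = measure.label(mask)                     # connected bright spots

    puncta = []
    for region in measure.regionprops(labels):
        diameter_um = 2.0 * np.sqrt(region.area / np.pi) * pixel_um   # equivalent diameter
        if 0.5 <= diameter_um <= 5.0:                # the 0.5-5 um gate described above
            puncta.append(region)
    print(len(puncta), "candidate puncta")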

As expected, applying the glutamate led to evoked EPSCs in the cultured neurons. There was a corresponding increase in the average number of synaptic puncta:

[Figure: change in the number of synaptic puncta after glutamate application; doi:10.1371/journal.pone.0007690.g008]

Note that inhibiting protein translation via anisomycin caused a decrease in the number of synaptic puncta mainly at 60 and 180 mins, but also a slight decrease at 10 mins. Here is the model they use to explain these changes:

[Figure: their model of the changes; squares = more stable sites, green = filled sites, glu = glutamate neurotransmission, aniso = anisomycin (a protein translation inhibitor); doi:10.1371/journal.pone.0007690.g004]

The rapid (< 10 min) outgrowth is probably due to the activation of dormant appositions, as it is actin polymerization-dependent but not protein translation-dependent. On the other hand, the gradual (3 hr) increase might be due to an increase in actual appositions between pre- and postsynaptic neurons that is protein translation-dependent. In their conclusion they draw an analogy between this process and Hebbian learning at the level of individual puncta:

In addition to the increase in sites, some existing presynaptic puncta and structures were stabilized and stopped disassembling and reassembling. These two processes are reminiscent of two stages of synaptogenesis during late stage development: exuberant growth of new synapses followed by activity-dependent stabilization of some and pruning of others. Stabilization of existing puncta and structures during potentiation suggests a Hebb-type activity-dependent learning rule, in which puncta and structures that are present during the induction of potentiation (and therefore might contribute to it) are made more permanent.

Reference

Antonova I, Lu F-M, Zablow L, Udo H, Hawkins RD (2009) Rapid and Long-Lasting Increase in Sites for Synapse Assembly during Late-Phase Potentiation in Rat Hippocampal Neurons. PLoS ONE 4(11): e7690. doi:10.1371/journal.pone.0007690

