## Measuring causality in neural connections

December 12, 2010 by Andy

Distinguishing correlation from causation in signal activity can be very useful in describing how brain circuits work. The reasons are fairly obvious. In particular, it allows you to say whether a given connection is “upstream” or “downstream” in a particular network, and thus understand how the signal propagates.

Singh and Lesica have developed a new method for characterizing the relationship between pairs of neurons based on dependencies in their spike activity. They call their measure “incremental mutual information.”

The algorithm is meant to work on time series data, and it follows these fairly intuitive steps:

1) At a given time point, measure the entropy of neuron X after conditioning on a) the past / future values of X, and b) the past / future values of neuron Y, *excluding* Y’s value at some defined delay of interest.

2) Measure the entropy of neuron X, conditioning on the same values as in (1), but also conditioning on the value of Y *at* the delay of interest.

3) Subtract (2) from (1) to find the reduction in entropy that occurs from considering the value of Y at the delay of interest.

4) Normalize (3) as a fraction of its maximum possible value.
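The four steps above can be sketched with empirical entropy estimates on discretized spike trains. This is a deliberately stripped-down version: the paper conditions on a fuller past/future window of both neurons, whereas here the conditioning set is just X's immediate past, and normalizing by the step-1 entropy is one plausible choice for step 4, not necessarily the paper's exact normalization.

```python
import numpy as np
from collections import Counter

def cond_entropy(target, cond):
    """Empirical conditional entropy H(target | cond) in bits.
    `cond` is a list of tuples of conditioning values, one per sample."""
    n = len(target)
    joint = Counter(zip(target, cond))
    marg = Counter(cond)
    h = 0.0
    for (t, c), cnt in joint.items():
        h -= (cnt / n) * np.log2(cnt / marg[c])
    return h

def incremental_mi(x, y, delay):
    """Steps 1-4 in miniature: how much does Y at `delay` reduce the
    entropy of X_n beyond what X's own immediate past explains?"""
    n = len(x)
    start = max(delay, 1)
    xn = tuple(x[start:])
    base = [(x[i - 1],) for i in range(start, n)]               # step 1 conditioning
    full = [(x[i - 1], y[i - delay]) for i in range(start, n)]  # step 2: add Y at delay
    h1 = cond_entropy(xn, base)
    h2 = cond_entropy(xn, full)
    return (h1 - h2) / h1 if h1 > 0 else 0.0                    # steps 3-4
```

On a pair where Y drives X at a known lag, this quantity is large at the true delay and near zero elsewhere, which is the behavior the rest of the post illustrates.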

Here is one such two-neuron model, in which Y drives X with a strong static connection and a delay of 4 discretized time units:

*[Figure: model schematic. delta = delay between Y's action and X's response; n = time point of interest.]*
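A hypothetical stand-in for this two-neuron model (the exact filter and parameters in the paper differ): Y spikes at random, and X thresholds a delayed, gained copy of Y's drive plus Gaussian noise. The `gain`, `noise`, and threshold values here are illustrative choices, not the paper's.

```python
import numpy as np

def simulate_pair(n, delay=4, gain=2.0, noise=1.0, rate=0.2, seed=0):
    """Simulate a driven pair: Y spikes at `rate`, and X fires when
    Y's delayed drive plus Gaussian noise crosses a threshold."""
    rng = np.random.default_rng(seed)
    y = (rng.random(n) < rate).astype(int)   # Y: Bernoulli spike train
    drive = np.zeros(n)
    drive[delay:] = gain * y[:-delay]        # static connection at fixed delay
    x = (drive + noise * rng.standard_normal(n) > 1.0).astype(int)
    return x, y
```

With these settings, X's firing probability is roughly 0.84 in bins where Y fired `delay` samples earlier and roughly 0.16 otherwise, so the dependence is strong but noisy, which is the regime where the delay-estimation comparison below is interesting.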

They simulated 1,000,000+ data points with the above model, varying the delay at which the statistical dependencies were calculated. As you can see below, the incremental mutual information measure shows a much sharper peak at delay = 4 samples, where it should peak once you account for the added Gaussian noise:

*[Figure: x axis = candidate delay; IMI = incremental mutual information; corr. coeff. = the cross-correlation function, the “standard” method they compare against.]*
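A toy version of the baseline comparison can be run in a few lines: drive X from Y at a 4-sample delay with Gaussian noise (parameters hypothetical, as above), then scan candidate delays with the standard cross-correlation and take the peak.

```python
import numpy as np

# Y drives X with a 4-sample delay plus Gaussian noise (toy parameters).
rng = np.random.default_rng(1)
n, true_delay = 200_000, 4
y = (rng.random(n) < 0.2).astype(float)
x = np.zeros(n)
x[true_delay:] = (2.0 * y[:-true_delay]
                  + rng.standard_normal(n - true_delay) > 1.0)

# Scan candidate delays with the cross-correlation and find the peak.
delays = np.arange(1, 11)
xc = [np.corrcoef(x[d:], y[:-d])[0, 1] for d in delays]
best = int(delays[np.argmax(xc)])
print(best)  # → 4
```

In this simple static-connection case the cross-correlogram also peaks at the right lag; the paper's point is that IMI localizes the dependence more sharply, since the conditioning removes dependence that merely propagates through neighboring time bins.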

Although their approach improves on the standard cross-correlation function, it would have been nice to see comparisons to the other approaches the authors mention in the introduction, namely Granger causality and transfer entropy. Another downside of these model-free approaches is that they can’t easily be applied to large populations. Nevertheless, improvements in this field will be very important for modeling circuits given physiological data, and we’ll continue to track progress here.

**Reference**

Singh A, Lesica NA (2010) Incremental Mutual Information: A New Method for Characterizing the Strength and Dynamics of Connections in Neuronal Circuits. PLoS Comput Biol 6(12): e1001035. doi:10.1371/journal.pcbi.1001035
