Archiving the Hayworth-Miller 2019 debate about brain preservation

In 2019, Brain Preservation Foundation president Ken Hayworth was tweeting about brain preservation as a potential medical procedure.

Hayworth asked several scientists who had previously commented on the topic to engage in a debate on Twitter.

I found the discussion between Hayworth and Ken Miller especially interesting, both because it gets into the details of the science and because it is so illustrative of how brain preservation with the goal of potential future revival is discussed. I wanted to summarize it and document it here for posterity.

It’s hard to capture a non-linear Twitter conversation, but I did my best. For ease of reading, I’ve split the conversation into a few different sections.

0. Background: Hayworth’s 2010 article “Why brain preservation followed by mind uploading is a cure for death.”

Amy Harmon’s article about Kim Suozzi and the Brain Preservation Foundation:

Miller’s response article:

On June 9th 2019, Hayworth tweeted: “…Still waiting… waiting… for a single neuroscientist to engage publicly in #BrainPreservationDebate. To argue not that it might not work (duh) but to argue why they are so sure it won’t as to withhold the right to choose from terminal patients.”

On June 15th, Hayworth tweeted, “Many of us believe in the long-term success of neuroscience, all the way to mind uploading technology that will eliminate disease/aging. But are clear-minded that this will take centuries. Brain preservation is the ONLY viable bridge for us today. #BrainPreservationDebate”

1. Beginning: On June 16th, Hayworth tweeted to Miller (@kendmil): “@kendmil given your editorial in the NYT …  I was wondering if you would be willing to address this question about Aldehyde-Stabilized Cryopreservation’s ability to preserve long-term memories?”

Miller tweeted back: “Sorry but this is beyond my expertise. I don’t know exactly what aldehyde-stabilized cryopreservation does or does not preserve at the molecular level. But I also am doubtful that we know enough to know precisely what must be preserved molecularly to preserve long-term memories. 

But even assuming you could preserve, and *enumerate*, every molecular structure/location/state/interaction — I think the bigger question is what would it take to reconstruct a working brain or mind from that. As I argue in that NYT article, we are incredibly far from that.”

2. Timeline: Hayworth responded to the timeline part by agreeing: “I completely agree. We are probably a century or more away from having the basic neuroscience understanding and technology to scan and simulate a preserved brain. But ASC provides that time and more. That is the argument being put forward for #BrainPreservationDebate”

3. Molecules: Hayworth responded to the molecules part by quoting a recent review: “Thanks for response. ASC preserves everything that glutaraldehyde preserves (connectivity, ultrastructure of synapses, ion channels, mRNA, etc.), it just follows this with inert cryopreservation so brain can be stored for millennia. Seems a wide enough net [to] encompass LTM theories”.

Miller: “What if any disruptions would be expected at the molecular level? Is the idea that it would freeze every molecule in place?? e.g., every CamKII molecule and its phosphorylation state? I also wonder if there could be dynamical interactions that get lost in freezing a snapshot…?”

Hayworth: “Glutaraldehyde (GA) crosslinks proteins in place within seconds and immobilizes other important classes of biomolecules (e.g. mRNA) by trapping them in the fixed matrix. Phosphorylation states appear to be preserved. Quoting from a recent review: “[N]umerous studies have shown that various post-translational modifications are preserved following GA fixation, including phosphorylation (Sasaki et al., 2015)…””

4. Synaptic weight stability: Hayworth also points out: “But you know that CamKII is not in a position to effect millisecond neuronal transmission directly. It is part of feedback loops ( …) that ultimately stabilize the true functional synaptic weight -dependent on receptor proteins like AMPA.”

To which Matt Krause (@prokraustinator) responds: “Is there really one “true” synaptic weight? I thought they constantly bounce around depending on what you’re recalling from the past, doing now, and planning for the future. If so, W alone isn’t enough; you need dW/dt too. I think this is what @kendmil means by dynamics.”

Hayworth: “That makes no sense from the perspective of storing long-term memories. Weights may change for other reasons (short-term memory) but something has to remain stable to encode long-term memories.”

Krause: “Why not? Stable doesn’t necessarily mean static. Even in computer memory (DRAM), the capacitor voltage is changing all the time (regularly and again when read out), and we’ve designed that to be stable, which isn’t obviously true for biological memories.”

As far as I can tell, Hayworth didn’t respond to this. 
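
An aside from me, not part of the thread: Krause’s DRAM analogy is easy to make concrete. In the toy sketch below (my illustration; the decay and refresh constants are arbitrary), the physical variable decays continuously and is rewritten on every refresh cycle, yet the encoded bit never flips. “Stable” is a property of the encoding plus its maintenance dynamics, not of the raw physical state.

```python
# DRAM-style storage: the capacitor voltage leaks and is periodically
# refreshed, so the physical state is always moving, but the bit is stable.
v = 1.0                                  # voltage encoding bit 1
trace = []
for t in range(300):
    v *= 0.98                            # charge leaks away every tick
    if t % 20 == 19:                     # refresh: threshold, then rewrite
        v = 1.0 if v > 0.5 else 0.0
    trace.append(v)

print(min(trace), max(trace))            # voltage wanders by ~30%...
print(all(x > 0.5 for x in trace))       # ...but the bit never flips: True
```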

5. Molecular correlations: Regarding CamKII feedback loops, Hayworth also argued: “These feedback loops contain a plethora of molecular and structural modifications that all correlate with the functional strength of a synapse. GA would have to erase ALL of this correlated information to prevent the possibility of future decoding.

In fact, there is plenty of evidence that functional synaptic weight is simply correlated with synapse size. … …  

More recent EM studies: …
“[T]o increase synaptic strength, a synapse must enlarge. The presynaptic terminal enlarges to accommodate more vesicles and active zones. The postsynaptic structure… enlarges to accommodate more receptors, scaffold and regulatory proteins.” []

Not sure what “dynamical interactions” you might be referring to that could be required for long-term memory storage? Surgical procedures like … shut down neural activity without loss of LTM.

Bottom line: Neuroscience community has already developed really good methods to preserve brains specifically to study the molecular and structural changes involved in learning and LTM. [2015 ASC paper] … allows these to be cryostored indefinitely. #BrainPreservationDebate”

6. Synapses: On June 18th, Hayworth tweeted: “@kendmil Wondering if this adequately addressed your concerns? I am trying to open up a space for calm, rational dialog amongst neuroscientists regarding this. I thought your blanket statement in the NYT saying brain preservation is impossible today…

“It will almost certainly be a very long time before we can hope to preserve a brain in sufficient detail for sufficient time that some civilization much farther in the future… might have the technological capacity to “upload” that individual’s mind.” [ed: this is Hayworth quoting Miller’s article]

… was misleading and designed to shut down such rational conversation. I am hoping that you might throw me a bone and say you support further dialog within the neuroscience community now? #BrainPreservationDebate”

Miller responded: “At this point I can’t stand behind the statement that it will be a very long time before we can preserve a brain sufficiently, because I don’t feel like I know enough to be certain of that statement. I don’t think it changes any of the main thrust of my article, which was 1/

about how very far in the future is the prospect of being able to reconstruct a mind even from a perfectly preserved brain. I will add, though, you made arguments why you don’t need to perfectly know the status of all the molecules at each synapse, because many factors are 2/

correlated with synaptic strength. But there are two problems with that argument: first, even if all we had to know about synapses was their strength, we have no idea with what precision we would need to know that strength to reconstruct the mind of an individual. Second, we 3/

need to know much more than the strength. As I pointed out in the NYT article, we also need to know how the synapses will learn; and in order to be able to learn quickly while retaining memories for a long time, the synapse appears to need to be quite complex, so that its 4/

internal structure controls how plastic it is and this in turn, along with the synapse’s strength and dynamics, can be controlled by experience. See the work on the cascade model of Fusi and Abbott and more recent work of Benna and Fusi. They have speculated that it is the 5/

need for this complex, dynamic regulation of plasticity as well as of strength that is why the PSD is one of the most complex known biological machines, constructed out of varying numbers of copies of over 1000 different proteins. So it seems quite likely that if you do not 6/

know the full structure and relationships of all of these molecules at the PSD as well as those in the presynaptic terminal, that you would not be able to recreate the brain’s function — the brain would either learn very slowly or forget very quickly. Will your preservation preserve the states and relationships of all of these molecules at every synapse?

Hayworth responded: “Thank you for your thoughtful response. Let me address the three problems you mention:

Q1: “Even if all we had to know about synapses was their strength, we have no idea with what precision we would need to know that strength to reconstruct the mind of an individual.” 1/

A1: Reconstructing “the mind of an individual” to infinite precision is clearly impossible. Our brains are already noisy, chaotic systems. We are continually forgetting old memories and learning new ones and yet we consider our individuality to remain intact. 2/

People willingly undergo brain surgeries like hemispherectomies, to save and improve the quality of their life, with the understanding that some fraction of their personality and memories will change. ‘Success’ in mind uploading should be viewed from this same perspective. /3

A terminal patient choosing brain preservation with the hope of future revival via mind uploading is making the same type of rational judgement –faced with the alternative of oblivion I choose to undergo an uncertain surgical procedure that has some chance of restoring most of /4

the unique memories that I consider to define ‘me’ as an individual. Hopefully this makes clear that I am rejecting a ‘magical’ view of the self. An individual’s mind is computational and, just like with a laptop, an imperfect backup copy is better than complete erasure. /5

Now I believe there is some rough consensus on how perceptual, declarative, procedural, emotional, and sensorimotor memories are stored in the brain and how they interact to give rise to mind (e.g. …). /6

Such learning and memory is stored as changes to synapses and possibly intrinsic excitability of neurons in recurrent networks which change the attractor dynamics of these networks. /7

Generally, representations in the mind are particular firing patterns of neurons (attractor states) and the process of thought is guided by the attractor dynamics defined by the sum total of the memories laid down over our lifetime. /8

The goal of mind uploading then is to approximate, in a computer simulation of the preserved brain, the attractor dynamics that were present in the original biological brain.  /9

We know these attractor dynamics must be relatively robust to noise (e.g. quantal release statistics) and damage (concussion, surgery). /10

These noise considerations imply tremendous redundancy in the encoding of learning and memory, implying that the attractors should be somewhat robust to noise in our determination of the synaptic weight matrix itself. /11

If we wanted to, we could design experiments specifically designed to determine the noise tolerance of brain attractor dynamics to synaptic changes. /12

Measuring the effects of neurotransmitter blockers and optogenetic perturbations on attractor dynamics would be one way of doing so. For example: … and …  /13

We could also ask whether the signatures of learning and memory can be gleaned from a small sampling of connectivity and synaptic sizes. The answer is yes, again suggesting significant redundancy:… and … /14

Finally, there is some recent evidence that synapse strength is quite tightly correlated with ultrastructural features: … /15

Q2: “We also need to know how the synapses will learn; and in order to be able to learn quickly while retaining memories for a long time, the synapse appears to need to be quite complex… work of Benna and Fusi…” /16

A2: There are two points here. First, that synapses are quite complex and that this complexity needs to be modeled accurately in a mind upload or learning will not work. I agree completely, but such complexity can be determined by side experiments on other brains. /17

Second, there may be ‘hidden variables’ besides the synaptic strength that encode information in every individual synapse. This is what Benna and Fusi’s … cascade model says. /18

Their model does not specify how these ‘hidden variables’ are stored but from the things that they do suggest I believe that Aldehyde-Stabilized Cryopreservation would indeed cover that range. /19

After all, we are talking about protein cascades whose dynamics are already being imaged at the synaptic level: … /20

That said, even if some of the information stored in such hidden variables was lost, the Benna and Fusi simulations imply that this would not significantly disrupt the attractors stored (the hidden variables simply helped later memories not overwrite earlier ones). /21

Q3: “Will your preservation preserve the states and relationships of all of these molecules at every synapse?” A3: A mind upload would model mathematically this complexity to implement the learning rules, but such interactions should be the same across different brains. /22

A brain preservation based on glutaraldehyde fixation should preserve the majority of proteins and their states at each individual synapse –sufficient to determine ‘hidden variables’ beyond synaptic strength if necessary. /23

Specific stains (e.g. …) could be used to tag key proteins to create a ‘molecularly annotated connectome’ that would reveal such hidden variables along with ultrastructure.   /24

I want to thank you for the fascinating discussion and great paper references. I hope you will agree that discussing what would be required for brain preservation and mind uploading should not be a taboo topic. /25

In contrast, it is a topic that can be approached with the current tools of experimental and theoretical neuroscience. We won’t be able to get a definitive answer anytime soon, but we should be able to identify key open questions. /end”
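
An aside from me: Hayworth’s robustness argument (tweets /8 through /11 above) can be illustrated with the standard toy attractor model, a Hopfield network. The sketch below is my addition; nothing in it is specific to ASC or to real cortex. It stores a few patterns with a Hebbian rule, then corrupts every weight with noise as large as the weights themselves and flips 10% of the cue bits; at low memory load, recall still lands on the stored pattern. It shows what “attractors robust to noise in the weight matrix” means concretely, not that real brains enjoy the same margin.

```python
import numpy as np

# Hopfield network: store P random patterns in N neurons with a Hebbian
# rule, then test recall after corrupting both the weights and the cue.
rng = np.random.default_rng(0)
N, P = 500, 10
patterns = rng.choice([-1, 1], size=(P, N))

W = (patterns.T @ patterns) / N          # Hebbian weight matrix
np.fill_diagonal(W, 0)

def recall(W, cue, steps=20):
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# Add weight noise as large as the weights themselves; flip 10% of cue bits.
W_noisy = W + rng.normal(scale=W.std(), size=W.shape)
cue = patterns[0] * np.where(rng.random(N) < 0.1, -1, 1)

overlap = recall(W_noisy, cue) @ patterns[0] / N
print(f"overlap with stored pattern: {overlap:.2f}")  # ~1.0 despite the noise
```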

7. Optimism: Miller responds: “You make a reasonable point about brain function being robust to noise given synaptic failures and quantal variability. But it is designed to function w/ that noise. It remains unknown, tho, how much noise in specifying synaptic weights and short-term synaptic dynamics 1/

can be tolerated. I’d also say the idea that representations are in general attractors is very far from clear — in a very few cases there is evidence of representational attractors. But that’s not really critical to your argument. Finally, re Benna & Fusi you argue that 2/

losing hidden variables “would not significantly disrupt the attractors stored (the hidden variables simply helped later memories not overwrite earlier ones).” But that’s the point — if later memories overwrite earlier ones then you are not you, your memories disappear 3/

very quickly. If we can’t start from the snapshot of the person’s last state and proceed forward with normal learning and forgetting — if either they can’t learn new things or rapidly forget old ones — then the living functioning learning remembering person is gone. 4/

More generally, I would just say that you make reasonable arguments but to me you appear extremely extremely optimistic. I take very seriously the depth of our ignorance as to how the brain works overall, how we are able to learn new things quickly w/o quickly forgetting old 5/

things, how the many different forms of memories work and values are computed and decisions made and unified perception achieved and unified actions taken and mood and motivation controlled and on and on … the level of ignorance between our taking specification of a bunch 6/

of molecules and connections and neurons and glia and their states and turning that into a functioning living learning motivated decision-making perceiving mind — that level of ignorance I find astronomical and humbling, and my own gut guess — and it can’t be any more than 7/

that — is that we’re talking time scales of 1000 years, or more, rather than 100. And given our extreme ignorance and given that you know you’re going to lose a fair amount of detailed molecular information, though you don’t seem very clear on exactly what, but given that 8/

it seems to me extremely optimistic to think that you will not lose any molecular or other information that would be critical to reconstructing a functioning sense of self. You’re entitled to your optimism, but that is how it looks to me.

And also, a reminder, we’re not just discussing knowing enuf to make a functioning mind, already every bit as daunting as I just described; but to make Judy’s mind as opposed to Linda’s or Sam’s – to capture the individual’s self, which is a whole other level of complexity.”

Hayworth responds: “Responding to … Optimistic –guilty as charged. But this is sounding like a discussion between two physicists in 1900 debating landing on the moon. Both agree equations allow it, but one insisting the other doesn’t realize how very difficult it will be. \1

[ed: here Hayworth quote-tweets Miller’s “robust to noise” tweet from above]

I hear you, it will be insanely difficult. May take 200 years to get first, 1000 to make routine. Aldehyde-Stabilized Cryopreservation can handle those spans as long as humanity decides that each generation will care for all previous in storage till we can all awake together. \2

I agree it might be impossible given the preservation techniques we have today –won’t know without more research. But I have faith humanity will eventually succeed. First in developing a sufficient preservation procedure, and much, much later in developing a revival procedure. \3

My ‘faith’ is based on careful reading of the literature regarding the synaptic encoding of memory… on careful reading of the aldehyde fixation literature… on personal EM of ASC preserved pig brains… and based on years of developing automated connectome mapping instruments. \4

You “take very seriously the depth of our ignorance as to how the brain works overall”. What about all the progress we have made? For example Lisman’s 2015 review of the field: … \5

And I wonder how you view the deep learning revolution? Is this not evidence that neuroscience is on the right track: … Networks now learn object recognition, language translation, driving, chess and go, etc. at human levels. \6

… offers evidence that processing in these artificial networks is similar to processing in the biological brain. … shows that biologically plausible rules can approximate backprop. \7

Connectome scanning is advancing so rapidly that the NIH is now endorsing a whole mouse EM connectome with human as a long-term goal: … Ion milling combined with multibeam SEMs is possible route: … \8

And synapse-level molecular annotation is becoming routine: … , … \9

Looking at such rapid progress I find it hard to share your pessimism that it will take 1000+ more years for neuroscience to be successful and that today we neuroscientists wallow in “extreme ignorance”. But you are entitled to your opinion. \10

But the real questions are: Can a terminal patient who, like myself, knows enough about neuroscience to understand the speculative and uncertain nature of the endeavor, give informed consent? \11

Should I have the right to choose brain preservation over certain oblivion, or should that right be withheld from me because someone like you believes ‘it might not work’? \12

Should I have the right to a well-researched, high-quality, regulated preservation procedure, performed pre-mortem in hospital, based on the best techniques that neuroscientists have developed to preserve the molecular and structural correlates of memories (like ASC)? \13

Or should the scientific and medical community continue to turn its back on such research, leaving people like me no option but unregulated, ‘back-alley’, post-mortem cryonics? -the only option people like me have today. \14

Public dismisses brain preservation because they dismiss the core of neuroscience –instead believing that the mind is magic soul-stuff. Opinion pieces like your NYT are playing to the public’s incredulity not of mind uploading, but of the principles of neuroscience itself. \15

But within the neuroscience community I suspect your piece fell a bit flat, especially for young neuroscientists who believe theirs might be the generation to finally understand the brain and who are working hard developing new tools and pushing new computational models.  \16

I have met many neuroscientists who chose this field because it addresses the deepest puzzle of them all –the computational mind. Who think solving this puzzle will lead humanity to overcome biological limitations through mind uploading. \17

But these neuroscientists do not dare voice this enthusiasm out loud. Why? Because they are afraid people like you will ridicule them in the press for the sin of ‘taking neuroscience too seriously’. \18

Lest you think I am exaggerating, I assure you I have had many private conversations with neuroscience colleagues who agree with me but explain that saying so publicly will hurt their career. \19

And the brilliant young developer of ASC got ripped to shreds in the press by a mob of ‘magic soul-stuff’ believers, and the neuroscientists who were called on to defend him stabbed him in the back instead. … \20

Bottom line, this is what I am simply asking: Support research and debate within the neuroscience community regarding brain preservation. Do not suppress it through ridicule. \21

And if called by the press to give an expert opinion, don’t play to the mob of ‘magic soul-stuff’ believers who relish every time a neuroscientist says ‘we have learned nothing’. Instead support your field the way biologists did in the face of evolution deniers. \end”

8. Ethicists: Miller responds (second link): “There is the scylla and charybdis of, on the one hand, giving people false hope, having them spend their time and money pursuing an unachievable (at least currently) immortality; on the other, denying people a choice, to choose to be killed before (but presumably close to) 1/

their natural time of death to allow optimal preservation in pursuit of this hope. At this time I believe it is a false hope, and I choose to explain why I believe that so that others can be informed. They will also hear your perspective. As for choice, I believe that the 2/

terminally ill who are suffering should be allowed to take their own life at a time of their own choosing. You could stretch that to allowing the terminally ill to take their own life for a perfusion procedure, I wouldn’t argue strongly against that. But to ask the medical 3/

establishment, hospitals and doctors, to offer this, to sell this as a service, when it is certainly of unknown efficacy and I think of very dubious efficacy — there are a lot of ethical reasons why that shouldn’t happen. Tho, if perfusion, like other euthanasia, were legal 4/

for the terminally ill then presumably doctors could choose to participate. But even that is very dicey ethically, again because of all the issues around offering false hope and methods of, at the very best, unknown efficacy. I’ll leave sorting that out to the ethicists. 5/

For your other arguments, I haven’t ridiculed anyone, and I think neuroscientists should feel free to express informed opinions on these issues, but they should also welcome debate including different views like mine. Expressing my views is not playing to the mob or 6/

suppressing a field or ridiculing anything. It is precisely the debate you say that you want. I haven’t expressed any opinion on basic animal research on brain preservation. My concern is offering what I believe is false hope to people facing their imminent mortality. You are 7/

free to argue that the hope is not false. Re the other arguments you make, of course we’ve made tremendous progress, and continue to. But that doesn’t much change how fundamentally ignorant we are of how the damn thing works, of what it would take to build a working one. 8/

Progress is rapid but the project is vast. Progress in neural networks is exciting and certainly is suggestive that significant chunks of our intelligence can be understood from distributed, non-symbolic computation. But ask any leader in the field, we are extremely far from 9/

AGI, artificial general intelligence. Though NN’s currently provide our best models of some sensory systems, they are a long ways from the real sensory systems that incorporate top-down as well as bottom-up processing and that learn quickly, constantly and from few examples. 10/

NN’s are exciting but there’s nothing in current NN’s that promises quick progress in understanding the brain.”

Hayworth responds: “Agree, this is an issue for medical ethicists. First step is locking down what is known and unknown on the neuroscience side, then address ethics. Euthanasia rules should apply and all people should have access with no cost. A single philanthropist could ensure this for 1000s. 

Fantastic. I hope my reluctant colleagues will take heart that open, civil debate is possible while still pursuing a career.

Agree, glass is half empty/full. But NN are suggestive that complex learned functionality can be encoded in a ‘connectome’, and these systems can be used to explore what fidelity of weights etc. would be necessary for decoding.”

9. Cryonics: Roko Mijic also asks Miller: “”At this time I believe it is a false hope”

What does this really mean? 

I don’t like imprecise statements in these kind of debates, because it leaves room for later weaseling by exploiting the vagueness. 

Are you genuinely 99+% confident that cryo is impossible? If so say so

Otherwise I think there is a risk that a lot of people hear things like “false hope” and understand it to mean that cryo is totally impossible and unscientific, akin to homeopathy.

But then if it is shown to work one day, “false hope” will be “reinterpreted” to mean that we couldn’t be sure how it would work; a lot of people will have been erased from existence on the basis of some weasel words.

Miller responds (second link): “I’m 99+% confident that no one being cryo’d today will ever be revived. But what good does my saying that do you? @KennethHayworth might say the opposite. So you need to consider arguments, not declarations of conf. For one set of args, see my op-ed … 1/

More generally, I would say: we have no idea what information would be needed to reconstruct a functioning mind from a molecular/cellular brain snapshot. We do know that the problem involves layers and layers of cellular/molecular function that cannot just be reduced to, say, 2/

synaptic strengths and ion channel densities. We also seem unclear on exactly what molecular info is preserved by best cryo techniques. What are odds that that uncertain preservation just happens to capture all of the unknown info needed?”

Mijic: “What are odds that that uncertain preservation just happens to capture all of the unknown info needed?”


Why would you deviate strongly from this in either direction if you don’t know what is needed or what is preserved?”

Miller: “Call it Murphy’s Law, whatever can go wrong, will go wrong (w/ p approaching 1, not .5). Or, if you don’t know what you’re doing, it’s p->1 that you won’t get everything right. Or say there’re N factors you have to get, p=.5 for each, so p=1/2^N of getting them all.”

Mijic: “But what if there is really only one thing you need to know, and N different structures each record that thing, so if you preserve structures at random then the probability of success is 1-2^(-N)

This is the “thinking like a scientist/thinking like a cryptographer” take on it”

Luke Parrish: “This is a good nutshell, and may explain why computer scientists are disproportionately bullish on cryonics compared to other scientists. We expect redundancy to be hard to defeat, and only 1 copy to be needed. Information is leaky….”
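
An aside from me: the disagreement in this exchange is really about independence structure, and both toy calculations are worth writing out. In the sketch below (my framing; N and p are deliberately arbitrary, following Miller’s own p=.5), Miller’s model treats preservation as a conjunction of N independent requirements, while Mijic’s treats it as N redundant records of a single requirement.

```python
N, p = 20, 0.5

# Miller's model: N independent factors, ALL must be captured.
p_conjunctive = p ** N             # = 1/2^20, about 1e-6

# Mijic's model: one factor, recorded redundantly in N structures,
# ANY surviving copy suffices.
p_disjunctive = 1 - (1 - p) ** N   # = 1 - 2^-20, about 0.999999

print(p_conjunctive, p_disjunctive)
```

Identical inputs give near-certain failure under one model and near-certain success under the other, which is why the two sides can look at the same ignorance and reach opposite conclusions.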

Hayworth responds: “Let me clarify: The cryonics community hates me because I challenged them to publish evidence of connectome preservation and they failed. I do not disagree with @kendmil regarding the slim chance of cryonics today. But I think we neuroscientists know how to preserve brains right”

Mijic: “hang on I’m a bit out of the loop: didn’t someone actually win the prize? …”

Parrish: “Look closely, that was ASC — chemically fixed, *then* vitrified. Not compatible with cryonics as-practiced.”

Hayworth also responds: “Q: Do we know enough about the neuromuscular junction to ‘upload’ one? A1: Oh no, it has almost unfathomable molecular complexity. Thousands of journal articles have only begun to scratch the surface.
A2: Of course, it is just a switch. (I support this answer)”

10. Information: Mijic: “4/ Thus I feel that the article doesn’t really engage with the strongest argument for cryonics. Almost everything you talk about is going to be well understood by revival time so it’s irrelevant how complicated it is. What matters is correlation and information.”

Miller: “What I’m talking about in the article is not just general brain function, but the particularities of one brain vs another — not only how strong is each synapse and what are its synaptic dynamics, but what complex state are they in that controls how they will evolve under 1/

further experience. The individual’s brain has to learn not only memories but the structures that keep them stable while also allowing new learning. Similarly, what controls how the excitability of a given cell, or dendrite, will evolve under experience? In other words, the 2/

individual’s brain carries information not only about the strength/excitability of each element but about their mutability, each individually learned. If you can’t reconstruct all of that the brain isn’t going to work correctly. 3/

Your comment about 1 million years and how far we’ve come in 10,000 — we’ve come enormously far in understanding the physical world but we’re a lot less advanced at complex systems, and the brain is likely the most complex of all by far. Do you think we will figure out 4/

*everything* in 800 or 1000 or 10,000 or 100,000 years, or will there always be new scientific frontiers to understand. If the latter — how far down that road is understanding the brain. You just have to understand the enormous complexity, down to cellular/molecular 5/

operations controlling mutability up through the enormous complexity of the neurons and synapses and their short-term dynamics and connectivity and anatomy and on and on. It is easy to underestimate just how deep this complexity is. If you don’t underestimate it, then you come 6/

to believe that the chances of our capturing everything we would need to know to reconstruct an individual mind given our current ignorance are virtually nil.”

Hayworth: “Sorry, let’s stay on the science side. Q: Given a 10nm res EM of mouse retina do you think we could determine whether a given retinal ganglion cell is on center vs off center? …Trying to determine where you think complexity will make this impossible.”

Miller: “My biggest concern is the amount of information stored intracellularly at the molecular level controlling the dynamics/plasticity of the synapses, dendrites and neurons. So long as you are unsure of how much or which molecular information you can preserve, I think p->1 that 1/

you’ll be missing something essential. That’s as to your preservation method. As to how long it will take until we could take a perfectly preserved brain and make a mind out of it — which requires the ability to reconstruct all the informative bits of the preserved brain as 2/

well as to know how to dynamically assemble them into the individual’s working mind — well, we will get there someday, but I think it is a very very very long time — that it is much deeper and harder than is easy to imagine.”

Hayworth: “We are not really “unsure of how much or which molecular information you can preserve”. Glutaraldehyde preserves proteins, their positions, and their phosphorylation states. This includes ion channels and receptors. It preserves a range of other molecules (e.g. mRNA) in matrix.

It covers all the components suggested to be of importance in just about the full range of existing theoretical models.”

Miller: “Maybe the relevant question is, what wouldn’t it preserve?”

Hayworth: “Changes to protein tertiary structure. Loss of extracellular space. Loss of small ions and molecules. Fixation artifacts arising from first few seconds of living cells reacting to fix. All of this is in the literature. For example: …”

Miller: “I don’t understand. Your reference compares chemical fixation to rapid freezing. Is either of these your preferred glutaraldehyde method? The paper doesn’t say anything that I can see about preservation at the molecular level. ??”

Hayworth: “High pressure freezing (HPF) is only possible on tiny pieces of tissue (<1mm) but is considered as close to the living biology as you can study. The paper compares glutaraldehyde fixation to HPF to quantify the artifacts in glutaraldehyde fixation.

Aldehyde-Stabilized Cryopreservation begins with glutaraldehyde perfusion fixation so it has all of its artifacts. Follows with a slow perfusion of inert cryoprotectant to allow for long-term storage. Result is basically the same as glutaraldehyde alone but can last indefinitely”.

11. Predicting physiology: Miller: “Another problem with the spine anatomy->physiology hypothesis is that a spine doesn’t have just one “amplitude of its synaptic potential” — it’s dynamic, depressing, facilitating depending on the spike history. The info controlling that is not in the anatomy.”

Hayworth: “There is no evidence that any of this dynamic behavior encodes long-term memory. Probably all are reset during a concussion for example. You make good points about complexity but if they are not related to long-term memory encoding then they are irrelevant.”

Miller: “The dynamic synaptic behavior in response to trains of spikes absolutely will be involved in every aspect of brain function, because every percept, action, decision, memory storage or retrieval or use, act of learning involves sequences of spikes.”

Hayworth: “Again, you can shut down spiking with cold and the person survives with long-term memories …. If we were not discussing uploading but just long-term memory encoding at a conference you would not be bringing up these examples.”

Miller: “The memory is read out and used by patterns, trains, of spikes. The synapses will have changing strengths depending on their spike history. Take those dynamics away and the read out, use, anything the brain does will be changed, probably by quite a lot. The shutdown and 1/

reactivate example doesn’t speak to this. It presumably (?) says you can reset all your synapses to their “I haven’t seen a spike for a long time” state (or maybe freeze them in some other state) and it still works, but the pt is it works by using its synapses with their dynamics”.

12. More molecules: Konrad Kording (@KordingLab) tweets: “I believe that the exact configuration of proteins matter. Time (and in particular anything that disturbs protein configuration) will this delete.”

Hayworth: “The exact quantum state as well? Where are you getting this? We have a literature regarding synaptic function. Which proteins are you talking about in particular? What evidence for hypersensitivity to configuration with no correlated information like PSD size? If you give a precise model I can then address the question of whether glutaraldehyde fixation would preserve it.”

Kording: “As long as psd and synapse size do not well predict epsps/ipsps my model is simply “other molecular stuff”. Get me high quality predictions, ideally with in-vivo conditions and I will revoke my objection and become a fan.”

Hayworth: “First a reminder that glutaraldehyde fixation preserves ion channel and receptor proteins in place. You don’t think EPSCs/IPSCs can be reliably predicted based on these? Do you have a required precision based on some model (e.g. attractor memory)? /1

But addressing the PSD and synapse size question, here are some references:
Glutamate uncaging while recording EPSC (slice and in vivo): …”

Kording: “Nice paper and r2=.8 is good. Wonder how general it is and how it generalizes across situations. But good evidence.”
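
An aside from me: r²=.8 sounds high, but under a simple linear-Gaussian reading (my assumption, not anything claimed in the paper being discussed) it still leaves a residual standard deviation of about half the signal’s, i.e. roughly 50% relative noise on each individual weight estimate:

```python
import numpy as np

# If spine size explains 80% of the variance in synaptic strength, the
# residual std is sqrt(1/r2 - 1) = 0.5 of the signal std.
rng = np.random.default_rng(0)
r2 = 0.8
w = rng.normal(size=100_000)                       # "true" strengths (a.u.)
w_hat = w + rng.normal(scale=np.sqrt(1 / r2 - 1), size=w.size)

print(np.corrcoef(w, w_hat)[0, 1] ** 2)            # ~0.80 by construction
print(np.sqrt(1 / r2 - 1))                         # 0.5: per-synapse noise
```

Whether attractor dynamics would tolerate that much per-weight noise is precisely Miller’s open question about required precision.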

Miller: “Notice that it’s normalized per dendrite: x-axis, spine size relative to largest spine on the dendrite; y-axis, current relative to largest current of those spines. If all the differences between dendrites came from diffs in space constants to the soma, then in principle could 1/

calculate absolute currents given full knowledge of the neural anatomy and channel distribution, but that’s *if* — not proven. But the other pt is syn strength is both presyn and postsyn. Glut uncaging measures postsyn component, ie. all post glut receptors are activated. 2/

But presyn component involves how many vesicles are released, and that is (at least one thing that) changes (in the mean) with spike history in synaptic depression and facilitation. Need to know presyn behavior also to know synapse behavior.”

Hayworth: “Reviews … that I have read suggest that presynaptic structural changes (e.g. size of varicosity, # vesicles) correlate with postsynaptic ones. For example: …”

Miller: “Pt is, presyn release is dynamic. Synaptic depression can be due to vesicle depletion; facilitation can be due to increased p(release) due to increased [Ca++] in the presyn terminal. Different synapses have diff dynamic properties, these can greatly change synaptic function. 1/

See classic works of Tsodyks and of Abbott on many computational effects of depression/facilitation. Also, Markram in ’97 or so showed plasticity could be presynaptic, e.g. increasing p(release) so that first PSP was stronger but facilitation changed to depression”
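
An aside from me, for readers without these papers at hand: the depression/facilitation Miller is describing is usually captured by the Tsodyks-Markram short-term plasticity model. In the sketch below (my simplified implementation; the parameter values are arbitrary, not fitted to any data), a synapse with fixed parameters produces PSP amplitudes that depend on the recent spike history, which is the sense in which a synapse has “dynamics” beyond a single scalar strength.

```python
import numpy as np

# Tsodyks-Markram short-term plasticity: x = available resources,
# u = utilization. Each spike releases u*x (~ the PSP amplitude).
def tm_amplitudes(spike_times, U=0.2, tau_rec=0.5, tau_fac=0.8):
    x, u, last = 1.0, U, None
    amps = []
    for t in spike_times:
        if last is not None:
            dt = t - last
            x = 1 - (1 - x) * np.exp(-dt / tau_rec)  # resources recover
            u = U + (u - U) * np.exp(-dt / tau_fac)  # facilitation decays
        u += U * (1 - u)                             # facilitation jump
        amps.append(u * x)                           # release ~ PSP size
        x *= 1 - u                                   # vesicle depletion
        last = t
    return amps

# A 20 Hz train: the "same" synapse gives a different amplitude every spike.
print([round(a, 3) for a in tm_amplitudes(np.arange(10) * 0.05)])
```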

Hayworth: “Aren’t these for models of working memory as opposed to the types of long-term memory that would be involved in perception, procedural skills, declarative memories, etc. Long-term is what is important for identity preservation. Am I misunderstanding?”

Miller: “The goal is to reconstruct a functioning brain/mind, not just write down a list of memories, right? If your reconstruction scrambles the synaptic dynamics, then the reconstructed brain is going to have very diff activity patterns and compute very diff things than the orig brain.”

Andrew Hires: “A coma, medically-induced or otherwise, must scramble synaptic and circuit dynamics. So does a psychedelic experience. Yet, people’s minds are recognizably the same after.

IMHO, the network graph + general synaptic rules might be sufficiently self reinforcing to recover a mind.”

Miller: “I’m not saying you need to maintain your exact dynamic state. You can reset it. I’m saying the dynamic *operations*, starting from wherever you start from, are a key part of the given brain’s function. Scramble that, and, if it works at all, it’s likely a very different brain.”

Hires: “I’d bet there is sufficient information in EM-resolvable synaptic ultrastructure to predict channel composition & synaptic dynamics to 1st order, given sufficient training data and a LOT more work sampling the synaptic properties of region to region projections. Open question.

With fast enough fixation, you could get reserve pool, readily releasable pool, proportion of docked, fused and recycling vesicles. Surely this has predictive power to synaptic depression and facilitation rates, particularly if same projection has been characterized in slice.”

Miller: “Q as always is how much predictive power can you get and how much do you need? But even if you say you can get syn strengths and dynamics, there’s a host of other issues — e.g., all the synaptic and cellular molecular factors controlling the degree of plasticity of each 1/

synapse, dendrite, neuron. As in models of Fusi/Abbott and Benna/Fusi, synapses probably learn their degree of plasticity, and this synapse-by-synapse learning may be critical to ability to learn new things w/o fast forgetting of old. Plasticity of excitability also likely 2/

involves learning. To what extent do you need to take apart the molecular structure of every synapse, dendrite, cell to recreate a mind? And probably many pieces of the story we haven’t even glimpsed yet (e.g., glymphatic system; new Ahrens zebrafish result on glial coding; …)”

Hires: “I agree much that is required to know we have not glimpsed yet. Q is could we bootstrap future discoveries to infer the needed data with sufficient precision from an optimally preserved brain. It’s a fascinating question and a fun debate for (probably) the rest of our lives.”

13. Individual vs generic: Mijic: “I think you’re wrong here @kendmil The goal of cryonics is only to preserve your memories and personality.

Working out exactly how a functional mind works can be done in “side experiments” in the future.

Including side experiments using your own DNA to create a virtual clone of yourself and study its brain, which would almost certainly tell you all of this stuff about how easily new info can be learned, or really anything about the brain other than its memory

if the future has all your memories they can take your DNA, create a virtual copy of you and expose it to the same events and then go measure these dynamic response properties if it’s necessary.

@kendmil I think we need to keep in mind very clearly what the goal of cryonics is: to preserve the unique information that is lost in death. Other information like general facts about brains or anything else that is basically a function of your DNA doesn’t need to be saved

A consequence of this is that any difference which doesn’t differ between identical twins doesn’t need saving. A lot of the things you’re complaining about (e.g. a brain that rapidly forgets things) fail this test.”

Miller: “You’re assuming that your “memory and personality” are staying and everything else is “general brain function”, not individual-specific. When I talk about learning without rapid forgetting, I’m talking about what is probably an individual-specific set of synaptic states 1/

an individual set of synaptic states determine their individual mutabilities learned thru the same processes by which you learn their synaptic strengths. Similar when I talk about dynamics I mean the at least partially learned 2/

patterns of dynamics of individual synapses. There is a lot more than synaptic strengths that defines an individual.”

Hayworth: “In support of @kendmil, he has very clearly stated a legitimate concern: There may be states in synapses that are hidden to EM but important for long-term memory. It would be better if he could say what these states are so we can see if glut preserves them but point taken.”

Miller: “So much is unknown, so it is impossible to point to what these states are. But expt has shown that synaptic dynamics change under learning (e.g. Markram papers, late ’90’s) and theory shows there is a problem combining quick learning with slow forgetting, for which one 1/

theoretical solution is complex internal synaptic states that control mutability.”

Mijic: “So, these states have no correlation whatsoever with anything that is preserved, they are robust over 50+ years of normal life, concussion, brain death, but they are destroyed in an information-theoretic sense by either the aldehyde or the cold or the cryoprotectant?”

Miller: “We don’t know how these states are coded so we can’t know what is or isn’t preserved. But given how much we don’t know, it is very hard to feel confident that current methods preserve all necessary information.”

14. Dynamics: Miller: “I’m not saying you need to maintain your exact dynamic state. You can reset it. I’m saying the dynamic *operations*, starting from wherever you start from, are a key part of the given brain’s function. Scramble that, and, if it works at all, it’s likely a very different brain.

Of course, they are always changing by learning, within cell-type constraints. But, just like synaptic strengths, the strengths of depression and facilitation presumably have some learning-produced structure. Scrambling either strengths or dynamics likely to greatly alter function”.

Parrish: “Even dynamic things have to be encoded in physical reality somehow. That seems to imply molecular structures, my analogy e.g DNA while it is being replicated. So if glutaraldehyde can fix something like that mid step, good chance it fixes dynamic brain operations too…”

Miller: “Of course it’s physical. The question is how much of the necessary information is preserved and reconstructable. As well as, of course, if/when everything needed is preserved, how many eons will it take us to learn to reconstruct all the necessary info and create a mind out of it.”

Parrish: “Perhaps a relevant data point would be what kinds of things are known not to be preserved by glutaraldehyde. It seems broad-acting on the level of visible detail seen through EM, but are there many important classes of molecule that are not acted upon?”

Hayworth: “Recent review: … I love that we have gone from “cryonics can’t be trusted because it can’t demonstrate it preserves what every neuroscientist knows is crucial (synaptic connectivity)” to “ASC can’t be trusted because the neuroscience textbooks may be totally wrong”

I think more neuroscientists should just embrace the fact that people are taking their models seriously. Our minds are just a product of neuronal computations defined by connectivity, ion channels, etc. And neuroscience has figured out how to preserve these indefinitely. Go team!

And team #neuroscience is just getting started. In the coming decades we will figure out how to map a glutaraldehyde-preserved mouse brain at the synaptic level and how to annotate this with whatever molecular info is needed to decode moderately complex learning and memories.

Many decades later team #neuroscience will figure out how to simulate a mouse brain from such a molecularly-annotated connectome. And perhaps by the beginning of next century we will be ready to upload the first human in a $100 billion Apollo-scale project.

That project to “put the first person in cyberspace and return them safely to consciousness” will answer all of our philosophical questions about mind uploading. A century later when uploading becomes routine our descendants will ask one question…

Why didn’t our ancestors in the early 21st century adopt brain preservation? And they will arrive at one answer: We were not killed by a lack of knowledge or technology, we were killed by our bad philosophy: …”

15. No copy problem: Michael Hendricks: “You are not the “same” if you can simultaneously exist as different entities…that is a physical impossibility. And there is nothing magically different about whether the sim exists before or after you’re dead.”

Hayworth: “I have to disagree with that. I can have multiple drafts of a program on several computers, some running simultaneously. They are all the ‘same’ when compared to starting over from scratch. I see no reason that same argument does not apply to us.

Q: If we assumed that the philosophical copy problem really did forbid ‘survival by backup copy’, what would this mean for a race of sentient robots where, unlike biology, copying programs and data are trivially easy? Doesn’t the copy problem imply sentient robots cannot exist?”

Kording: “No two robots are *identical*. I have no idea why that would limit their sentience.”

Hayworth: “No one cares about ‘identical’. If I make a backup copy of C3PO’s memories before a mission, he gets destroyed, and then the backup is put into another robot body then 99.9% of what made C3PO unique is still here to interact with. Same with us right?”

Kording: “I am totally with you. I just had to agree that @MHendr1cks was right about “identical not possible””

Miller: “I agree that there is no copy problem. In the science fic world where we could replicate your brain, it wakes up as you the same way you wake up as you in the morning — the you that went to sleep doesn’t exist anymore, something else is waking up with an experience of being 1/

continuous with the you that went to sleep. That wouldn’t be different if it’s a brain replica in 1000 years or you in the morning. Of course, each copy then goes off to have its own experience and individuates, like identical twins. But for now I think this is sci fic.”

16. Simulations not dispositive: Hayworth: “The ball is no longer with cryonicists or skeptics, it is now in the neuroscientists’ court. Do we believe our research ( …) and believe our field will eventually be successful ( …) or not. That is the story that is unfolding today.”

Kording: “Hey. I don’t see why my challenge is not valid. Show in a small system that you can simulate the ephys based on connectomics. I do not argue that cryonics is in principle impossible. I argue that central and easily testable assumptions have not been tested.”

Hayworth: “Please say ASC or something more precise. Saying cryonics is asking for misunderstanding.

People are dying today and want to take chance on ASC. Folks like Nectome are gearing up to answer that demand. Now is the time for the neuroscience community to set clear goal line they must cross.

Your ‘ephys based on connectomics’ challenge may be best, but needs clear success criteria. Must be answering a real concern not just a feel good demo. Again, people are dying now. The ‘preserves synaptic connectivity’ challenge was met and now the goal posts have moved.”

Kording: “And the critique that we know too little to have any confidence that EM structure or even the joint set of everything that is preserved with cryonics is sufficient stands and seems scientifically tenable.”

Miller: “Yes. I don’t think whether or not we can predict ephys from connectomics today is the right test. We can’t, but someday we’ll be able to. To me the real concern is how very little we know about how the brain works, and the likelihood that critical things currently unknown 1/

won’t be preserved. Unfortunately there’s no challenge or test for that, since it’s unknown unknowns. Tho we can point to some likely issues like complex synapse-specific internal states. There are other issues I can see, like how you get info in & out of simulated brain 2/

if you don’t preserve sensory input structures (retina, cochlea) to know which input neurons should carry which info and spinal cord and ganglia to know same for output neurons, but advocates can probably address that; or how many eons before we can read out preserved info 3/

and successfully simulate the operating, learning brain from it and the likelihood that civilization and the preserved brains both last that long, but if you’re optimistic enough that won’t look like a fatal problem. The main issue I think is how much is unknown.”

Kording: “but I think we can agree that no one can remember their own cryopreservation – there is too little time for proteins to be made or for structure to change.”

Hires: “That’s a feature not a bug”

Jprwg: “A question, which I hope makes sense: to what extent does the info cryonics can plausibly capture of a brain’s structure encode how the brain works, vs just encoding that individual? Do we at all get working brains ‘for free’ or will their full design need be explicitly modeled?”

Miller: “Definitely not for free. We need to understand how all the pieces make a dynamically working brain.”

Jprwg: “Thank you. To clarify: we could analogise general brain workings vs individual identity to a piece of software that can load & run different users’ data, eg a word processor. Are you saying then that cryonics gives us only the user data, not any of the software functionality too?”

Miller: “No, software/hardware separation is a bad analogy for the brain. To save enough to recreate an individual, you would presumably have to save everything that makes a working brain”

17. Conclusion: Hayworth: “Seems this thread has drifted from hardcore neuro…less productive. @kendmil would you agree that … demonstrates that function-from-connectome is possible at least to some level of precision?”

Miller: “It’s not in dispute that function derives from structure. Q is, how much and what structure do you need to re-create a dynamically working, learning, particular individual’s mind, and how long to develop the knowledge to read out info and re-create a mind.”

Hayworth: “Yes, and papers like … address this question. If learning was stored as subtle hidden synaptic molecules, or was incomprehensibly complex (as you have been alluding to) then they should have found nothing using primitive rabies virus tracing right?

“Q: what structure do you need” -paper suggests connectome is sufficient for visual RFs. ASC can preserve this (and much more), EM can image and computers can simulate this at small scale today.

I am trying to zero in on core of your objection so I can either address it directly (as I have tried with refs) or succumb to its irrefutability. Each element of your statement “re-create a dynamically working, learning, particular individual’s mind” I am trying to address. \1

The neural models we have today are “dynamical and learn” while based solely on things ASC preserves (morphology, connectivity, synaptic ultrastructure, receptors, ion channels). You countered there may be hidden variables that would prevent function prediction based on these. \2

I countered with studies that showed particular functions (e.g. …) could be determined based on a subset of these. Benna & Fusi is a theoretical ‘what if hidden variables?’ which can be refuted by evidence correct? \3

A “particular individual’s mind” is unique because of learning-related changes (Unless you are implying the philosophical copy problem?). I provided refs (e.g. El-Boustani 2018) showing such learning is encoded in interpretable structural changes preserved by ASC. \4

“how long to develop the knowledge” is being addressed by showing how far we have come already (e.g. understanding the synaptic basis of memory sufficiently to create false ones … , and to label and erase the synapses encoding one …) \5

“how long to…read out and re-create a mind” is addressed by advances in connectomics and molecular imaging that are in principle compatible with ASC preservation. Given our progress in all of these areas does my estimate of ‘one to several centuries’ really seem outlandish? \6

Q: Is evidence of RFs, auditory and contextual fear memories, etc. irrelevant to your objections because you believe that consciousness is built of different circuit elements and molecules than those studied by neuroscientists today? If so I can address that as well. \7

Miller (other links): “Ken, you are missing something basic to what I am saying, so let me try again. You are focused on long-term memories. But what you claim to want is to reconstruct a mind/brain. A mind/brain is a lot more than a fixed set of long-term memories. It has to operate in the world, 1/

creating motivations, making decisions, learning from ongoing experiences while retaining and utilizing and modifying (reconsolidation) prior memories. I don’t have a problem w/ idea that much of memory is in synapses, strengths & dynamical properties. But a mind that proceeds 2/

with those to new experiences and continual learning has to know how to modify synapses to learn from new experiences without losing old memories. We know theoretically there is a big problem with achieving that. One set of solutions are those of Fusi, where each synapse has 3/

learned a complex internal state that among other things codes that synapse’s mutability. If that were true, and you lost that internal state info, either the reconstructed brain could learn little new or it would quickly forget the old. More generally, a synapse is one of 4/

the two most complex molecular machines known — at least, a mammalian synapse — and those 1000+ different proteins, multiple copies of each, must be storing a lot more info than a strength. Even if it were true that the memories are largely in the strengths, the *function* 5/

of the synapse, the way it supports both new learning and memory maintenance/updating in the experiencing organism, must be far more complex than a scalar strength. And the functions involved in a brain operating itself and taking actions in interaction w/ experience – we have 6/

almost no understanding of how these work. Neurons are complex cells with complex internal signaling and changes in gene expression are clearly part of learning, again in ways we only dimly have glimpses of. What I am trying to say is your focus on what stores the info in 7/

long-term memories is missing 99+% of the action in what it would take to reconstruct a functioning mind. And most of that 99+% is yet unknown, so we have no way of saying what is critical to its preservation. 8/

Let me add, a lot of the examples you cite involve circuitry correlating with, being predictive of, receptive field (RF) properties. I’ve developed a number of the models of how circuits create various functional response properties of visual cortical cells. So I know these 9/

issues well. I believe we have learned a lot about how some basic functional response properties of V1 cells are constructed. But the fact is we still are very poor at predicting V1 responses to natural scenes, we only dimly understand how its responses are modulated by 10/

numerous other factors and areas, we have very limited understanding of how attention is achieved, and we haven’t even begun to understand how a unified visual percept is created out of ambiguous and sometimes rivalrous possibilities. In other words, we have lit up a bit of the darkness, but the surrounding darkness is vast, and we don’t even know how vast.”

Hayworth: “We know theoretically there is a big problem” -Sorry, sounds like literature I am unfamiliar with. Could I get an experimental ref? Experimentally determined cap differs by how much? Isn’t hippocampal replay an alternative solution to Fusi?

Miller: “See Fusi & Abbott 2007, Nature Neuro. Fusi, refs 5-7, identified problem w existing memory models given synapses with finite # of levels (bounded, finite resolution) – # memories scales as log(N) instead of N, where N is # of synapses. This paper more closely examines problem. 1/

This log(N) basically gives tradeoff between learning & forgetting. If learn quickly, must forget quickly. Fusi, Drew & Abbott 2005 and Benna & Fusi 2016 use complex internal synaptic states to ameliorate or solve problem.
Anyhow, I don’t really want to spend a lot more time arguing about these things. I would like you to correctly understand what I am saying, but don’t really have more energy for this otherwise.”

Hayworth: “I understand. I very much appreciate you taking the time to explain your objections. I think I understand them and will incorporate them into my thinking going forward. If you are going to be at SfN this year perhaps we can discuss more over a beer. I’ll buy the first round.”
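(An aside for readers who want a concrete feel for the Fusi-style capacity argument Miller cites: below is a minimal simulation of bounded binary synapses. This is my own toy sketch, not anything from the Fusi papers; N, q, and T are arbitrary illustrative numbers. Each synapse holds only a sign, each new memory overwrites a random fraction q of synapses, and the signal for an old memory decays geometrically, so the number of retrievable memories grows only like log N rather than N.)

```python
import numpy as np

rng = np.random.default_rng(0)

N, q, T = 10_000, 0.05, 400      # synapses, per-synapse switch probability, memories stored

w = rng.choice([-1, 1], size=N)               # bounded synapses: a bare sign each
memories = rng.choice([-1, 1], size=(T, N))   # random target patterns

for t in range(T):
    switch = rng.random(N) < q                # each memory overwrites a random fraction q
    w[switch] = memories[t, switch]

# Signal-to-noise ratio of each memory's trace in the final weights.
# The signal decays as (1 - q)**age while noise stays ~1/sqrt(N), so the number
# of memories above threshold grows only like log(N): learn fast, forget fast.
snr = (memories @ w) / np.sqrt(N)
predicted = np.log(np.sqrt(N) * q) / -np.log(1 - q)
print(f"retrievable memories: {np.sum(snr > 1.0)} (rough prediction ~ {predicted:.0f})")
```

Hippocampal replay, which Hayworth raises, is one proposed way around the tradeoff; the sketch only illustrates why the tradeoff exists in the first place.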

The neuron doctrine: a historical example of the unexceptionalism of the brain

As one of my manifestations of intellectual contrarianism, I like to collect historical examples of times when a largish group of scientists thought that a complicated theory was the best way to explain a set of facts, but then a simpler explanation turned out to be much better.

I especially like examples of this in neuroscience, where people are wont to postulate complicated theories about the way that we think.

There is perhaps no better example than the debate between the reticular theory of the nervous system and the neuron doctrine.

The reticular theory postulated a form of exceptionalism in the nervous system: that axons and dendrites seen on light microscopy were not attached to cells but were in fact a separate, non-cellular entity, forming their own protoplasmic network.

The neuron doctrine is, at least in hindsight, much simpler, postulating that axons and dendrites are extensions of cells, as occurs in other types of biology.

Cajal’s drawing of neurons in the chick cerebellum, from Wikipedia

The reticular theory had many proponents, including Camillo Golgi and Franz Nissl, and lasted from roughly 1840 to 1935. It’s easy to dismiss it now, but it was a reasonable idea at the time.

Now, though, it’s a good example of how theories postulating that the brain is extremely complicated and different from the rest of biology do not have a good track record.

Where the brain spends its energy

In investigating a crime, to pinpoint the culprit, the saying goes, “follow the money.” In science, the saying is (or at least, should be), “follow the ATP.”

A six-month-old paper serves as a nice review of this topic. The authors stratify tissue types by degree of myelination (none, developing, and adult) and break down where the energy goes:

  • action potential costs go to the Na+/K+-ATPase pumping that restores the ion gradients run down through voltage-gated channels
  • synapse costs go to postsynaptic membrane currents, presynaptic calcium entry, and neurotransmitter/vesicle cycling
  • oligodendrocyte costs go to the continuous Na+/K+ pumping that maintains their resting potentials
  • housekeeping costs go to protein/lipid synthesis and intracellular trafficking of molecules/organelles

That’s way more than I would have expected on housekeeping. But by far their most surprising finding is that the cost of maintaining the resting potentials in oligodendrocytes is so large that myelination doesn’t usually save energy on net–it depends on the firing rate of the neuron. That’s a heterodox bomb.

I suppose that myelination not leading to energy saving is weak evidence in favor of it doing something else, aside from speeding up spikes. Like, allowing for plasticity.
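To make the firing-rate dependence concrete, here is a back-of-the-envelope sketch of the tradeoff. The ATP numbers are placeholders I made up for illustration, not values from the paper:

```python
# Toy break-even calculation: myelin cuts the per-spike cost but adds a fixed
# oligodendrocyte maintenance cost. All numbers are illustrative placeholders.
ATP_PER_SPIKE_BARE = 1.0e8    # ATP per action potential, unmyelinated axon
ATP_PER_SPIKE_MYEL = 2.0e7    # ATP per action potential, myelinated axon
OLIGO_RESTING_COST = 5.0e8    # ATP/s to maintain oligodendrocyte resting potential

def net_saving(rate_hz):
    """ATP/s saved by myelinating an axon that fires at rate_hz."""
    return rate_hz * (ATP_PER_SPIKE_BARE - ATP_PER_SPIKE_MYEL) - OLIGO_RESTING_COST

break_even = OLIGO_RESTING_COST / (ATP_PER_SPIKE_BARE - ATP_PER_SPIKE_MYEL)
print(f"break-even firing rate: {break_even:.2f} Hz")   # 6.25 Hz with these numbers
for r in (1, 5, 10, 50):
    print(f"{r:>2} Hz: net saving {net_saving(r):+.2e} ATP/s")
```

Below the break-even rate, myelination costs energy on net; above it, it saves energy. That is exactly the sense in which the answer depends on the firing rate of the neuron.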


Harris JJ, Attwell D (2012) The Energetics of CNS White Matter. Journal of Neuroscience. doi:10.1523/JNEUROSCI.3430-11.2012

Synapse vacancies induce nearby axons to compete

During development, one axon typically comes to dominate each set of synaptic sites at a neuromuscular junction. This means that just one neuron controls each muscle fiber, allowing for specificity of motor function.

A nice application of laser irradiation allows researchers to intervene in the formation of axonal branches in developing mice to study this.

What they found was that irradiating the axon currently occupying the site spurred a sequence of events (presumably involving molecular signaling) that led nearby axons (often smaller ones) to take it over.

A 67-second, soundless video of one 1,000-step simulation of this process demonstrates the concepts behind this finding.

In the simulation, each circle represents a synaptic site, and each color an innervating axon. There are originally six colors.

At each of the 1,000 time steps, one axon is withdrawn from a randomly chosen site, and an adjacent one (possibly of the same color) takes it over.

The territory controlled by one axon increases (with negative acceleration) until it comes to dominate all the sites.
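The dynamics described above amount to a voter model, and a few lines of code reproduce the qualitative behavior. This is my own toy re-implementation of the rules as described, not the authors’ simulation; the grid size and wrap-around edges are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

SIDE, COLORS, STEPS = 10, 6, 1000          # 10x10 sites, 6 axons, 1,000 steps
sites = rng.integers(COLORS, size=(SIDE, SIDE))
OFFSETS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

for _ in range(STEPS):
    r, c = rng.integers(SIDE), rng.integers(SIDE)   # withdraw the axon at a random site
    dr, dc = OFFSETS[rng.integers(4)]               # a random adjacent site's axon...
    sites[r, c] = sites[(r + dr) % SIDE, (c + dc) % SIDE]   # ...takes the site over

print("sites per axon:", np.bincount(sites.ravel(), minlength=COLORS))
```

Run long enough, one color almost surely takes over every site, its territory growing quickly at first and then decelerating, as noted above.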

Although it is possible that a qualitatively different process occurs for axonal inputs to nerve cells, odds are that a similar sort of evolution via competition helps drive CNS phenomena such as memory, since evolution tends to re-use useful processes.


Turney SG, Lichtman JW (2012) Reversing the Outcome of Synapse Elimination at Developing Neuromuscular Junctions In Vivo: Evidence for Synaptic Competition and Its Mechanism. PLoS Biol 10(6): e1001352. doi:10.1371/journal.pbio.1001352

The paradigm of differential network interactions

It is quite common in biology (and neuroscience, as a special case) for researchers to employ differential gene expression analysis, which produces lists of up- and down-regulated genes between a given set of conditions. And as Ideker and Krogan point out in their Jan ’12 paper, this principle has already been extended to differential protein expression and post-translational modifications.

The authors go on to discuss how this approach has also been applied, with less fanfare, to differential interaction network analysis. In this paradigm, if an interaction between nodes (e.g., protein concentrations) in the network is present above noise in one condition, but not another, then they would call that a differential interaction.

"Static genetic interaction maps are measured in each of two conditions (left)... Condition 1 is subtracted from condition 2 to create a differential interaction map (right)... In the differential map, weak but dynamic interactions (dotted edges) are magnified and persistent ‘housekeeping’ interactions are removed (bottom right)." ; doi:10.1038/msb.2011.99

Very similar ideas can be applied to the study of neuronal network function. If we can say that an interaction between neuronal “nodes” (which could be, depending upon the scale, neurons, cortical columns, or brain regions) is differentially present between healthy and disordered states, then it suggests that that interaction is somehow involved with the disorder.
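As a sketch of what the subtraction looks like in practice (the numbers, noise threshold, and planted edge below are all mine, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n, noise = 50, 0.1                     # 50 nodes, measurement noise s.d.

# A sparse "housekeeping" network measured noisily in two conditions
truth = rng.normal(0, 1, (n, n)) * (rng.random((n, n)) < 0.1)
healthy    = truth + rng.normal(0, noise, (n, n))
disordered = truth + rng.normal(0, noise, (n, n))
disordered[3, 17] += 1.5               # plant one genuinely changed interaction

diff = disordered - healthy            # persistent interactions cancel out

# Keep edges whose change exceeds the noise floor (4 s.d. of the difference
# of two noisy measurements); only the planted edge should survive.
threshold = 4 * noise * np.sqrt(2)
print("differential edges:", np.argwhere(np.abs(diff) > threshold))
```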

This is not a perfect paradigm, in part because the network “connections” can be less representative of the physical reality than we’d like, but I anticipate that we have much to mine from it about the operations of the nervous system.


Ideker T, Krogan NJ (2012) Differential network biology. Molecular Systems Biology. doi:10.1038/msb.2011.99

How to make mathematical sense of connectomics data

…[C]onsider the example … regarding the significant resources and time being put into deciphering the structural connectome of the brain. This massive amount of accumulating data is qualitative, and although everyone agrees it is important and necessary to have it in order to ultimately understand the dynamics of the brain that emerges from the structural substrate represented by the connectome, it is not at all clear at present how to achieve this. Although there have been some initial attempts at using this data in quantitative analyses they are essentially mostly descriptive and offer little insights into how the brain actually works. A reductionist’s approach to studying the brain, no matter how much we learn and how much we know about the parts that make it up at any scale, will by itself never provide an understanding of the dynamics of brain function, which necessarily requires a quantitative, i.e., mathematical and physical, context.

That’s Gabriel Silva, more here, interesting throughout.


Silva GA (2011) The need for the emergence of mathematical neuroscience: beyond computation and simulation. Front. Comput. Neurosci. 5:51. doi: 10.3389/fncom.2011.00051

Inference by sampling in a model of ambiguous visual perception

Certain visual inputs can be consistently interpreted in more than one way. One classic example of this is the young-woman/old-woman puzzle:

"Boring figure", via Wikipedia user Bryan Derksen

An important finding related to these types of illusions is that we don’t perceive both possibilities at once, but rather switch spontaneously between them.

Buesing et al.’s recent study formalized a network model of spiking neurons, equivalent to sampling from a probability distribution, and used it on a quantifiable model of such visual ambiguity, binocular rivalry.

This allowed them to show how spontaneous switches between perceptual states can be caused by a sampling process which produces successively correlated samples.

In particular, they constructed a computational model with 217 neurons, and assigned each neuron a tuning curve with a preferred orientation such that the full set of orientations covered the entire 180° interval.

They then ran a simulation of these neurons according to their rules for spiking and refractoriness, computed the joint probability distribution, projected it into 2-D, and drew the endpoints of the projections as dots, shown below. They took samples every millisecond for 20 seconds of biological time.

the "prior distribution"; each colored dot is a sampled network state; the relative orientation of each dot corresponds to the primary orientation of the perception at that time point; a dot's distance from the origin encodes the perception's "strength"; doi:10.1371/journal.pcbi.1002211.g004 part d

Note that there is a fairly homogeneous distribution across the whole orientation spectrum, indicating a lack of preference for any one direction. You might think of the above as resting-state activity, as there was nothing to mimic external input to the system.

In order to add this input, the authors did another simulation in which they specified the states of a few of the neurons, “clamping” them to one value. In particular, they clamped two neurons with orientation preference ~45° to 1 (“firing”), two neurons with preference ~135° to 1, and four cells with preference ~90° to 0 (“not firing”).

Since the neurons set to firing are at opposite sides of the semicircle, this set-up mimics an ambiguous visual state. They then ran a simulation with the remaining 209 neurons as above, with the results shown below.

the "posterior distribution"; the black line shows the evolution of the network states z for 500 ms during a switch in perceptual state; doi:10.1371/journal.pcbi.1002211.g004 part e

As you can see, in this case the network samples preferentially from states that correspond to the clamped positions at either ~45° or ~135°. The black trace indicates that the network tends to remain in one high-probability state for a while and then shift rapidly to the other.

As compared to the above “prior” distribution, this “posterior” distribution has greatly reduced variance.

Although the ability of their network to explain perceptual bistability is fascinating, it is perhaps most interesting due to its broader implications for how cortical regions might be able to switch between cognitive states via sampling.
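For intuition, here is a drastically simplified cartoon of the sampling idea: two mutually inhibitory assemblies of binary units updated by Gibbs sampling. This is my own toy, not the paper’s 217-neuron spiking network, and all sizes and weights are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 5                                    # units per assembly ("~45°" vs "~135°")
w_exc, w_inh, bias = 1.0, -1.0, -1.0     # within-assembly excitation, cross-inhibition

z = rng.integers(0, 2, size=2 * n)       # binary network state
dominance = []
for _ in range(50_000):
    i = rng.integers(2 * n)
    own, other = (z[:n], z[n:]) if i < n else (z[n:], z[:n])
    drive = bias + w_exc * (own.sum() - z[i]) + w_inh * other.sum()
    z[i] = 1 if rng.random() < 1 / (1 + np.exp(-drive)) else 0
    dominance.append(1 if z[:n].sum() >= z[n:].sum() else -1)

# The network tends to dwell in one "percept" for a stretch and then switch,
# because successive Gibbs samples are correlated - as in the paper's black trace.
print("dominance switches:", int(np.sum(np.diff(dominance) != 0)))
```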


Buesing L, Bill J, Nessler B, Maass W (2011) Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons. PLoS Comput Biol 7(11): e1002211. doi:10.1371/journal.pcbi.1002211

Model of a pyramidal neuron’s dendrite with one spine

Kim et al were interested in simulating the compartmentalization of signaling molecules involved in PKA-dependent LTP in the hippocampus. They wanted to know: does PKA need to be anchored near its target molecules, or near a source of activator molecules? They varied the location of PKA and one of its activator molecules (adenylyl cyclase) to try to determine this. It turns out that placing PKA near its activator molecules (i.e., source of cAMP) leads to more downstream activity than placing it near its target molecules.

Also of interest is a very cool model of a dendrite with one spine that they used for their stochastic simulations, which is below and could help you visualize these structures:

dotted lines are subvolumes used in the simulation; Ca influx is via voltage dependent calcium channels in the dendrite, and via NMDA receptors in the spine's post-synaptic density (PSD); ignore "C"; doi:10.1371/journal.pcbi.1002084

Since molecular simulation is computationally expensive, they only allow diffusion in 2-D in the dendrite (notice the lack of vertical dotted lines in the cross section) and 1-D in the spine. Further improvements to molecular simulation or computational power should one day make this sort of simplification unnecessary.
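The intuition for why anchoring PKA near the cAMP source wins can be captured with a one-line steady-state result: with diffusion plus degradation, cAMP falls off exponentially with distance from the adenylyl cyclase source, with length constant sqrt(D/k). The numbers below are illustrative guesses, not the paper’s parameters:

```python
import numpy as np

D = 130.0               # cAMP diffusion coefficient, um^2/s (illustrative)
k = 4.0                 # degradation rate by phosphodiesterases, 1/s (illustrative)
lam = np.sqrt(D / k)    # length constant, ~5.7 um with these numbers

# Steady-state solution of D*c'' = k*c: concentration ~ exp(-x / lam)
for x in (0.5, 2.0, 5.0, 10.0):
    print(f"cAMP at {x:4.1f} um from the source: {np.exp(-x / lam):.2f} of source level")
```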


Kim M, Park AJ, Havekes R, Chay A, Guercio LA, et al. (2011) Colocalization of Protein Kinase A with Adenylyl Cyclase Enhances Protein Kinase A Activity during Induction of Long-Lasting Long-Term-Potentiation. PLoS Comput Biol 7(6): e1002084. doi:10.1371/journal.pcbi.1002084

How to describe a neural network model

In their lucid and educational ’09 paper, Nordlie et al attempt to create standards for the description of neural network models in the academic lit. This is a great idea–gains from standardization are huge–and also a great paper to learn about what a neural network model actually entails. Since this is in PLoS comp bio and, bless its editors, it is OA/CC, I will quote liberally. First, they have the following working definition of a model:

A neuronal network model is an explicit and specific hypothesis about the structure and microscopic dynamics of (a part of) the nervous system.


  • The model must be explicit, i.e., all aspects of the model must be specified.
  • The model must be specific, i.e., all aspects must be defined in enough detail that they can be implemented unequivocally.
  • The model specifies the structure (placement and type of network elements; source, target, and type of connections) and dynamics of components (ion channels, membrane potential, spike generation and propagation).
  • The model does not describe the dynamics of the model as a whole, which is an emergent property.

Here is their full description of what a model must entail:

A complete model description must cover at least the following three components: (i) The network architecture, i.e., the composition of the network from areas, layers, or neuronal sub-populations. (ii) The network connectivity, describing how neurons are connected among each other in the network. In most cases, connectivity will be given as a set of rules for generating the connections. (iii) The neuron and synapse models used in the network model, usually given by differential equations for the membrane potential and synaptic currents or conductances, rules for spike generation and post-spike reset. Model descriptions should also contain information about (iv) the input (stimuli) applied to the model and (v) the data recorded from the model, just as papers in experimental neuroscience do, since a reproduction of the simulations would otherwise become impossible.

The above is what is essential to a neural network model; below are some useful steps for describing your model:

1) Include output data for each individual neuron type in response to test stimuli, as opposed to responses only from the whole network. This will avoid the scenario under which:

[R]esearchers who attempt to re-implement a model and find themselves unable to reproduce the results from a paper, will not be able to find out whether problems arise from neuron model implementations or from a wrong network setup.

2) Keep the description of your model and the explanation for why you chose your model separate, for the sake of clarity.

3) Describe the topology of the network in your model unambiguously. It may be best to describe this topology on the basis of how the regions connect to one another. Or, if your network is of the human brain and is at a high enough level, you could use a publicly available standard space, such as the one that the Human Connectome Project should soon release.

4) In defining the connections between your neurons (i.e., how they are probabilistically generated), pay special attention to these three details (a code sketch follows the list):

  • May neurons connect to themselves?
  • May there be multiple connections between any pair of neurons?
  • Are connection targets chosen at random for a fixed sender neuron (divergent connection), senders chosen at random for fixed target (convergent connection), or are sender and receiver chosen at random for each connection?
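Here is a hypothetical sketch of how those three choices show up in a connection-generating rule; the function names and parameters are mine for illustration, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
n_neurons, k = 100, 10   # neurons per population, connections per neuron

def divergent(allow_self=False, allow_multi=False):
    """Each sender picks k random targets (a convergent rule is the mirror image)."""
    edges = []
    for s in range(n_neurons):
        pool = np.arange(n_neurons)
        if not allow_self:
            pool = pool[pool != s]          # may neurons connect to themselves?
        targets = rng.choice(pool, size=k, replace=allow_multi)  # multiple edges per pair?
        edges += [(s, int(t)) for t in targets]
    return edges

def random_pairs(n_conn):
    """Sender and receiver both drawn at random for every connection."""
    return [(int(rng.integers(n_neurons)), int(rng.integers(n_neurons)))
            for _ in range(n_conn)]

print(len(divergent()), "divergent edges;", len(random_pairs(1000)), "random edges")
```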

One benefit of connectomics research is that it would allow neural networks to be run on real, validated data sets instead of on probabilistic connections, simplifying these descriptions.

5) Figures should be informative but not overwhelming. Nordlie et al draw a model of the thalamocortical pathway using diagram styles from three of the papers they surveyed, here:


The middle diagram is the most informative, as it has parameters (weights and probabilities) shown next to its connection lines, and line widths proportional to the product of weight and probability. Really, what would be ideal here is some sort of standardization, like in physics diagrams. (A little physics envy isn’t always a bad thing!) In particular, these are their suggestions:

  • Unordered populations are shown as circles;
  • Populations with spatial structure are shown as rectangles;
  • Pointed arrowheads represent excitatory, round ones inhibitory connections;
  • Arrows beginning/ending outside a population indicate that the arrows represent a set of connections with source/target neurons selected from the population;
  • Probabilistic connection patterns are shown as cut-off masks filled with connection probability as grayscale gradient; the pertaining arrows end on the outside of the mask.

6) Describe the equations for membrane potential, spike generation, spike detection, reset and refractory behavior using math as well as prose.
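As an example of what point 6 asks for, here is the standard leaky integrate-and-fire model written out both ways. (The choice of model is mine for illustration; it is not necessarily one of the models Nordlie et al surveyed.) In prose: the membrane potential V decays toward the resting value E_L with time constant τ_m and is driven by the synaptic current I_syn; when V crosses the threshold V_th, a spike is emitted, V is reset to V_reset, and the neuron is held refractory for t_ref. In math:

```latex
\tau_m \frac{dV}{dt} = -(V - E_L) + R_m I_{\mathrm{syn}}(t), \qquad
V(t^-) \ge V_{\mathrm{th}} \;\Rightarrow\; \text{spike},\;
V \leftarrow V_{\mathrm{reset}},\; V \text{ clamped for } t_{\mathrm{ref}}.
```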

Anecdotally, it seems to me that systems biologists tend to use R while neuroscientists are more into MATLAB. This jibes with the engineering feel of the neuro community, and I certainly don’t mean to start a programming language flame war, but I do wonder if moving toward the open-source languages R or Python might be useful.

I truly learned a lot from this paper, and in case the authors ever read this post, I’d like to thank them for putting effort into writing it so carefully and clearly, and apologize for any mistakes I may have made in summarizing it.


Nordlie E, Gewaltig M-O, Plesser HE (2009) Towards Reproducible Descriptions of Neuronal Network Models. PLoS Comput Biol 5(8): e1000456. doi:10.1371/journal.pcbi.1000456

Developmental axon anisotropy of the hamster

Cahalane et al studied the hamster during its first 10 days of development, using a fluorescent dye to trace the growth of its axons.

One thing they noticed was a bias towards movement along the medial/lateral as opposed to the anterior/posterior axes of the flattened cortical hemisphere. They quantify this bias as anisotropy, using two delta functions defined on a circle. (Anisotropy is big in diffusion tensor imaging, too.) When they compare the anisotropy in the development of grey matter vs white matter, they find that white matter is more anisotropic:

grey matter / cortex = gray dots; white matter = red triangles; doi:10.1371/journal.pone.0016113

By analyzing simulated networks they show the effects of anisotropy on growing axon connections to other nodes:

Each point is avg of 10 networks, each with 2500 nodes, 10 axons, and 1 mm avg axon length; doi:10.1371/journal.pone.0016113

They also consider the modularity of their networks. Formally, modules are non-overlapping communities delineated by their location. If the modules are chosen well, there should be more within-community edges and fewer between-community edges than expected by chance. The authors find good evidence for modularity in their axon traces, mainly because there are so many short connections, and short connections become more common when axons are more anisotropic.
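To see what the within- versus between-edge criterion amounts to, here is a quick check on a planted two-module toy graph using networkx. The graph and parameters are mine for illustration, not the paper’s networks:

```python
import networkx as nx
from networkx.algorithms import community

# Two planted modules of 25 nodes: dense inside (p_in), sparse between (p_out).
G = nx.planted_partition_graph(l=2, k=25, p_in=0.3, p_out=0.02, seed=0)

detected = community.greedy_modularity_communities(G)
Q = community.modularity(G, detected)   # Q > 0: more within-module edges than chance
print(f"{len(detected)} communities found, modularity Q = {Q:.2f}")
```

Many short, local connections raise the within-module edge count relative to the between-module count, which is exactly why anisotropic, short-range outgrowth yields modular networks.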

This is a great way to quantify networks, and it would be nice to see this type of structural data correlated with function. For example, how do more modular networks act? One suggestion is that modular structures might lead to more specialization in sub-problems, allowing more rapid adaptation to a specified goal. More modular tasks may take less effort, whereas more global tasks like working memory would take more effort.

This makes sense, but what’s the trade-off or downside to modularity? If modularity is so good, why isn’t the brain more modular? Possibly because given finite resources, specialization is antagonistic to plasticity.


Cahalane DJ, Clancy B, Kingsbury MA, Graf E, Sporns O, et al. (2011) Network Structure Implied by Initial Axon Outgrowth in Rodent Cortex: Empirical Measurement and Models. PLoS ONE 6(1): e16113. doi:10.1371/journal.pone.0016113

Meunier D, et al. (2010) Modular and hierarchically modular organization of brain networks. Frontiers in Neuroscience.