Correlating immunohistochemistry with serial block-face electron microscopy of neurons

In Talapka et al 2022, “Application of the mirror technique for block-face scanning electron microscopy”, the authors use a modified “mirror” technique to combine immunohistochemical labeling of dendrites with ultrastructural analysis by 3D-EM of osmicated sections. This relies on the finding that the surface of a tissue block can still be imaged using confocal microscopy. The authors show that the cell body of a somatostatin-immunopositive neuron and one of its emerging dendrites can be clearly visualized and reconstructed with their technique. It is likely that the dendritic arbors of a large number of neurons could be analyzed this way. The technique combines the advantages of a high-resolution approach with those of a labeling method for specific cellular markers. The morphological preservation of structures seen on the surfaces of tissue sections, such as blood vessels, will in part determine the quality of the images. Here is one of the figures from their paper:

[Figure from Talapka et al 2022]

Integrating synchrotron microtomography with electron microscopy in the study of mammalian brain tissue

Bosch et al 2022, “Functional and multiscale 3D structural investigation of brain tissue through correlative in vivo physiology, synchrotron microtomography and volume electron microscopy”, is an interesting study that brings X-ray microscopy to bear on the problem of correlating structure and function.

The authors studied hippocampal CA1 and olfactory bulb circuits via multiple imaging modalities, including 2-photon calcium imaging, X-ray microscopy, and serial block-face electron microscopy. In all cases, the imaging modalities had different strengths in identifying different circuit elements, and the authors were able to correlate structure and function in interesting ways. The interplay between structural, functional, and molecular-level data will be increasingly critical in systems neuroscience, and this study highlights some important points.

The authors should be commended for showing that X-ray microscopy can be used without causing significant damage to fixed, osmium/uranium/lead en-bloc-stained, EM-embedded tissue, which is an important advance. They also showed that X-ray microscopy can be used at high resolution on thick mammalian brain tissue; this matters because X-ray microscopy has the potential to provide structural details at the level of individual dendrites, which is possible with volume electron microscopy but less easily scalable. Finally, the authors point out that staining protein and lipid distributions is what defines the ultrastructure of the tissue; this is an important point that is often missed.

Figure 4 from Bosch et al 2022.

What can super-resolution light microscopy tell us about biomolecular brain preservation?

A lot. A nice example of the power of aldehyde fixatives to preserve fine molecular detail is Helm et al 2021.

Dendritic spines, which are often considered the functional units of neuronal circuits, vary strongly in size and shape. This study used electron microscopy, super-resolution microscopy, and quantitative proteomics to characterize more than 47,000 spines across more than 100 synaptic targets, helping to quantify variation in biomolecular composition across spines. Their study is amazing in part because of their technical advances, which allow for the beautiful visualization of biomolecules across neuronal membranes.

People often say that connectomics is not enough for brain information preservation because each dendrite has its own distribution of ion channels. This distribution helps determine whether a dendritic spike will occur, which is critical for understanding how synaptic inputs are integrated.

If the local depolarization in a dendritic branch exceeds a certain threshold, local voltage-gated ion channels will open and amplify the input beyond what the synapses alone would have delivered. This effect can also synergize with spatial clustering of synapses.
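To make the clustering intuition concrete, here is a toy two-layer sketch in the spirit of dendritic-subunit models. All numbers (threshold, gain, synapse counts) are arbitrary illustrative assumptions, not measured values: each branch sums its synaptic input, and if the sum crosses a threshold, a local "dendritic spike" amplifies it.

```python
def branch_output(inputs, threshold=3.0, gain=2.0):
    """Toy dendritic branch: sums its synaptic inputs; if the sum
    exceeds a threshold, a local 'dendritic spike' amplifies it by
    `gain`. All parameter values are arbitrary illustrations."""
    total = sum(inputs)
    return total * gain if total >= threshold else total

def somatic_input(branches):
    """The soma sums the (possibly amplified) outputs of its branches."""
    return sum(branch_output(b) for b in branches)

# Eight unit-strength synapses, same total drive in both cases:
clustered = [[1, 1, 1, 1], [1, 1, 1, 1]]              # 4 synapses on each of 2 branches
dispersed = [[1], [1], [1], [1], [1], [1], [1], [1]]  # 1 synapse on each of 8 branches

assert somatic_input(clustered) == 16  # both branches spike: (4*2) + (4*2)
assert somatic_input(dispersed) == 8   # no branch crosses threshold
```

The point of the toy model is that clustered and dispersed inputs with identical total synaptic weight produce different somatic drive, which is exactly the kind of information that a connectivity map without channel distributions might miss.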

Theoretically, each neuron could have a unique spread of ion channels along dendrites, which could potentially make synaptic connectivity data alone insufficient, even if you have or can accurately infer synapse molecular information.

It’s hard to find examples of real evidence in the literature for or against how important these effects are. But the nonlinear effect of clusters of synapses is something that we potentially can’t account for with electron microscopy data alone.  

This is a reasonable/principled objection to the idea of brain information preservation via connectivity. Personally, I find it quite plausible. A way to address this objection is to say that super-resolution microscopy techniques like those used by Helm et al 2021 could be applied to decoding memories from fixed brain tissue via measuring biomolecules, without necessarily assuming that synapses alone will be sufficient.

Towards building an accurate brain molecular concentration database

Shichkova et al 2021 is an interesting study: the authors perform proteomic and metabolomic profiling across different brain areas and cell types, integrate and normalize the data, and generate a Brain Molecular Atlas database. They then use this database to create more accurate, simulation-ready representations of biomolecular systems.

An accurate molecular concentration database is a prerequisite for creating data-driven computational models of biochemical networks. The Brain Molecular Atlas that they present overcomes the obstacles of missing or inconsistent data to support systems biology research as a resource for biomolecular modeling.

Highly expressed protein networks in different cell types.

One way this is relevant to brain preservation is that we will need accurate molecular concentrations to build realistic simulations of brain networks and map engrams. Engrams are likely composed of many molecular species and pathways whose concentrations need to be modeled accurately in order to create a faithful representation of the engram.

Engrams could be distributed across multiple brain regions and cell types, and likely have a large number of pathways involved. Accurate molecular concentrations in these different contexts would be essential to be able to map engrams without potential gaps or inaccuracies.

A Tale of Two APOE Alleles

Summary: For people born in the US around 1900, the genetic variant APOE ε4 was associated with longer lifespans. More recently, it has become associated with shorter lifespans and a higher risk of Alzheimer’s disease (AD). This may be because the environment has changed: the burden of certain infectious diseases, such as diarrheal illness, has decreased substantially. If true, this may help us figure out how APOE ε4 contributes to the risk of AD.

A new study by Wright et al uses data from AncestryDNA to investigate the genetic basis of human lifespan. The majority of the individuals in this study (80%) were from the US.

They found only one gene, APOE, with SNPs that had significant associations with both age and lifespan. The APOE SNP they found that was associated with age was rs429358, which changes the amino acid at position 130 of the APOE protein and distinguishes APOE ε4 from APOE ε3/ε2. The APOE SNP they found to be most associated with lifespan, rs769449, is also highly correlated with APOE ε4.

APOE ε4 is, of course, the genetic variant that mediates the majority of genetic risk for AD.

What is particularly interesting about Wright et al’s data is that APOE has a differential effect on longevity based on birth cohort:

Fig 5E from Wright et al 2019.

As the authors write: “APOE exhibited a negative effect on lifespan in older cohorts and a positive effect in younger cohorts… The minor allele at APOE [read: ε4] was at highest frequency for intermediate lifespan values (74-86 years). This pattern was most pronounced in the younger birth cohorts, and it suggested that this allele [ε4] (or a linked allele or alleles) confers a survival benefit early in life but a survival detriment later in life.”

The authors don’t speculate much about why APOE ε4 has this differential effect on longevity, but I get to speculate: that’s why I have a blog. Here’s my explanation, which borrows heavily from previous conversations I’ve had with the brilliant Dado Marcora.

In 2011, Oriá et al published an intriguing study looking at the association between APOE ε4 polymorphisms and diarrheal outcomes in children from Brazilian shantytowns. They found that APOE ε4 was associated with the least diarrhea:

[Figure from Oriá et al 2011]

The CDC has this amazing list of the most common causes of death in the US from 1900 to 1998. One of the striking things about this data is how much more common diarrhea used to be as a cause of death in the US. In 1900, diarrhea, enteritis, and ulceration of the intestines was the third leading cause of death:

[CDC table: leading causes of death, 1900]

But it starts dropping steadily, and by 1931 it’s the 10th leading cause of death:

[CDC table: leading causes of death, 1931]

After that, it no longer appears in the top 10. My guess is that this is probably mostly due to cleaner water. According to the CDC: “In 1908, Jersey City, New Jersey was the first city in the United States to begin routine disinfection of community drinking water. Over the next decade, thousands of cities and towns across the United States followed suit in routinely disinfecting their drinking water, contributing to a dramatic decrease in disease across the country.”

Let’s assume that what I’m implying is true, that APOE ε4 used to help people in the US live longer by protecting them from diarrheal illnesses that stunt development. If so, it stands to reason that APOE ε3/ε2 might also protect against AD by modulating development.

There is some data to support this. For example, Dean et al 2014 found that “infant ε4 carriers had lower MWF and GMV measurements than noncarriers in precuneus, posterior/middle cingulate, lateral temporal, and medial occipitotemporal regions, areas preferentially affected by AD.” It may be wise to consider more heavily the developmental roots of AD.

Three challenges in interpreting neurogenesis data from banked human brains

One area where methods for studying postmortem human brain tissue have recently been especially relevant is adult neurogenesis.

In 2018, Sorrells et al made a splash when they used 37 donated postmortem brain samples and 22 neurosurgical specimens from people with epilepsy to suggest that neurogenesis occurs at only negligible levels during adulthood. This data seemed to contradict results from rodents.

DCX staining in rats; Oomen et al 2009, 10.1371/journal.pone.0003675.

I recently came across Lucassen et al 2018, which critiques Sorrells et al 2018 on a few methodological grounds:

  1. Postmortem interval: Very little clinical data was made available for each brain donor in Sorrells et al, and the postmortem interval (PMI) was one of the omitted variables. The neurogenesis marker DCX appears to break down or otherwise stain negative shortly after death, so extended PMIs could cause false negatives for DCX staining. Lucassen et al also noted that there might be differential effects of PMI in old and young human brains, for example as a result of differences in myelination.
  2. Cause of death: Lucassen et al noted that certain causes of death, such as sepsis, might be more likely to cause a breakdown of protein post-translational modifications. In the case of the other neurogenesis marker studied, PSA-NCAM, its poly-sialic group might have been lost in hypoxic brains that have substantial perimortem lactic acid production and resulting acidity.
  3. Need for 3D data: Lucassen et al note that the individual EM images presented by Sorrells et al are difficult to interpret because brain cells have complicated, branching morphologies. Instead, they suggest that 3D reconstructions from serial EM images would be more dispositive. Creating 3D reconstructions is often harder in postmortem human brain tissue than in rodent brain tissue, because the cell processes may span a volume too large to be effectively preserved by immersion fixation when perfusion fixation is not possible.

I don’t know enough about human neurogenesis, DCX, PSA-NCAM, or the other areas discussed to know if Lucassen’s critiques mean that Sorrells et al’s data truly won’t replicate. But I found the methodological critiques to be valid and important.

Seven approaches for accelerating immersion fixation in brain banking

Immersion fixation of a human brain is fairly slow. One study found that it took an average of 32 days for single brain hemispheres immersion-fixed in 10% formalin to become fully fixed (as proxied by reaching the minimum T2 value).

This means that fixative won’t reach the tissue in the innermost regions of the brain until a substantial amount of tissue disintegration has already occurred.
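For a rough sense of why whole-specimen immersion fixation is so slow, diffusion-limited fixative penetration is often approximated by an empirical square-root-of-time rule, d = k·√t. The coefficient used below (~0.78 mm per square-root-hour, a commonly cited room-temperature value for formalin) is an assumption for illustration; real fixation times also depend on crosslinking kinetics, so this is a sketch of the scaling, not a prediction.

```python
import math

# Empirical square-root-of-time rule for fixative penetration:
# depth = k * sqrt(time). k ~0.78 mm/sqrt(hour) is a commonly cited
# room-temperature value for formalin, used here as an assumption.

def penetration_depth_mm(hours, k=0.78):
    """Approximate penetration depth (mm) after `hours` of immersion."""
    return k * math.sqrt(hours)

def hours_to_reach_mm(depth_mm, k=0.78):
    """Approximate time (hours) for fixative to reach a given depth."""
    return (depth_mm / k) ** 2

# The sqrt(t) scaling means doubling the target depth quadruples the
# required time, which is why the innermost tissue lags so badly:
print(round(hours_to_reach_mm(20) / 24, 1))  # days to reach 20 mm deep
print(round(hours_to_reach_mm(40) / 24, 1))  # days to reach 40 mm deep
```

The quadratic dependence on depth is also why the sectioning, ventricle, and catheter approaches below all work by shortening the longest diffusion path rather than by speeding diffusion itself.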

Here are a few approaches to speed up immersion fixation in brain banking protocols. For each approach, I’m also going to list a rough, completely arbitrary estimated probability that they would each actually speed up the fixation process, as well as some potential downsides of each.

1) Cutting the brain into smaller tissue sections prior to immersion fixation. This is the most common approach used to speed up immersion fixation. It relies on the obvious idea that if you directly expose more of the tissue to the fixative, fixation will finish faster. I list it here for completeness.

Probability of speeding up immersion fixation: Already demonstrated.

Downsides: Damage at the cut interfaces, difficulty in inferring how cellular processes correspond between segments, mechanical deformation, technical difficulty in cutting fresh brain tissue in a precise manner.

2) Using higher concentrations of fixative. This makes biophysical sense according to Fick’s law of diffusion, as a higher concentration gradient of fixative should increase its rate of diffusion into the tissue. One study found that 10% formalin led to a faster fixation rate in pig hearts, at least at the longest time interval studied (168 hours):

Holda et al 2017 (PMID: 29337978); FBPS = formaldehyde phosphate-buffered solution.

If 10% is faster than 2% or 4%, then 100% formalin would likely be faster than 10%.

Probability of speeding up immersion fixation with 50-100% compared to 10% formalin: 95%

Downsides: 100% formalin could produce more toxic fumes, it is likely more expensive, and it is not as easily accessible. It could also lead to more overfixation (e.g. antigen masking) of outer surface regions, although it theoretically could reach parity on this measure if a shorter amount of time were used for the fixation.

3) Using the cerebral ventricles as an additional source of fixative immersion.


If you can access the ventricles of the brain with a catheter or some other device, you could allow fixative to diffuse into the ventricles. This would allow for a substantially increased surface area from which fixatives can diffuse.

Because the cerebral ventricles are already there, using them allows for some of the advantages of the dissection approach without having to cut the brain tissue (other than the tissue damaged when placing the catheter(s) into the ventricles).

This approach can also be used when the brain is still inside of the skull, via the use of cranial shunts.

Access to the lateral ventricle is likely part of why immersion fixation is much faster after hemisecting the brain, which is already commonly done in brain banking protocols. 

Probability of speeding up immersion fixation: 50%. There are plenty of unknowns here. For example, are the ventricles already accessed through the cerebral aqueduct or canal when the brain is removed through the skull in standard immersion fixation? Do the ventricles collapse ex vivo or when the brain is taken out of the skull, rendering the approach much less effective? The uncertainty here should be attributed to my own ignorance of this literature, as other people are likely aware of the answers.

Downsides: Damage to parenchyma where the catheters are inserted, increased complexity of the procedure.

4) Using glyoxal or another fixative as a complementary agent. This is a pie-in-the-sky idea, but what about using a fixative other than formaldehyde? Glyoxal is one possibility. It has potential as an alternative fixative in terms of morphology preservation, and while it doesn’t seem to be quite as efficient a crosslinker as glutaraldehyde, it might diffuse faster because it is smaller. I haven’t been able to find good diffusion time measurements for glyoxal after a brief search. Glyoxal is also likely less toxic than formaldehyde.


Why use it at all if it likely diffuses slower than formaldehyde? Because it’s not only how quickly a fixative reaches the target tissue that matters, but also how efficiently it crosslinks once it gets there, which is what actually stops disintegration and stabilizes the tissue. Glyoxal is the smallest dialdehyde, so it might be a bit of a Goldilocks in the crosslinking-efficiency vs diffusion-speed trade-off. But, again, this is pie-in-the-sky and would need actual testing.

Probability of speeding up immersion fixation: 10% with glyoxal, 90% with some other fixative or combination of fixatives. It seems unlikely — but possible — that the first fixative ever used would just happen to be the best at immersion fixation of large tissue blocks.

Downsides: Other fixatives will likely be more expensive, less accessible, and cause artifacts that are harder to adjust for than the well-known ones caused by formaldehyde.

5) Ultrasound-enhanced delivery. Ultrasound has been shown to increase the speed of fixation in tissue blocks. One study found that ultrasound increased the delivery speed of non-fixative chemicals (at the end of a catheter) by 2-3x. The mechanism is unknown, but could involve heat, which is known to increase diffusion speed (not ideal, as heat would also likely increase tissue degradation), and/or acoustic cavitation, a concept I don’t fully understand but which can apparently speed liquid diffusion directly.

Probability of speeding up immersion fixation: 50%. I’d like to see these studies done on more brain tissue and for them to be replicated. However, they are pretty promising.

Downsides: Ultrasound might itself damage cellular morphology and/or biomolecules. However, considering that ultrasound has also been used in vivo, eg for opening the blood-brain barrier, it is unlikely to cause too much damage to tissue ex vivo, at least when using the right parameter settings.

6) Convection-enhanced delivery. This technique, which has primarily been used in neurosurgery, involves inserting catheters into brain parenchyma in order to help distribute chemicals such as chemotherapeutic agents. There’s no reason why this couldn’t be leveraged for brain banking as well.

Certain areas of the brain, perhaps the innermost ones that would otherwise take forever to be fixed, could be chosen to have small catheters inserted, allowing local delivery of fixative.

This would allow for an increase in the “effective surface area” of the fixative while minimizing damage due to sectioning and allowing the brain to remain intact.

Probability of speeding up immersion fixation: 99%. It’s hard to see how using convective-enhanced delivery of fixatives with catheters inside of the brain parenchyma wouldn’t speed up immersion fixation, but since I’m not aware of studies on it, there may be some technical difficulties that I’m not recognizing.

Downsides: Damage to the brain tissue from inserting the catheters, potential build-up of fluid pockets of fixative near the catheter tip that could damage nearby tissue if the infusion rate is too high, increased complexity, cost, and time for the procedure.

7) Shaking or stirring the fixative continuously (added 8/18/2019). This should increase the speed of fixation in a way analogous to convection-enhanced delivery: it delivers a pressure gradient, but at the surface of the tissue rather than inside it, preventing a depleted layer of fixative from building up at the interface.

The optimal rate of shaking or stirring is TBD and will depend on various factors specific to the experiment. Among other factors, there is likely a trade-off between such light shaking that it doesn’t have an effect and such vigorous shaking that it will damage the brain tissue due to the translational motion, similar to a concussion.

Probability of speeding up immersion fixation: 99%. This approach makes perfect biophysical sense and it has already been shown to significantly increase fixation speed in freeze substitution. So it should very likely speed up the process of suprazero temperature fixation as well.

Downsides: Concussion-like damage to the brain, increased complexity, possible increased displacement of solutes within the brain.

Lower-dose haloperidol probably doesn’t cause an acute prolongation of the QT interval

One of the common considerations when prescribing haloperidol is whether it will prolong the QT interval. This is a measure of the heart rhythm on the EKG that correlates with one’s risk for serious arrhythmias such as torsades de pointes.


Earlier this year, van den Boogaard et al published one of the largest RCTs to compare haloperidol against placebo (700+ people in both groups).

Their main finding was that prophylactic haloperidol was not helpful for reducing the rate of delirium or improving mortality.

But one of their most interesting results was the safety data. This showed that their dose of haloperidol had no effect on the QT interval and caused no increased rates of extrapyramidal symptoms. Their regimen was haloperidol IV 2 mg every 8 hours, which is equivalent to ~ 10 mg oral haloperidol in one day.

The maximum QT interval was 465 ms in the 2 mg haloperidol group and 463 ms in the placebo group, a non-significant difference (95% CI for the difference: -2.0 to 5.0 ms).

Notably, they excluded people with acute neurologic conditions (who may have been more likely to have cardiovascular problems) and people with QTc already > 500 ms, which makes generalization of this finding to those groups a bit tricky.

Anti-tau antibodies for the treatment of Alzheimer’s disease

One of the exciting alternatives to the amyloid immunotherapies in clinical trials for Alzheimer’s disease (AD) is anti-tau antibodies.

There are several of these drugs in earlier stages of development, although none that I know of in phase 3. To take two concrete examples, let’s focus on Biogen’s two anti-tau immunotherapies:

  • BMS-986168/BIIB092 = a humanized IgG4 monoclonal antibody targeting extracellular tau
  • BIIB076 = a monoclonal antibody against both monomeric and fibrillar tau

Both of these drugs are also being tested in PSP, which is a relatively rare, classical tauopathy in a way that AD isn’t: in PSP, the 1-5% of cases that are familial are known to be caused by certain MAPT mutations. In contrast, I don’t know of well-validated mutations in MAPT associated with increased risk of Alzheimer’s, aside from some preliminary reports of small statistical associations, such as this one.

To try to force myself to be accountable and quantitative, what is my prediction for the probability that each of these two drugs will be approved by the FDA by the end of 2025? Same rules and disclosures as my previous post about this, but two years extended because these drugs are in earlier stages of development.

I’m going with 2.5% for BIIB092 (in phase II) and 1.5% for BIIB076 (still in phase I). Clearly abnormalities in tau proteins are highly associated with pathogenesis in AD, indeed more strongly associated than Aβ, and there have been a number of suggestions that the tau abnormalities are causal.

But in my opinion, we don’t know for sure yet that these tau abnormalities are truly causal, and that stopping tau aggregation will be helpful.

On one hand, if an anti-tau antibody works, why shouldn’t an antibody against neurofilament light (NfL), or against any of the other protein markers of axonal damage in AD that are inversely associated with cognitive status, work as well? Maybe they all would, but this thought experiment is a bit troubling to me.

On the other hand, anti-tau antibodies have already been shown to be helpful in an APP-overexpressing AD mouse model, improving both cognitive function and the proportion of mushroom dendritic spines.

Castillo-Carranza et al 2015, Fig 1D; TOMA = anti-tau oligomer-specific monoclonal antibody; Tg2576 = APP-overexpressing AD mutant mouse.

It is asking a lot, but I would be more confident about the clinical relevance of this type of mouse study if it were shown that immunotherapies against other protein markers of axon damage, such as anti-NFL antibodies, were not successful in ameliorating cognitive decline, as a negative control.

Certainly I will be rooting for these anti-tau drugs to be successful in clinical trials and I think they make a lot of sense, but like most AD drugs in development, my prediction is that they are a long shot.

Predictions for the FDA approval of Alzheimer’s drugs currently in development

After reading Phil Tetlock and Dan Gardner’s book Superforecasting, I’ve decided to try to make prospective, quantitative predictions about AD therapies currently in clinical trials with an endpoint of decreasing cognitive decline.


I have investments in S&P index funds but no individual stocks. I’m funded with an NIH training grant for AD. However, everything in this post is based on public information.

My (in-progress) thesis is on one aspect of the basic biology of AD, but I still don’t feel that I know all that much about AD, which is such a broad topic. I certainly don’t want to make it seem like I’m calling myself an expert. Still, that seems like a good reason to make predictions: to create an incentive to learn more and to hold myself accountable.


Most AD drugs fail, and the hardest barrier is the phase III clinical trial. From 2002-2012, only 1 of 54 drugs tested in phase III clinical trials was approved by the FDA (memantine was the only approval; data from here).

data: 10.1186/s13195-016-0207-9

Since 2012, there have been several additional high-profile phase III failures, including the amyloid immunotherapy drug Solanezumab and the BACE inhibitor Verubecestat, and no additional FDA approvals.

This is our reference class: we should expect that ~1.85% of drugs entering phase III AD clinical trials will be approved.

Maybe we can raise the probability of a generic AD drug approval a little bit, since we presumably know more about science, medicine, and AD in particular now than we did in past decades.

On the other hand, if the current theories driving AD drug development (such as the amyloid hypothesis) happen to be particularly misguided, then the probability of approval might be lower now than it was in the past. Plus, it’s plausible that the FDA has more experience and will require more rigorous evidence for approval than it might have in the past.

Overall, I think 1-4% is a reasonable prior for a new therapy in a phase III AD clinical trial today.

You might be asking: shouldn’t more AD drugs have been approved by the FDA by chance alone? If all you need is a two-tailed p < 0.05, a nominal benefit should appear by chance 2.5% of the time, so why have only ~1.85% of AD drugs been approved? Part of the reason is that FDA approval criteria are stricter than simply p < 0.05, requiring “independent substantiation.” That said, it’s daunting that the probability of phase III AD therapy approval is close to chance levels.
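The base-rate arithmetic in this section, spelled out (using only the numbers quoted in the post):

```python
# Reference class: 1 approval out of 54 phase III AD programs, 2002-2012.
base_rate = 1 / 54

# "Chance" rate for a single trial: a null drug shows a nominally
# significant benefit (the favorable tail of two-tailed p < 0.05)
# 2.5% of the time.
single_trial_chance = 0.05 / 2

# "Independent substantiation" typically means two independent positive
# trials, so a truly null drug passes by chance far more rarely.
two_trial_chance = single_trial_chance ** 2

print(f"observed approval rate:     {base_rate:.2%}")
print(f"one chance-positive trial:  {single_trial_chance:.2%}")
print(f"two chance-positive trials: {two_trial_chance:.4%}")
```

This makes the comparison in the text concrete: the observed ~1.85% approval rate sits just below the single-trial chance level of 2.5%, while the two-independent-trials requirement pushes the pure-chance approval rate well below both.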

Note about my motivations

I, like most people, urgently hope that all of these drugs work. I’m not being critical about their chances because I want them to fail; I’m doing it so that I can build more accurate models of how the AD field operates.

Choosing drugs for evaluation

The amazing AlzForum has a great page where they list 22 therapeutics currently in phase III clinical trials. Of these, here are some that I will not be evaluating:

1. LMTM: has already failed in clinical trials.

2. Vitamin E: has already completed a clinical trial with some mild success.

3. AVP-786: this DXM-containing compound is primarily for treating disinhibition in AD, not cognition.

4. Aripiprazole: for psychosis in AD.

5. Brexpiprazole: for agitation in AD.

6. ITI-007: for agitation in AD.

7. Masitinib: A mast cell inhibitor, but the clinical trial for AD doesn’t seem to have been updated recently and I can’t find information about it online.

8. CPAP: A prevention trial with an unclear path (to me) towards FDA approval.

9. Crenezumab: I’m confused by the possible path to approval for this passive Aβ immunotherapy in the context of late-onset AD, since it has already failed in trials of mild to moderate AD. I’m not saying that it’s impossible, of course, just that I don’t understand well enough how it would work to assign a probability.

10. Gantenerumab: This drug also failed in a Phase III clinical trial already in early stage symptomatic patients, so I am confused by its path to FDA approval.

11. Solanezumab: In late 2016, this Aβ immunotherapy was found to have failed in clinical trials.

12. Verubecestat. Merck’s BACE inhibitor just had its Phase III clinical trial stopped a few weeks ago.

13. Idalopirdine: Already failed in one Phase III clinical trial.

That leaves 9.

To cover my bases, I also did a search for “alzheimer | Open Studies | Phase 3” at ClinicalTrials.gov, where these studies are registered.

Through this search, I found some others I’m not going to consider, because I only want to consider drugs whose failure, if it happens, would be due to lack of efficacy rather than lack of interest.

1. Coconut oil: specifically the Fuel for Thought version. I will not make a prediction on its FDA approval status since according to the company’s website it is already “generally recognized as safe” by the FDA.

2. tDCS: This modality has one trial that I will not be considering, since with only n = 100 in a phase II/III study, it’s unclear whether it could achieve FDA approval.

3. There’s a phase III trial of purified EPA, but I’m not going to consider that because it seems similar to coconut oil in terms of its path to FDA approval.

4. Albumin/IVIg: There’s a study of albumin and IVIg plasmapheresis in AD, but it’s unclear if this trial is still ongoing, as it hasn’t been updated in almost two years. IVIg has previously been unsuccessful in AD.

5. Nasal insulin: I’m also confused about the path to monetization and FDA approval of this drug, so I’m not going to evaluate it.

However, I did find sodium oligomannurarate and JNJ-54861911 using this search, and I’m adding them to the list. Another 2 makes 11 total therapies for predictions.

Efficacy predictions 

1. Levetiracetam (a widely used, FDA-approved anti-epileptic drug). AFAIK this is not yet in phase III: there’s one trial in phase II, although there is a phase III trial planned. Let’s assume for the purposes of prediction that the MCI phase III trial happens, with a primary endpoint of decreasing the rate of cognitive decline.

The idea that AD might be related to circuit/neuronal network dysfunction is very much in the air right now, eg following the report last month that flashing light on the retinas at particular frequencies to induce gamma rhythms leads to dramatic cognitive improvements in the 5xFAD mouse model of AD.

Levetiracetam is already widely used in AD patients with seizures, making it likely to be safe.

It could be true that Levetiracetam really does affect Aβ processing on the cellular level and decrease Aβ levels. I didn’t really buy this even before one of the papers on which it was based was retracted.

There has been one study evaluating the effect of Levetiracetam in humans in a within-study design, but the effect is pretty weak and non-existent at the highest dose, for reasons that are unclear to me.

If Levetiracetam were successful in reducing cognitive decline in individuals without overt seizures, it might make sense to reconceptualize one aspect of AD around something like microseizures. We know that seizures are much more likely following strokes and other brain injuries, making this a plausible hypothesis.

Probability of FDA approval by the end of 2023: 2%

2. ALZT-OP1 (combination of inhaled cromolyn and oral ibuprofen). Currently in a 600-early AD patient clinical trial. Trials of NSAIDs and ibuprofen have failed in phase III trials multiple times, despite epidemiologic evidence suggesting that they should be beneficial.

By adding inhaled cromolyn, another anti-inflammatory drug that is approved as an asthma prophylactic, the funders are hoping that their trial will be different. A nice 2015 study showed that IP cromolyn administration decreases soluble monomeric Aβ-42 by about half in the APPswe/PS1dE9 mouse model of AD. Otherwise, there’s not much else published.

If this works, it would really emphasize the importance of systemic (as opposed to brain-specific) inflammation in AD and maybe mast cells in particular. But given the previous failures of systemic anti-inflammatory treatments, it seems pretty unlikely.

Probability of FDA approval by the end of 2023: 1.5%

3. AZD3293 (Lilly’s BACE inhibitor). BACE is a critical part of the amyloid processing pathway, and this small molecule inhibits it. AZD3293 has been shown in human studies to robustly decrease plasma and CSF Aβ42 and soluble AβPPβ.

AZD3293 is in two large clinical trials, NCT02245737 with estimated n = 2202 and NCT02783573 with estimated n = 1899. The trials are for relatively early AD, with requirements of MMSE > 21 and > 20, respectively.

What is great about Eli Lilly’s large investment in AZD3293 is that we will have a very good sense of whether and how effective this drug is, as well as the extent to which decreasing CSF/brain amyloid leads to improved cognition. Assuming they release the data that they generate for analysis (which they probably will), they deserve a lot of credit for this undertaking.

Unfortunately, the negative Verubecestat (Merck’s BACE inhibitor) trial results that came out a few weeks ago are disheartening for the prospect of BACE inhibition in AD. It’s also possible that off-target effects of BACE inhibition, such as impaired myelination, may worsen cognition to an equivalent or greater degree than the benefit from decreased Aβ production.

However, it’s still totally plausible that this drug will work where the Merck BACE inhibitor did not, since there will clearly be differences in their effects, including pharmacokinetics and off-target effects.

Overall this feels like another clear test of the amyloid hypothesis: if you lower Aβ levels, will you reduce the rate of cognitive decline? If this drug reduces Aβ but still doesn’t work, it seems that it will really be time for the field to go back to the basics.

Probability of FDA approval by the end of 2023: 5%

4. Aducanumab (Aβ passive immunotherapy). This is a monoclonal antibody that preferentially binds parenchymal, aggregated forms of Aβ.

It was derived from older donors who were cognitively healthy, with the assumption that the antibody may have helped prevent them from developing AD.

It is probably the most promising drug in AD right now and its dose-dependent amyloid and cognitive effects in humans were described in a Nature paper in September 2016.

This seems to be the consensus “most likely to work” of the current drugs in clinical trials. There is still some reason for skepticism, though.

First, it’s not entirely clear to me why this amyloid reduction technique works when so many other Aβ therapies have failed.

And while Fig 3 of the Nature paper shows some nice dose-dependent effects, the error bars are still pretty large.

There are currently two phase III clinical trials for the drug, each with n = 1350 participants, requiring a positive amyloid PET scan, CDR = 0.5, and MMSE 24-30, which is early in the disease process. Results are expected in 2019 and 2020.

Probability of FDA approval by the end of 2023: 20%

5. Azeliragon (small molecule RAGE inhibitor). This drug failed clinical trials in the mid-2000s, but the lower dose may have shown an effect, and it has now been taken back to clinical trials at the lower dose in a phase III trial in participants with MMSE 21-26 and an MRI scan consistent with a diagnosis of probable AD.

If this drug works at the lower dose, it suggests that astrocytic and microglial inflammation is a particularly strong target in AD.

Probability of FDA approval by the end of 2023: 2%

6. E2609 (Biogen’s BACE inhibitor). This small molecule has been shown to reduce Aβ in the CSF and serum of non-human primates. This is being tested in a large (n = 1330) phase III clinical trial. Results expected by 2020.

As with the other BACE inhibitors, it’s plausible that this drug could succeed where Verubecestat failed, but it’s not clear to me why it would, so I will give it the same probability as AZD3293. However, it also has only one trial as opposed to AZD3293’s two, so it seems slightly less well powered to detect a small effect on cognition.

Probability of FDA approval by the end of 2023: 4%

7. Nilvadipine (L-type Ca channel blocker). This anti-hypertensive drug was previously considered a possible therapy for AD in the context of reducing blood pressure, which can decrease the rate of cognitive decline. It made a splash in 2011 when it was found to decrease Aβ in vitro, and it is currently in a Phase III trial that should report results by the end of this year.

Given the spate of Aβ therapy failures, the Aβ reduction is not as promising as it was 6 years ago, although the drug may have other effects and may also reduce cognitive decline through its effects on blood pressure.

Probability of FDA approval by the end of 2023: 2.5%

8. Sodium oligomannurarate. There is not much info about this drug online, besides the clinical trial notice and this phase II trial report from Medscape which notes that it did not meet the primary cognitive endpoint (ADAS-cog/12) in its phase II trial. Not much info to go on here.

Probability of FDA approval by the end of 2023: 0.75%

9. JNJ-54861911 (Janssen BACE inhibitor). Another BACE inhibitor that is currently in a phase II/III trial.

Probability of FDA approval by the end of 2023: 4%

10. CAD106 (Active Aβ immunotherapy) + CNP520 (BACE inhibitor). This is an active vaccination strategy for Aβ, which would be fantastic for the field if it worked, since it is likely to be much cheaper than passive Aβ immunotherapy.

These drugs are currently being tested in a large Phase 2 trial (n = 1340).

Overall, the probability here feels similar to the probability of the other Aβ therapies. The combination of the active immunotherapy alongside BACE inhibition makes this trial intriguing.

Probability of FDA approval by the end of 2023: 4.5% (that at least one of the two or the combination will be approved)

11. Pioglitazone (PPARγ agonist, insulin sensitizing, small molecule). This drug is approved to treat type 2 diabetes. PPARγ agonism has been shown to play a role in inflammatory processes in the brain.

It is being studied in an extremely large study (n = 3494) that is coupled with a genetic risk model that includes APOE e4 and TOMM40.

I think that this trial has some potential, based in part on mouse model data as well as a variety of data suggesting an interplay between hyperglycemia-associated toxicity and risk of AD.

However, most of the early phase human data has been negative, including a study by the NIA (n = 25) and a study by the University of Colorado (n = 68).

Another problem with this trial is that — to the best of my knowledge — TOMM40 variants are no longer thought to be strongly associated with the risk of AD.

That said, there are some really interesting possible mechanistic angles here, including a possible role for pioglitazone in regulating myelin phagocytosis by immune cells, which may interact with AD.

Probability of FDA approval by the end of 2023: 4% (prediction)

Overall probability that at least one drug will be approved for cognition in late-onset AD within the next 6 years (not necessarily one of the above): 35%

Obviously, these predictions are highly correlated.

For example, if one of the remaining BACE inhibitors works, then that makes it more likely that others will too.

As another example, if any of the amyloid therapies finally work, then that makes it more likely that the others will.

If any of the drugs work, that makes it more likely that all of the others will, because maybe clinical trial strategies (eg, enrolling patients earlier in the disease process) are generally more apt than they were previously.
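As a rough illustration of how much this correlation matters, here is what the 11 per-drug probabilities above would imply if (counterfactually) treated as fully independent. Note that the 35% figure above covers a 6-year horizon and drugs not on this list, so it isn’t directly comparable:

```python
# The 11 per-drug approval probabilities stated above, as fractions.
probs = [0.02, 0.015, 0.05, 0.20, 0.02, 0.04, 0.025,
         0.0075, 0.04, 0.045, 0.04]

# Under independence, P(at least one approval) = 1 - prod(1 - p_i).
p_none = 1.0
for p in probs:
    p_none *= 1.0 - p
p_any = 1.0 - p_none

print(f"Independence baseline: {p_any:.1%}")  # ~41%
```

Positively correlated outcomes (a shared amyloid mechanism, shared trial-design risk) pull the true “at least one” probability below this independence baseline, which is one reason an aggregate estimate can sit lower than the naive calculation suggests.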

There’s also some uncertainty around how the FDA will work over the next 6 years. I’m talking about cognitive efficacy approvals, not biomarker approvals.

To be explicit: if a drug is approved preliminarily based on biomarkers but not cognitive efficacy, I’m not going to count it as an approval for the purposes of these predictions.

I’ll note that I’m a bit nervous about making these predictions public. What if they are all horribly wrong?

But I hope that we will move towards a world where people make more quantitative public predictions and are incentivized to do so. Of course, I plan to evaluate these predictions in 6 years and hold myself accountable.
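One simple way to score this kind of forecast when the time comes is the Brier score: the mean squared error between each predicted probability and the 0/1 outcome. A minimal sketch — the outcomes below are a hypothetical placeholder, not actual results:

```python
def brier_score(predictions, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    Lower is better; always guessing 50% scores 0.25."""
    assert len(predictions) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# The 11 probabilities above, as fractions.
predictions = [0.02, 0.015, 0.05, 0.20, 0.02, 0.04, 0.025,
               0.0075, 0.04, 0.045, 0.04]

# Hypothetical outcomes (1 = approved by end of 2023, 0 = not); these are
# a placeholder until the predictions can actually be evaluated.
outcomes = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

print(brier_score(predictions, outcomes))  # ~0.0046 in this scenario
```

A caveat: with low base rates, an all-zero outcome vector makes almost any set of low probabilities score well, so the score is most informative when compared against a naive baseline forecast (e.g., assigning every drug the historical AD trial success rate).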