
After reading Phil Tetlock and Dan Gardner’s book Superforecasting, I’ve decided to try to make prospective, quantitative predictions about Alzheimer’s disease (AD) therapies currently in clinical trials with an endpoint of decreasing cognitive decline.

Disclosures

I have investments in S&P index funds but no individual stocks. I’m funded with an NIH training grant for AD. However, everything in this post is based on public information.

My (in-progress) thesis is on one aspect of the basic biology of AD, but I still don’t feel that I know all that much about AD, which is such a broad topic. I certainly don’t want to make it seem like I’m calling myself an expert. Still, that seems like a good reason to make predictions: to create an incentive to learn more and hold myself accountable.

Preamble

Most AD drugs fail, and the hardest barrier to entry is Phase III clinical trials. From 2002 to 2012, only 1 of 54 drugs tested in phase III clinical trials was approved by the FDA (memantine was the only approval; data source cited below).

[Figure: AD drug development statistics; data: doi 10.1186/s13195-016-0207-9]

Since 2012, there have been several additional high-profile phase III failures, including the amyloid immunotherapy drug Solanezumab and the BACE inhibitor Verubecestat, and no additional FDA approvals.

This is our reference class: we should expect that only ~1.85% of drugs entering Phase III clinical trials will be approved.

Maybe we can raise the probability of a generic AD drug approval a little bit, since we presumably know more about science, medicine, and AD in particular than we did in past decades.

On the other hand, if the theories currently driving AD drug development (such as the amyloid hypothesis) happen to be particularly misguided, then the probability of approval might be even lower than it was in the past. Plus, it’s plausible that the FDA has more experience now and will demand more rigorous evidence for approval than it might have in the past.

Overall, I think 1-4% is a reasonable prior for a new therapy in a phase III AD clinical trial today.

You might be asking: shouldn’t more AD drugs have been approved by the FDA by chance alone? If all you need is a two-tailed p < 0.05 in the favorable direction, which should happen by chance 2.5% of the time, why have only 1.85% of AD drugs been approved? Part of the reason is that FDA approval criteria are stricter than simply p < 0.05, and require “independent substantiation.” That said, it’s daunting that the probability of phase III AD therapy approval is close to chance levels.
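To make the arithmetic explicit, here is the reference-class calculation as a few lines of Python (the 2.5% figure is just the favorable tail of a two-tailed p < 0.05):

```python
n_phase3, n_approved = 54, 1        # AD drugs in phase III trials, 2002-2012
base_rate = n_approved / n_phase3   # ~0.0185, i.e. ~1.85%

# If approval required nothing more than a single two-tailed p < 0.05
# result, an ineffective drug would "pass" in the favorable direction
# about 2.5% of the time, so by luck alone we'd expect:
expected_chance_approvals = n_phase3 * 0.025  # 1.35 false approvals out of 54
```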

Note about my motivations

I, like most people, urgently hope that all of these drugs work. I’m not being critical about their chances because I want them to fail; I’m doing it so that I can build more accurate models of how the AD field operates.

Choosing drugs for evaluation

The amazing AlzForum has a great page where they list 22 therapeutics currently in phase III clinical trials. Of these, here are some that I will not be evaluating:

1. LMTM: has already failed in clinical trials.

2. Vitamin E: has already completed a clinical trial with some mild success.

3. AVP-786: this DXM-containing compound is primarily for treating disinhibition in AD, not cognition.

4. Aripiprazole: for psychosis in AD.

5. Brexpiprazole: for agitation in AD.

6. ITI-007: for agitation in AD.

7. Masitinib: A mast cell inhibitor, but the AD clinical trial registration doesn’t seem to have been updated recently, and I can’t find information about it online.

8. CPAP: A prevention trial with an unclear path (to me) towards FDA approval.

9. Crenezumab: I’m confused by the possible path to approval for this passive Aβ immunotherapy in the context of late-onset AD, since it has already failed in trials of mild to moderate AD. I’m not saying that it’s impossible, of course, just that I don’t understand well enough how it would work to assign a probability.

10. Gantenerumab: This drug also failed in a Phase III clinical trial already in early stage symptomatic patients, so I am confused by its path to FDA approval.

11. Solanezumab. In late 2016, this Aβ immunotherapy was found to have failed in clinical trials.

12. Verubecestat. Merck’s BACE inhibitor just had its Phase III clinical trial stopped a few weeks ago.

13. Idalopirdine: Already failed in one Phase III clinical trial.

That leaves 9.

To cover my bases, I also did a search for “alzheimer | Open Studies | Phase 3” at clinicaltrials.gov, where these studies are registered.

Through this search, I found some others I’m not going to consider, because I only want to evaluate drugs that, if they fail, will fail due to lack of efficacy rather than lack of interest.

1. Coconut oil: specifically the Fuel for Thought version. I will not make a prediction on its FDA approval status since according to the company’s website it is already “generally recognized as safe” by the FDA.

2. tDCS: This modality has one trial that I will not be considering, since I don’t know whether a Phase II/III study with only n = 100 is large enough to support FDA approval.

3. There’s a phase III trial of purified EPA, but I’m not going to consider that because it seems similar to coconut oil in terms of FDA approval.

4. Albumin/IVIg: There’s a study of albumin and IVIg plasmapheresis in AD, but it’s unclear if this trial is still ongoing, as it hasn’t been updated in almost two years. IVIg has previously been unsuccessful in AD.

5. Nasal insulin: I’m also confused about the path to monetization and FDA approval of this drug, so I’m not going to evaluate it.

However, I did find sodium oligomannurarate and JNJ-54861911 using this search, and I’m adding them to the list. Another 2 makes 11 total therapies for predictions.

Efficacy predictions 

1. Levetiracetam (a widely used, FDA-approved anti-epileptic drug). AFAIK this is not yet in phase III: there’s one trial in phase II, although there is a phase III trial planned. Let’s assume for the purposes of prediction that the MCI phase III trial happens, with a primary endpoint of decreasing the rate of cognitive decline.

The idea that AD might be related to circuit/neuronal network dysfunction is very much in the air right now, eg following the report last month that flashing light on the retinas at particular frequencies to induce gamma rhythms leads to dramatic cognitive improvements in the 5xFAD mouse model of AD.

Levetiracetam is already widely used in AD patients with seizures, making it likely to be safe.

It could be true that Levetiracetam really does affect Aβ processing on the cellular level and decrease Aβ levels. I don’t really buy this, even before one of the papers on which this was based was retracted.

There has been one study evaluating the effect of Levetiracetam in humans in a within-study design, but the effect is pretty weak and non-existent at the highest dose, for reasons that are unclear to me.

One could imagine that if Levetiracetam were successful in reducing cognitive decline in individuals without overt seizures, it might make sense to reconceptualize one aspect of AD around something like microseizures. We know that seizures are much more likely following strokes and other brain injuries, making this a plausible hypothesis.

Probability of FDA approval by the end of 2023: 2%

2. ALZT-OP1 (combination of inhaled cromolyn and oral ibuprofen). Currently in a clinical trial of 600 early-stage AD patients. Trials of NSAIDs, including ibuprofen, have failed in phase III multiple times, despite epidemiologic evidence suggesting that they should be beneficial.

By adding inhaled cromolyn, another anti-inflammatory drug that is approved as an asthma prophylactic, the funders are hoping that their trial will be different. A nice 2015 study showed that IP cromolyn administration decreases soluble monomeric Aβ-42 by about half in the APPswe/PS1dE9 mouse model of AD. Otherwise, there’s not much else published.

If this works, it would really emphasize the importance of systemic (as opposed to brain-specific) inflammation in AD and maybe mast cells in particular. But given the previous failures of systemic anti-inflammatory treatments, it seems pretty unlikely.

Probability of FDA approval by the end of 2023: 1.5%

3. AZD3293 (Lilly’s BACE inhibitor). BACE is a critical part of the amyloid processing pathway, and this small molecule inhibits it. AZD3293 has been shown in human studies to robustly decrease plasma and CSF Aβ42 and soluble AβPP β.

AZD3293 is in two large clinical trials, NCT02245737 with estimated n = 2202 and NCT02783573 with estimated n = 1899. The trials are for relatively early AD, with requirements of MMSE > 21 and > 20, respectively.

What is great about Eli Lilly’s large investment in AZD3293 is that we will have a very good sense of whether and how effective this drug is, as well as the extent to which decreasing CSF/brain amyloid leads to improved cognition. Assuming they release the data that they generate for analysis (which they probably will), they deserve a lot of credit for this undertaking.

Unfortunately, the negative Verubecestat (Merck’s BACE inhibitor) trial results that came out a few weeks ago are disheartening for the prospect of BACE inhibition in AD. It’s also possible that off-target effects of BACE inhibition, such as on myelination, may impair cognition to an equivalent or greater degree than the benefit from decreased Aβ production.

However, it’s still totally plausible that this drug will work where the Merck BACE inhibitor did not, since there will clearly be differences in their effects, including pharmacokinetics and off-target effects.

Overall this feels like another clear test of the amyloid hypothesis: if you lower Aβ levels, will you reduce the rate of cognitive decline? If this drug also doesn’t work while reducing Aβ, it seems that it will really be time for the field to go back to the basics.

Probability of FDA approval by the end of 2023: 5%

4. Aducanumab (Aβ passive immunotherapy). This is a monoclonal antibody that preferentially binds parenchymal, aggregated forms of Aβ.

It was derived from older donors who were cognitively healthy, with the assumption that the antibody may have helped prevent them from developing AD.

It is probably the most promising drug in AD right now and its dose-dependent amyloid and cognitive effects in humans were described in a Nature paper in September 2016.

This seems to be the consensus “most likely to work” of the current drugs in clinical trials. There is still some reason for skepticism, though.

First, it’s not entirely clear to me why this amyloid reduction technique works when so many other Aβ therapies have failed.

And while Fig. 3 of the Nature paper shows some nice dose-dependent effects, the error bars are still pretty wide.

There are currently two phase III clinical trials for the drug, each with n = 1350 participants, requiring a positive amyloid PET scan and CDR = 0.5 and MMSE 24-30, which is early in the disease process. Results in 2019 and 2020.

Probability of FDA approval by the end of 2023: 20%

5. Azeliragon (small molecule RAGE inhibitor). This drug failed clinical trials in the mid-2000s, but the lower dose may have shown an effect, and it has now been taken back to clinical trials at the lower dose for a phase III trial in participants with MMSE 21-26 and an MRI scan consistent with a diagnosis of probable AD.

If this drug works at the lower dose, it suggests that astrocyte and microglial inflammation are particularly strong targets in AD.

Probability of FDA approval by the end of 2023: 2%

6. E2609 (Biogen’s BACE inhibitor). This small molecule has been shown to reduce Aβ in the CSF and serum of non-human primates. This is being tested in a large (n = 1330) phase III clinical trial. Results expected by 2020.

As with the other BACE inhibitors, it’s plausible but not clear to me why this drug should succeed where Verubecestat failed, so I will give it the same probability as AZD3293. However, it also has only one trial as opposed to AZD3293’s two, so it seems slightly less well powered to detect a small effect on improving cognition.

Probability of FDA approval by the end of 2023: 4%

7. Nilvadipine (L-type Ca channel blocker). This anti-hypertensive drug was previously considered a possible therapy for AD in the context of reducing blood pressure, which can decrease the rate of cognitive decline. It made a splash in 2011 when it was found to decrease Aβ in vitro, and it is currently in a Phase III trial that should report results by the end of this year.

Given the spate of Aβ therapy failures, the Aβ reduction is not as promising as it was 6 years ago, although the drug may have other effects and may also reduce cognitive decline through its effects on blood pressure.

Probability of FDA approval by the end of 2023: 2.5%

8. Sodium oligomannurarate. There is not much info about this drug online, besides the clinical trial notice and a phase II trial report from Medscape noting that it did not meet its primary cognitive endpoint (ADAS-cog/12) in phase II. Not much to go on here.

Probability of FDA approval by the end of 2023: 0.75%

9. JNJ-54861911 (Janssen BACE inhibitor). Another BACE inhibitor that is currently in a phase II/III trial.

Probability of FDA approval by the end of 2023: 4%

10. CAD106 (Active Aβ immunotherapy) + CNP520 (BACE inhibitor). This is an active vaccination strategy for Aβ, which would be fantastic for the field if it worked, since it is likely to be much cheaper than passive Aβ immunotherapy.

These drugs are currently being tested in a large Phase 2 trial (n = 1340).

Overall, the probability here feels similar to the probability of the other Aβ therapies. The combination of the active immunotherapy alongside BACE inhibition makes this trial intriguing.

Probability of FDA approval by the end of 2023: 4.5% (that at least one of the two or the combination will be approved)

11. Pioglitazone (PPARγ agonist, insulin sensitizing, small molecule). This drug is approved to treat type 2 diabetes. PPARγ agonism has been shown to play a role in inflammatory processes in the brain.

It is being studied in an extremely large study (n = 3494) that is coupled with a genetic risk model that includes APOE e4 and TOMM40.

I think that this trial has some potential, based in part on mouse model data as well as a variety of data suggesting an interplay between hyperglycemia-associated toxicity and risk of AD.

However, most of the early phase human data has been negative, including a study by the NIA (n = 25) and a study by the University of Colorado (n = 68).

Another problem with this trial is that — to the best of my knowledge — TOMM40 variants are no longer thought to be strongly associated with the risk of AD.

That said, there are some really interesting possible mechanistic angles here, including a possible role for pioglitazone in regulating myelin phagocytosis by immune cells, which may interact with AD.

Probability of FDA approval by the end of 2023: 4% (prediction)


Overall probability that at least one drug will be approved for cognition in late-onset AD within the next 6 years (not necessarily one of the above): 35%

Obviously, these predictions are highly correlated.

For example, if one of the remaining BACE inhibitors works, then that makes it more likely that others will too.

As another example, if any of the amyloid therapies finally work, then that makes it more likely that the others will.

If any of the drugs work, that makes it more likely that all of the others will, because maybe clinical trial strategies (eg, enrolling patients earlier in the disease process) are generally more apt than they were previously.
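As a rough sanity check on how much these correlations matter: if the eleven per-drug predictions above were independent, the chance that at least one of them is approved would be about 41%. A sketch of that calculation, with the probabilities copied from the predictions above:

```python
# Per-drug approval probabilities from the eleven predictions above
probs = [0.02, 0.015, 0.05, 0.20, 0.02, 0.04, 0.025, 0.0075, 0.04, 0.045, 0.04]

p_all_fail = 1.0
for p in probs:
    p_all_fail *= (1 - p)

p_at_least_one = 1 - p_all_fail  # ~0.41 under (unrealistic) independence
```

Positive correlation among the predictions pulls the true number below this independence figure, which is consistent with my overall 35% estimate (which also includes drugs not on the list).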

There’s also some uncertainty around how the FDA will work over the next 6 years. I’m talking about cognitive efficacy approvals, not biomarker approvals.

To be explicit: if a drug is approved preliminarily based on biomarkers but not cognitive efficacy, I’m not going to count it as an approval for the purposes of these predictions.

I’ll note that I’m a bit nervous about making these predictions public. What if they are all horribly wrong?

But I hope that we will move towards a world where people make more quantitative public predictions and are incentivized to do so. Of course, I plan to evaluate these predictions in 6 years and hold myself accountable.

A really nice article from Tyssowski et al. The authors did RNA-seq on neurons that either were or were not stimulated with neural activity. They found that a set of 251 genes previously described as “[neuronal] activity-regulated genes” was able to predict the stimulation state of those neurons well above chance: specifically, 92% of the time using nearest-neighbor classification, as measured by leave-one-out cross-validation.

I’m interested in the broad question of “which RNAs/proteins are important for neuronal activity” and this set of activity regulated genes is pretty clearly within that set. Interestingly, it seems that the expression of these genes is pretty highly correlated (very similar chromatin states, transcription factors, etc), so I don’t think you would have to perfectly preserve ALL of them in order to allow for a high-fidelity preservation of information.

On that note, it’d be interesting if someone were to use this data to try to predict neuron stimulation state using the smallest set of activity-regulated genes necessary. For example, the 19 rapidly induced activity-regulated genes, including the non-transcription factors Arc and Amigo3, seem like they would punch above their weight in terms of predicting neuronal activation state.

Figure 6 from the paper

Arc expression and enhancer acetylation are stimulated after only 10 minutes of neuronal activity; http://biorxiv.org/content/early/2017/06/05/146282.full.pdf+html

It also suggests an experiment for any brain preservation procedure that purports to preserve gene expression important for neural activity: stimulate neural activity on a subset of neurons (probably in vitro, since it’s easier and should yield the same result), perform your brain preservation processing steps, attempt to measure the expression of these genes, and then see if you can distinguish between which neurons were stimulated or not on the basis of those measurements.
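As a toy sketch of that classification approach — not the paper’s actual code or data — here is leave-one-out cross-validation with a 1-nearest-neighbor classifier on simulated expression values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for the paper's setup: expression of activity-regulated
# genes in stimulated vs. unstimulated neurons (all values are made up).
n_neurons, n_genes = 20, 10
stimulated = rng.normal(1.0, 0.5, (n_neurons, n_genes))    # induced expression
unstimulated = rng.normal(0.0, 0.5, (n_neurons, n_genes))  # baseline expression
X = np.vstack([stimulated, unstimulated])
y = np.array([1] * n_neurons + [0] * n_neurons)

# Leave-one-out cross-validation with a 1-nearest-neighbor classifier
correct = 0
for i in range(len(X)):
    d = np.linalg.norm(X - X[i], axis=1)  # distance to every neuron
    d[i] = np.inf                         # leave the held-out neuron out
    correct += y[np.argmin(d)] == y[i]    # nearest neighbor's label vs truth

accuracy = correct / len(X)
```

With well-separated simulated classes the accuracy comes out near 1; the interesting question on real data is how far it drops as you shrink the gene set.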

A nice study by Hildebrand et al. was published earlier this week, looking at the connectome of zebrafish larvae. As a reminder, here is what zebrafish larvae look like under the scanning electron microscope (this is one of my favorite images ever):


Image of postnatal day 2 zebrafish larvae by Jurgen Berger and Mahendra Sonawane of the Max Planck Institute for Developmental Biology

In this study, they did brute-force serial sectioning of a postnatal day 5 zebrafish larva, collecting the sections onto silicon wafers:


Hildebrand et al 2017

They then were able to use the serial EM images to reconstruct myelinated axons and create some beautiful images:


Hildebrand et al 2017

They found that the paired myelinated axons across hemispheres were more symmetrical than expected.

This means that their positions are likely pre-specified by the zebrafish genome/epigenome, rather than shifting due to developmental/electrical activity, as is thought to occur in the development of most mammalian axons.

While that is an interesting finding, clearly the main advance of this article is a technical one: being able to generate serial EM data sets like this on a faster and more routine basis may soon help to revolutionize the study of neuroscience.

The history of neuroscience in general, and myelination in particular, is replete with comparisons between brains and computers.

For example, the first suggested function of myelin, proposed in the 1850s, was as an insulator of electricity, by analogy to electric wires, which had just recently been developed.

In today’s high performance computers (“supercomputers”), one of the big bottlenecks in computer processing speed is communication between processors and memory units.

For example, one measure of computer communication speed is traversed edges per second (TEPS). This quantifies the speed with which data can be transferred between nodes of a computer graph.

A standard measure of TEPS is Graph500, which quantifies computer performance in a breadth-first search task on a large graph, and can require up to 1.1 PB of RAM. As of June 2016, these are the known supercomputers with the most TEPS:

[Table: supercomputers with the highest Graph500 TEPS scores, June 2016]

I’m pointing all of this out to give some concrete context about TEPS. Here’s the link to neuroscience: as AI Impacts discussed a couple of years ago, it seems that TEPS is a good way to quantify how fast brains can operate.
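To make TEPS concrete, here is a toy version of the measurement (the graph is made up, and real Graph500 runs use enormous graphs on parallel hardware): run a breadth-first search, count the edges inspected, and divide by elapsed time:

```python
import time
from collections import deque

# Toy graph as an adjacency list (a tiny stand-in for a Graph500-scale graph)
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}

def bfs_teps(graph, start):
    """Breadth-first search; returns (edges traversed, edges per second)."""
    seen, queue, edges = {start}, deque([start]), 0
    t0 = time.perf_counter()
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            edges += 1                  # every edge inspection counts
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    elapsed = time.perf_counter() - t0
    return edges, edges / elapsed

edges, teps = bfs_teps(graph, 0)
```

The same logic scaled up to graphs with billions of edges is, roughly, what the Graph500 numbers in the table above measure.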

The best evidence for this is the growing body of data that memory and cognition require recurrent communication loops both within and between brain regions. For example, stereotyped electrical patterns with functional correlates can be seen in hippocampal-cortical and cortical-hippocampal-cortical circuits.

Here’s my point: we know that myelin is critical for regulating the speed at which between-brain region communication occurs. So, what we have learned about the importance of communication between processors in computers suggests that the degree of myelination is probably more important to human cognition than is commonly recognized. This in turn suggests:

  1. An explanation for why human cognition appears to be better in some ways than that of other primates: human myelination patterns are much more delayed, allowing for more plasticity in development. Personally, I expect that this explains more human-primate differences in cognition than differences in neurons do (granted, I’m not an expert in this field!).
  2. Part of an explanation for why de- and dys-myelinating deficits, even when they are small, can affect cognition in profound ways.

 

A key question in the treatment of depression is: what is the probability that a given treatment will lead to a sustained remission of symptoms?

One of the largest, most famous studies to address this is called the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial.

From what I understand, the researchers designed the trial to mimic what might happen in a realistic clinical setting.

A patient diagnosed with major depressive disorder (MDD) might be first started on a first-line drug (citalopram). If that didn’t work (because the side effects were intolerable, or the symptoms persisted), then another drug would be chosen, and so on. Here is their algorithm:

[Figure: STAR*D treatment algorithm]

They used a unique randomization strategy, as participants in Level 2 could choose to opt out of the randomization blocks that entailed a) switching off of citalopram, b) augmenting citalopram with a different drug, and/or c) using cognitive therapy.

From the numbers above, you can see that the most common option was for participants to opt out of cognitive therapy. This is probably partly accounted for by a selection bias: these were participants who chose to enter a citalopram-based trial in the first place.

One of the main outcomes was the remission rates from depression (defined as QIDS-SR16 score of <= 5) at the various stages:

  • For step 1, the remission rate for those not treated for their current episode was 43%, vs 36% for those already treated for their current episodes
  • For step 2, the remission rate was 30%
  • For step 3, the remission rate was 14%
  • For step 4, the remission rate was 13%
  • Assuming that none of the participants exited the study and all stayed in treatment, the theoretical remission rate after a maximum of four treatment steps was 67%
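The theoretical 67% figure follows from compounding the per-step remission rates, assuming everyone who fails a step stays in treatment. A rough check with rounded rates (using ~37% for step 1, blending the two subgroups above):

```python
step_rates = [0.37, 0.30, 0.14, 0.13]  # approximate per-step remission rates

p_still_depressed = 1.0
for r in step_rates:
    p_still_depressed *= (1 - r)       # must fail every step to stay depressed

cumulative_remission = 1 - p_still_depressed  # ~0.67 after four steps
```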

They also looked at the 12-month follow up of these same participants. Of those, the proportion without a relapse (defined as QIDS-SR16 >= 11) was ~50% in the participants who had a remission of symptoms following step 1, and ~ 33% in the participants who had a remission of symptoms following step 2.

This data set has been analyzed in many other ways. For example, after unsuccessful treatment with the SSRI citalopram, there was no difference in the remission rates of bupropion, sertraline, and venlafaxine. On the other hand, augmenting citalopram with bupropion led to a greater reduction in symptoms and had fewer side effects compared with augmenting with buspirone.

 

What is the mechanism by which dendritic spines can change structure over a rapid time course? Though this may seem esoteric, it is probably how memories form and is thus utterly essential to neuroscience. Two new papers present some relevant data.


Two-photon imaging data of dendritic spines, from Wikipedia User:Tmhoogland

First, as has been shown several times, Harward et al show that glutamate uncaging at single dendritic spines leads to a rapid increase in spine volume after only ~ 1 minute that degrades over a period of several more minutes:


Harward et al; doi:10.1038/nature19766

Along the same time course as the dendritic spine volume increase, these authors also detected TrkB activation (using their amazing new FRET sensor), which was largely in the activated spine but also traveled to nearby spines and the dendrite itself:


Harward et al; doi:10.1038/nature19766

 

In what is to me probably their most compelling experiment, they show that hippocampal slices without BDNF have highly impaired volume changes in response to glutamate, and that this can be rescued by the addition of BDNF:


Harward et al; doi:10.1038/nature19766

They also present several lines of evidence that this is an autocrine mechanism, with BDNF released from spines by exosomes and binding to TrkB receptors on the same spine.

In a separate article to which most of the same authors contributed, they show that another protein, Rac1, is activated (ie, GTP-bound, leading to fluorescence) very quickly following glutamate uncaging at single spines:

 


Hedrick et al; doi:10.1038/nature19784

They also show that a similar rapid course of activation following glutamate uncaging occurs for the other Rho GTPases Cdc42 and RhoA.

Interestingly, they also show that these proteins mediate synaptic crosstalk, whereby the activation of one dendritic spine causes nearby dendritic spines to increase in strength. After several more experiments, here is their diagram explaining this mechanism:


Hedrick et al; doi:10.1038/nature19784

Overall I find their data trustworthy and important. The most interesting subsequent question for me is whether endogenous amounts of CaMKII, BDNF, TrkB, and Rho GTPase signaling components (e.g., Cdc42, RhoA, Rac1) vary across dendritic spines, and whether this helps mediate variability in spine-specific and spine neighbor-specific degrees of plasticity. My guess is that they do, but AFAICT it remains to be shown.

If it is true that spines, dendrites, and neurons vary in the expression and distribution of these proteins, then any attempt to build models of the brain, or models of individual brains with any sort of dynamic component, probably needs to measure and model the local densities of these protein mediators of plasticity.

CSF- and serum-borne autoantibodies against brain proteins are known to cause a wide range of cognitive sequelae due to autoimmune attack. For example, when antibodies are raised against the protein LGI1, which is thought to interact with voltage-gated K+ channels, a common result is encephalopathy.

As a result, LGI1 is often included in autoimmune panels, along with several other proteins including CASPR2, NMDA and AMPA receptor subunits, GABA-B receptors, GAD65, CRMP-5, ANNA-1, and ANNA-2.

Recently, Ariño et al presented a summary of 76 patients with LGI1-associated cognitive deterioration, 13% of whom had forms of cognitive deterioration distinct from limbic encephalitis. At 2 years, their major outcomes were:

  • 35% fully recovered
  • 35% regained independence but not to baseline levels
  • 23% required assistance due to cognitive defects
  • 6% died

In mice, LGI1 is primarily expressed in neurons at the RNA level, while in humans it is expressed in both mature astrocytes and neurons (data from here and here); eg, in the Darmanis et al 2015 human data set it is actually expressed more highly in astrocytes:

[Figure: LGI1 expression by cell type, Darmanis et al 2015]

It might be interesting to see whether encephalopathies are generally only caused by autoantibodies against proteins expressed in neurons, or whether proteins expressed in other cell types can also lead to a similar clinical outcome.