Summary: For people born in the US around 1900, the genetic variant APOE ε4 was associated with longer lifespans. More recently, it has become associated with shorter lifespans and a higher risk of Alzheimer’s disease (AD). This may be because the environment has changed: the burden of certain infectious diseases, such as diarrhea, has decreased substantially. If true, this may help us figure out how APOE ε4 contributes to the risk of AD.
A new study by Wright et al uses data from AncestryDNA to investigate the genetic basis of human lifespan. The majority of the individuals in this study (80%) were from the US.
They found only one gene, APOE, with SNPs that had significant associations with both age and lifespan. The APOE SNP they found that was associated with age was rs429358, which changes the amino acid at position 130 of the APOE protein and distinguishes APOE ε4 from APOE ε3/ε2. The APOE SNP they found to be most associated with lifespan, rs769449, is also highly correlated with APOE ε4.
What is particularly interesting about Wright et al’s data is that APOE has a differential effect on longevity based on birth cohort:
As the authors write: “APOE exhibited a negative effect on lifespan in older cohorts and a positive effect in younger cohorts… The minor allele at APOE [read: ε4] was at highest frequency for intermediate lifespan values (74-86 years). This pattern was most pronounced in the younger birth cohorts, and it suggested that this allele [ε4] (or a linked allele or alleles) confers a survival benefit early in life but a survival detriment later in life.”
The authors don’t speculate much about why APOE ε4 has this differential effect on longevity, but I get to speculate: that’s why I have a blog. Here’s my explanation, which borrows heavily from previous conversations I’ve had with the brilliant Dado Marcora.
In 2011, Oriá et al published an intriguing study looking at the association between APOE ε4 polymorphisms and diarrheal outcomes in children in Brazilian shanty towns. They found that APOE ε4 carriers had the least diarrhea:
The CDC has this amazing list of the most common causes of death in the US from 1900 to 1998. One of the things that’s striking about this data is how much more common diarrhea used to be in the US as a cause of death. In 1900, diarrhea, enteritis, and ulceration of the intestines was the third leading cause of death:
But it starts dropping steadily, and by 1931 it’s the 10th leading cause of death:
After that, it no longer appears in the top 10. My guess is that this is probably mostly due to cleaner water. According to the CDC: “In 1908, Jersey City, New Jersey was the first city in the United States to begin routine disinfection of community drinking water. Over the next decade, thousands of cities and towns across the United States followed suit in routinely disinfecting their drinking water, contributing to a dramatic decrease in disease across the country.”
Let’s assume that what I’m implying is true: that APOE ε4 used to help people in the US live longer by protecting them from diarrheal illnesses that stunt development. If so, it stands to reason that APOE genotype might also modulate the risk of AD through its effects on development.
There is some data to support this. For example, Dean et al 2014 found that “infant ε4 carriers had lower MWF and GMV measurements than noncarriers in precuneus, posterior/middle cingulate, lateral temporal, and medial occipitotemporal regions, areas preferentially affected by AD.” It may be wise to consider more heavily the developmental roots of AD.
One field where the methods of studying postmortem human brain tissue have been relevant recently is adult neurogenesis.
In 2018, Sorrells et al made a splash when they used 37 donated postmortem brain samples and 22 neurosurgical specimens from people with epilepsy to suggest that neurogenesis only occurs at negligible levels during adulthood. This data seemed to contradict results from rodents.
I recently came across Lucassen et al 2018, which critiques Sorrells et al 2018 on a few methodological grounds:
Postmortem interval: Very little clinical data was made available for each brain donor in Sorrells et al, and the postmortem interval (PMI) was one of the omitted variables. The neurogenesis marker DCX appears to break down, or otherwise stain negative, shortly after death, so extended PMIs could cause false negatives in DCX staining. Lucassen et al also noted that there might be differential effects of PMI in old and young human brains, for example as a result of differences in myelination.
Cause of death: Lucassen et al noted that certain causes of death, such as sepsis, might be more likely to cause a breakdown of protein post-translational modifications. In the case of the other neurogenesis marker studied, PSA-NCAM, its poly-sialic group might have been lost in hypoxic brains that have substantial perimortem lactic acid production and resulting acidity.
Need for 3d data: Lucassen et al note that the individual EM images presented by Sorrells et al are difficult to interpret because brain cells have complicated, branching morphologies. Instead, they suggest that 3d reconstructions of serial EM images would be more dispositive. Creating 3d reconstructions is often more difficult to accomplish in postmortem human brain tissue compared to rodent brain tissue if the cell processes span a volume that is too large to be effectively preserved by immersion fixation and perfusion fixation is not possible.
I don’t know enough about human neurogenesis, DCX, PSA-NCAM, or the other areas discussed to know if Lucassen’s critiques mean that Sorrells et al’s data truly won’t replicate. But I found the methodological critiques to be valid and important.
Immersion fixation of a human brain is fairly slow. One study found that it took an average of 32 days for single brain hemispheres immersion fixed in 10% formalin to be fully fixed (as proxied by achieving the minimum T2 value).
This means that fixative won’t reach the tissue in the innermost regions of the brain until a substantial amount of tissue disintegration has already occurred.
Here are a few approaches to speed up immersion fixation in brain banking protocols. For each approach, I’m also going to list a rough, completely arbitrary estimate of the probability that it would actually speed up the fixation process, as well as some potential downsides.
1) Cutting the brain into smaller tissue sections prior to immersion fixation. This approach is the most common approach already used to speed up immersion fixation. It relies on the obvious idea that if you directly expose more of the tissue to the fixative, the process of fixation will finish faster. I list it here for completeness.
Probability of speeding up immersion fixation: Already demonstrated.
Downsides: Damage at the cut interfaces, difficulty in inferring how cellular processes correspond between segments, mechanical deformation, technical difficulty in cutting fresh brain tissue in a precise manner.
2) Using higher concentrations of fixative. This makes biophysical sense according to Fick’s law of diffusion, as a higher concentration gradient of fixative should increase its rate of diffusion into the tissue. One study found that 10% formalin led to a faster fixation rate in pig hearts, at least at the longest time interval studied (168 hours):
If 10% is faster than 2% or 4%, then 100% formalin would likely be faster than 10%.
Probability of speeding up immersion fixation with 50-100% compared to 10% formalin: 95%
Downsides: 100% formalin could produce more toxic fumes, it is likely more expensive, and it is not as easily accessible. It could also lead to more overfixation (e.g. antigen masking) of outer surface regions, although it theoretically could reach parity on this measure if a shorter amount of time were used for the fixation.
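The concentration effect can be sketched with a toy diffusion model. This is a minimal sketch, not a validated model of fixation: it treats the brain as a semi-infinite slab obeying Fick’s second law, ignores the fact that formaldehyde is consumed by crosslinking as it penetrates, and all parameter values (diffusion coefficient, threshold concentration, depth) are illustrative assumptions.

```python
# Semi-infinite slab model of fixative penetration (Fick's second law):
#   C(x, t) = C0 * erfc(x / (2 * sqrt(D * t)))
# We solve for the time t at which the concentration at depth x first reaches
# a fixed "crosslinking threshold". All numbers below are illustrative guesses.
import math

def erfc_inv(y):
    """Numerically invert erfc on (0, 1] by bisection (stdlib has no inverse)."""
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if math.erfc(mid) > y:  # erfc is decreasing, so the root is to the right
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def time_to_threshold(depth_cm, c0, c_thresh, d_cm2_per_s):
    """Seconds until the concentration at depth_cm first reaches c_thresh."""
    z = erfc_inv(c_thresh / c0)
    return depth_cm ** 2 / (4 * d_cm2_per_s * z ** 2)

D = 5e-6        # cm^2/s, rough order of magnitude for a small aldehyde in tissue
C_THRESH = 0.4  # assumed threshold concentration (% formaldehyde) for fixation

# 10% formalin is ~4% formaldehyde; stock formalin is ~37% formaldehyde
t_low = time_to_threshold(2.0, c0=4.0, c_thresh=C_THRESH, d_cm2_per_s=D)
t_high = time_to_threshold(2.0, c0=37.0, c_thresh=C_THRESH, d_cm2_per_s=D)

print(f"days to threshold at 2 cm depth, 10% formalin:   {t_low / 86400:.1f}")
print(f"days to threshold at 2 cm depth, stock formalin: {t_high / 86400:.1f}")
```

Under these assumptions, raising the boundary concentration from ~4% to ~37% roughly halves the time to reach the threshold at 2 cm depth, matching the Fick’s-law intuition; in real tissue, reaction-diffusion effects from crosslinking would change the numbers.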
3) Using the cerebral ventricles as an additional source of fixative immersion.
If you can access the ventricles of the brain with a catheter or some other device, you could allow fixative to diffuse into the ventricles. This would allow for a substantially increased surface area from which fixatives can diffuse.
Because the cerebral ventricles are already there, using them allows for some of the advantages of the dissection approach without having to cut the brain tissue (other than the tissue damaged when placing the catheter(s) into the ventricles).
Access to the lateral ventricle is likely part of why immersion fixation is much faster after hemisecting the brain, which is already commonly done in brain banking protocols.
Probability of speeding up immersion fixation: 50%. There are plenty of unknowns here. For example, are the ventricles already accessed through the cerebral aqueduct or canal when the brain is removed through the skull in standard immersion fixation? Do the ventricles collapse ex vivo or when the brain is taken out of the skull, rendering the approach much less effective? The uncertainty here should be attributed to my own ignorance of this literature, as other people are likely aware of the answers.
Downsides: Damage to parenchyma where the catheters are inserted, increased complexity of the procedure.
4) Using a different fixative, such as glyoxal. Why use it at all if it likely diffuses more slowly than formaldehyde? It’s not all about how quickly a fixative agent reaches the target tissue; what stops disintegration and stabilizes the tissue is how efficiently it crosslinks once it gets there. Glyoxal is the smallest dialdehyde, so it might be a bit of a Goldilocks in the crosslinking efficiency vs diffusion speed trade-off. But, again, this is pie-in-the-sky and would need actual testing.
Probability of speeding up immersion fixation: 10% with glyoxal, 90% with some other fixative or combination of fixatives. It seems unlikely — but possible — that the first fixative ever used would just happen to be the best at immersion fixation of large tissue blocks.
Downsides: Other fixatives will likely be more expensive, less accessible, and cause artifacts that are harder to adjust for than the well-known ones caused by formaldehyde.
5) Ultrasound-enhanced delivery. Ultrasound has been shown to increase the speed of fixation in tissue blocks. One study found that ultrasound increased delivery speed of non-fixative chemicals (at the end of a catheter) by 2-3x. The mechanism is unknown, but could involve heat, which is already known to increase diffusion speed (not ideal, as this would also likely increase tissue degradation), and/or acoustic cavitation, a concept that I don’t fully understand, but which can apparently speed liquid diffusion directly.
Probability of speeding up immersion fixation: 50%. I’d like to see these studies done on more brain tissue and for them to be replicated. However, they are pretty promising.
Downsides: Ultrasound might itself damage cellular morphology and/or biomolecules. However, considering that ultrasound has also been used in vivo, eg for opening the blood-brain barrier, it is unlikely to cause too much damage to tissue ex vivo, at least when using the right parameter settings.
6) Convection-enhanced delivery. This technique, which has primarily been used in neurosurgery, involves inserting catheters into brain parenchyma in order to help distribute chemicals such as chemotherapeutic agents. There’s no reason why this couldn’t be leveraged for brain banking as well.
Certain areas of the brain, perhaps the innermost ones that would otherwise take forever to be fixed, could be chosen to have small catheters inserted, allowing local delivery of fixative.
This would allow for an increase in the “effective surface area” of the fixative while minimizing damage due to sectioning and allowing the brain to remain intact.
Probability of speeding up immersion fixation: 99%. It’s hard to see how convection-enhanced delivery of fixatives via catheters inside the brain parenchyma wouldn’t speed up immersion fixation, but since I’m not aware of studies on it, there may be some technical difficulties that I’m not recognizing.
Downsides: Damage to the brain tissue from inserting the catheters, potential build-up of fluid pockets of fixative near the catheter tip that could damage nearby tissue if the infusion rate is too high, increased complexity, cost, and time for the procedure.
7) Shaking or stirring the fixative continuously (added 8/18/2019). This will increase the speed of fixation in a way analogous to convection-enhanced delivery: it adds convective flow, but at the tissue surface rather than inside the tissue, preventing a depleted boundary layer of fixative from forming.
The optimal rate of shaking or stirring is TBD and will depend on various factors specific to the experiment. Among other factors, there is likely a trade-off between such light shaking that it doesn’t have an effect and such vigorous shaking that it will damage the brain tissue due to the translational motion, similar to a concussion.
Probability of speeding up immersion fixation: 99%. This approach makes perfect biophysical sense and it has already been shown to significantly increase fixation speed in freeze substitution. So it should very likely speed up the process of suprazero temperature fixation as well.
Downsides: Concussion-like damage to the brain, increased complexity, and possible increased displacement of solutes within the brain.
One of the common considerations when prescribing haloperidol is whether it will prolong the QT interval. This is a measure of the heart rhythm on the EKG that correlates with one’s risk for serious arrhythmias such as torsades de pointes.
Earlier this year, van den Boogaard et al published one of the largest RCTs to compare haloperidol against placebo (700+ people in both groups).
Their main finding was that prophylactic haloperidol was not helpful for reducing the rate of delirium or improving mortality.
But one of their most interesting results was the safety data, which showed that their dose of haloperidol had no effect on the QT interval and did not increase rates of extrapyramidal symptoms. Their regimen was 2 mg IV haloperidol every 8 hours, roughly equivalent to ~10 mg of oral haloperidol per day.
The maximum QT interval was 465 ms in the 2 mg haloperidol group and 463 ms in the placebo group, a non-significant difference with a 95% CI for the difference of -2.0 to 5.0.
Notably, they excluded people with acute neurologic conditions (who may have been more likely to have cardiovascular problems) and people with QTc already > 500 ms, which makes generalization of this finding to those groups a bit tricky.
BIIB076 = a monoclonal antibody against both monomeric and fibrillar tau
Both of these drugs are also being tested in PSP, which is a relatively rare tauopathy that is classically familial in a way that AD isn’t: in PSP, the 1-5% of familial cases are known to be caused by certain MAPT mutations. By contrast, I don’t know of well-validated MAPT mutations associated with increased risk of Alzheimer’s, aside from some preliminary reports of small statistical associations, such as this one.
To try to force myself to be accountable and quantitative, what is my prediction for the probability that each of these two drugs will be approved by the FDA by the end of 2025? Same rules and disclosures as my previous post about this, but two years extended because these drugs are in earlier stages of development.
I’m going with 2.5% for BIIB092 (in phase II) and 1.5% for BIIB076 (still in phase I). Clearly abnormalities in tau proteins are highly associated with pathogenesis in AD, indeed more strongly associated than Aβ, and there have been a number of suggestions that the tau abnormalities are causal.
But in my opinion, we don’t know for sure yet that these tau abnormalities are truly causal, and that stopping tau aggregation will be helpful.
On one hand, if an anti-tau antibody works, why shouldn’t an anti-NFL antibody, or any of the other proteins that are markers of axonal damage in AD and are inversely associated with cognitive status? Maybe they all would, but this thought experiment is a bit troubling to me.
On the other hand, anti-tau antibodies have already been shown to be helpful in an APP-overexpressing AD mouse model, improving both cognitive function and the proportion of mushroom dendritic spines.
It is asking a lot, but I would be more confident about the clinical relevance of this type of mouse study if it were shown that immunotherapies against other protein markers of axon damage, such as anti-NFL antibodies, were not successful in ameliorating cognitive decline, as a negative control.
Certainly I will be rooting for these anti-tau drugs to be successful in clinical trials and I think they make a lot of sense, but like most AD drugs in development, my prediction is that they are a long shot.
After reading Phil Tetlock and Dan Gardner’s book Superforecasters, I’ve decided to try to make prospective, quantitative predictions about AD therapies currently in clinical trials with an endpoint of decreasing cognitive decline.
I have investments in S&P index funds but no individual stocks. I’m funded with an NIH training grant for AD. However, everything in this post is based on public information.
My (in-progress) thesis is on one aspect of the basic biology of AD, but I still don’t feel that I know all that much about AD, which is such a broad topic. I certainly don’t want to make it seem like I’m calling myself an expert. Still, that seems like a good reason to make predictions: to create an incentive to learn more and hold myself accountable.
Most AD drugs fail, and the biggest hurdle is the phase III clinical trial. From 2002-2012, only 1 of 54 drugs tested in phase III clinical trials was approved by the FDA (memantine was the only approval; data from here).
Since 2012, there have been several additional high-profile phase III failures, including the amyloid immunotherapy drug Solanezumab and the BACE inhibitor Verubecestat, and no additional FDA approvals.
This is our reference class: we should expect that only ~1.85% (1/54) of drugs entering phase III AD clinical trials will be approved.
Maybe we can raise the probability of a generic AD drug approval a little bit, since we presumably know more now about science, medicine, and AD in particular than we did in past decades.
On the other hand, if our current theories driving AD drug development (such as the amyloid hypothesis) happen to be particularly misguided, then the probability of approval might be lower than it was in the past. Plus, it’s plausible that the FDA has more experience and will require more rigorous evidence for approval now than it did in the past.
Overall, I think 1-4% is a reasonable prior for a new therapy in a phase III AD clinical trial today.
You might be asking: shouldn’t more AD drugs have been approved by the FDA by chance alone? If all you need is two-tailed p < 0.05, and a positive result in the right direction should happen 2.5% of the time under the null, why have only 1.85% of AD drugs been approved? Part of the reason is that FDA approval criteria are stricter than simply p < 0.05, and require “independent substantiation.” That said, it’s daunting that the probability of phase III AD therapy approval is close to chance levels.
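To make the reference class concrete, here’s a quick back-of-the-envelope calculation. The Jeffreys prior and the Monte Carlo credible interval are my own modeling choices, not anything from the cited data; the point is just to check that 1 approval in 54 trials is consistent with a 1-4% prior.

```python
# Reference class: 1 approval (memantine) out of 54 phase III AD drugs, 2002-2012.
# With a Jeffreys Beta(0.5, 0.5) prior, the posterior for the approval rate is
# Beta(1.5, 53.5). The stdlib has no Beta quantile function, so approximate the
# 95% credible interval by Monte Carlo sampling.
import random

random.seed(0)
successes, trials = 1, 54
base_rate = successes / trials

draws = sorted(random.betavariate(successes + 0.5, trials - successes + 0.5)
               for _ in range(100_000))
lo, hi = draws[2_500], draws[97_500]  # empirical 2.5th and 97.5th percentiles
print(f"base rate: {base_rate:.2%}")
print(f"95% credible interval: {lo:.2%} to {hi:.2%}")

# FDA approval typically requires "independent substantiation", e.g. two
# positive trials. Under the null, two trials each hitting two-tailed p < 0.05
# in the right direction happens with probability:
p_chance = 0.025 ** 2
print(f"chance-level two-trial approval rate: {p_chance:.4%}")
```

The credible interval comfortably contains the 1-4% range, and the two-trial chance rate (~0.06%) is one way to see why requiring independent substantiation pushes chance-level approvals well below 2.5%.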
Note about my motivations
I, like most people, urgently hope that all of these drugs work. I’m not being critical about their chances because I want them to fail; I’m doing it so that I can build more accurate models of how the AD field operates.
Choosing drugs for evaluation
The amazing AlzForum has a great page where they list 22 therapeutics currently in phase III clinical trials. Of these, here are some that I will not be evaluating:
7. Masitinib: A mast cell inhibitor, but the clinical trial for AD doesn’t seem to be updating anymore and I can’t find information about it online.
8. CPAP: A prevention trial with an unclear path (to me) towards FDA approval.
9. Crenezumab: I’m confused by the possible path to approval for this passive Aβ immunotherapy in the context of late-onset AD, since it has already failed in trials of mild to moderate AD. I’m not saying that it’s impossible, of course, just that I don’t understand well enough how it would work to assign a probability.
10. Gantenerumab: This drug also failed in a Phase III clinical trial already in early stage symptomatic patients, so I am confused by its path to FDA approval.
11. Solanezumab. In late 2016, this Aβ immunotherapy was found to have failed in clinical trials.
12. Verubecestat. Merck’s BACE inhibitor just had its Phase III clinical trial stopped a few weeks ago.
13. Idalopirdine: Already failed in one Phase III clinical trial.
That leaves 9.
To cover my bases, I also did a search for “alzheimer | Open Studies | Phase 3” at clinicaltrials.gov, where these studies are registered.
Through this search, I found some others I’m not going to consider, because I only want to evaluate drugs that, if they fail, will fail due to lack of efficacy rather than lack of interest.
1. Coconut oil: specifically the Fuel for Thought version. I will not make a prediction on its FDA approval status since according to the company’s website it is already “generally recognized as safe” by the FDA.
2. tDCS: This modality has one trial that I will not be considering, since with only n = 100 in a phase II/III study, I doubt it has a large enough sample size to support FDA approval.
3. There’s a phase III trial of purified EPA, but I’m not going to consider it because it seems similar to coconut oil in terms of its path to FDA approval.
5. Nasal insulin: I’m also confused about the path to monetization and FDA approval of this drug, so I’m not going to evaluate it.
However, I did find sodium oligomannurarate and JNJ-54861911 using this search, and I’m adding them to the list. Another 2 makes 11 total therapies for predictions.
1. Levetiracetam (a widely used, FDA-approved anti-epileptic drug). AFAIK this is not yet in phase III: there’s one trial in phase II, although there is a phase III trial planned. Let’s assume for the purposes of prediction that the MCI phase III trial happens, with a primary endpoint of decreasing the rate of cognitive decline.
The idea that AD might be related to circuit/neuronal network dysfunction is very much in the air right now, eg following the report last month that flashing light on the retinas at particular frequencies to induce gamma rhythms leads to dramatic cognitive improvements in the 5xFAD mouse model of AD.
Levetiracetam is already widely used in AD patients with seizures, making it likely to be safe.
One could imagine that if levetiracetam were successful in reducing cognitive decline in individuals without overt seizures, it might make sense to reconceptualize one aspect of AD around something like microseizures. We know that seizures are much more likely following strokes and other brain injuries, making this a plausible hypothesis.
Probability of FDA approval by the end of 2023: 2%
By adding inhaled cromolyn, another anti-inflammatory drug that is approved as an asthma prophylactic, the funders are hoping that their trial will be different. A nice 2015 study showed that IP cromolyn administration decreases soluble monomeric Aβ-42 by about half in the APPswe/PS1dE9 mouse model of AD. Otherwise, there’s not much else published.
If this works, it would really emphasize the importance of systemic (as opposed to brain-specific) inflammation in AD and maybe mast cells in particular. But given the previous failures of systemic anti-inflammatory treatments, it seems pretty unlikely.
Probability of FDA approval by the end of 2023: 1.5%
3. AZD3293 (Lilly’s BACE inhibitor). BACE is a critical part of the amyloid processing pathway, and this small molecule inhibits it. AZD3293 has been shown in human studies to robustly decrease plasma and CSF Aβ42 and soluble AβPP β.
AZD3293 is in two large clinical trials, NCT02245737 with estimated n = 2202 and NCT02783573 with estimated n = 1899. The trials are for relatively early AD, with requirements of MMSE > 21 and > 20, respectively.
What is great about Eli Lilly’s large investment in AZD3293 is that we will have a very good sense of whether and how effective this drug is, as well as the extent to which decreasing CSF/brain amyloid leads to improved cognition. Assuming they release the data that they generate for analysis (which they probably will), they deserve a lot of credit for this undertaking.
Unfortunately, the negative Verubecestat (Merck’s BACE inhibitor) trial results that came out a few weeks ago are disheartening for the prospect of BACE inhibition in AD. It’s also possible that off-target effects of BACE inhibition, such as effects on myelination, may worsen cognition to an equivalent or greater degree than the benefit from decreased Aβ production.
However, it’s still totally plausible that this drug will work where the Merck BACE inhibitor did not, since there will clearly be differences in their effects, including pharmacokinetics and off-target effects.
Overall this feels like another clear test of the amyloid hypothesis: if you lower Aβ levels, will you reduce the rate of cognitive decline? If this drug reduces Aβ but still doesn’t work, it seems that it will really be time for the field to go back to the basics.
Probability of FDA approval by the end of 2023: 5%
4. Aducanumab (Aβ passive immunotherapy). This is a monoclonal antibody that preferentially binds parenchymal, aggregated forms of Aβ.
It was derived from older donors who were cognitively healthy, with the assumption that it may have helped prevent them from developing AD.
It is probably the most promising drug in AD right now and its dose-dependent amyloid and cognitive effects in humans were described in a Nature paper in September 2016.
This seems to be the consensus “most likely to work” of the current drugs in clinical trials. There is still some reason for skepticism, though.
First, it’s not entirely clear to me why this amyloid reduction technique works when so many other Aβ therapies have failed.
And while Fig 3 of the Nature paper shows some nice dose-dependent effects, the error bars are still pretty wide.
There are currently two phase III clinical trials for the drug, each with n = 1350 participants, requiring a positive amyloid PET scan and CDR = 0.5 and MMSE 24-30, which is early in the disease process. Results in 2019 and 2020.
Probability of FDA approval by the end of 2023: 20%
5. Azeliragon (small-molecule RAGE inhibitor). This drug failed clinical trials in the mid-2000s, but the lower dose may have shown an effect, and it has now been taken back to a phase III trial at the lower dose in participants with MMSE 21-26 and an MRI scan consistent with probable AD.
If this drug works at the lower dose, it suggests that astrocyte and microglial inflammation are particularly strong targets in AD.
Probability of FDA approval by the end of 2023: 2%
6. E2609 (Biogen’s BACE inhibitor). This small molecule has been shown to reduce Aβ in the CSF and serum of non-human primates. This is being tested in a large (n = 1330) phase III clinical trial. Results expected by 2020.
As with the other BACE inhibitors, it’s plausible but not clear to me why this drug should succeed where Verubecestat failed, so I will give it the same probability as AZD3293. However, it also has only one trial as opposed to AZD3293’s two, so it seems slightly less well powered to detect a small effect on improving cognition.
Probability of FDA approval by the end of 2023: 4%
7. Nilvadipine (L-type Ca channel blocker). This anti-hypertensive drug was previously considered a possible therapy for AD in the context of reducing blood pressure, which can decrease the rate of cognitive decline. It made a splash in 2011 when it was found to decrease Aβ in vitro, and it is currently in a Phase III trial that should report results by the end of this year.
Given the spate of Aβ therapy failures, the Aβ reduction is not as promising as it was 6 years ago, although the drug may have other effects and may also reduce cognitive decline through its effects on blood pressure.
Probability of FDA approval by the end of 2023: 2.5%
8. Sodium oligomannurarate. There is not much information about this drug online, besides the clinical trial notice and this phase II trial report from Medscape, which notes that it did not meet its primary cognitive endpoint (ADAS-cog/12). Not much to go on here.
Probability of FDA approval by the end of 2023: 0.75%
9. JNJ-54861911 (Janssen BACE inhibitor). Another BACE inhibitor that is currently in a phase II/III trial.
Probability of FDA approval by the end of 2023: 4%
10. CAD106 (active Aβ immunotherapy) + CNP520 (BACE inhibitor). This is an active vaccination strategy for Aβ, which would be fantastic for the field if it worked, since it is likely to be much cheaper than passive Aβ immunotherapy.
Overall, the probability here feels similar to the probability of the other Aβ therapies. The combination of the active immunotherapy alongside BACE inhibition makes this trial intriguing.
Probability of FDA approval by the end of 2023: 4.5% (that at least one of the two or the combination will be approved)
11. Pioglitazone (PPARγ agonist, insulin sensitizing, small molecule). This drug is approved to treat type 2 diabetes. PPARγ agonism has been shown to play a role in inflammatory processes in the brain.
It is being studied in an extremely large study (n = 3494) that is coupled with a genetic risk model that includes APOE e4 and TOMM40.
I think that this trial has some potential, based in part on mouse model data as well as a variety of data suggesting an interplay between hyperglycemia-associated toxicity and risk of AD.
Probability of FDA approval by the end of 2023: 4% (prediction)
Overall probability that at least one drug will be approved for cognition in late-onset AD within the next 6 years (not necessarily one of the above): 35%
Obviously, these predictions are highly correlated.
For example, if one of the remaining BACE inhibitors works, then that makes it more likely that others will too.
As another example, if any of the amyloid therapies finally work, then that makes it more likely that the others will.
If any of the drugs work, that makes it more likely that all of the others will, because maybe clinical trial strategies (eg, enrolling patients earlier in the disease process) are generally more apt than they were previously.
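As a sanity check on how these correlated predictions combine, here is the standard calculation under an independence assumption (the list simply restates the per-drug probabilities given above). This is only a bound, not the actual estimate: positively correlated outcomes make “none approved” more likely than the product of the individual probabilities, so the true chance for these 11 drugs sits below the independence value.

```python
# Per-drug approval probabilities as predicted above, in order: levetiracetam,
# cromolyn, AZD3293, aducanumab, azeliragon, E2609, nilvadipine,
# sodium oligomannurarate, JNJ-54861911, CAD106+CNP520, pioglitazone.
probs = [0.02, 0.015, 0.05, 0.20, 0.02, 0.04, 0.025, 0.0075, 0.04, 0.045, 0.04]

# P(at least one) = 1 - P(none), assuming independent outcomes.
p_none = 1.0
for p in probs:
    p_none *= 1.0 - p
p_at_least_one = 1.0 - p_none

print(f"P(at least one approved | independence): {p_at_least_one:.1%}")
```

Under independence this comes to roughly 41%; since positive correlation pulls the true value for these 11 drugs below that, it is consistent with an overall 35% estimate that also includes drugs not on this list.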
There’s also some uncertainty around how the FDA will work over the next 6 years. I’m talking about cognitive efficacy approvals, not biomarker approvals.
To be explicit: if a drug is approved preliminarily based on biomarkers but not cognitive efficacy, I’m not going to count it as an approval for the purposes of these predictions.
I’ll note that I’m a bit nervous in making these predictions public. What if they are all horribly wrong?
But I hope that we will move towards a world where people make more quantitative public predictions and are incentivized to do so. Of course, I plan to evaluate these predictions in 6 years and hold myself accountable.
David Bennett has an interesting proposal for future AD/dementia trials. He points out that a large percentage of persons diagnosed with AD actually have mixed pathology, and that solving each of those pathologies with an individual agent is going to be difficult and costly. Instead, he proposes that if one were to target neural reserve — i.e., resilience against dementia — that might be a more tractable strategy.
[C]onsider neural reserve as a therapeutic endpoint. There is no evolutionary pressure to create systems that protect the brain from any brain pathology of old age, let alone different systems that offer protection from different pathologies. Thus, finding that myriad factors alter the trajectory of cognitive decline agnostic to underlying brain pathologies is expected. A hypothetical therapeutic agent that targets neural reserve could be used to offset any and likely all common brain pathologies that alter cognition.
I guess the main problem with this proposal is that it might be harder to find this sort of an all-encompassing pro-cognitive aging agent. If it’s easily available as an exogenous chemical and has a strong effect, why didn’t natural selection already sculpt our brain so that it — or its effects — would be present? I disagree that there is prima facie no evolutionary pressure for this, given the potential importance of kin selection in our evolutionary history.
But that’s just speculation and Bennett’s strategy deserves serious consideration, especially given the series of failures of AD clinical trials.
In terms of already available agents, my immediate thought is nootropics, i.e., drugs meant to boost one’s cognition. Some of the most commonly discussed nootropics are caffeine, nicotine, and modafinil. Caffeine has been shown to be protective against dementia in the epidemiologic studies you read about in the news every week. In a very small trial (n = 23), modafinil was not helpful for apathy in AD. Nicotine has been fairly widely studied in AD, but its efficacy is still not clear. All of them deserve more attention, but it’s difficult to fund these trials because generic versions of all of them are now available.
Classic Paper: Elkes J, Elkes C. Effect of chlorpromazine on the behavior of chronically overactive psychotic patients. Br Med J. 1954;2(4887):560-5
In 1950, a group of anesthesiologists in France were trying to find new drugs for anesthesia. They tested the newly synthesized drug chlorpromazine on animals (including dogs and rodents) and found that it led to drowsiness and indifference to aversive stimuli.
Since this was the 1950s, they were able to quickly try it on people as a booster for anesthesia. They found that people who took chlorpromazine did not lose consciousness, but that it had a profound calming effect. People quickly thought of trying it on patients with psychosis, for which the available treatments were very limited.
This study by Joel Elkes and Charmian Elkes, who were married, was the first to report a placebo-controlled trial of the effect of chlorpromazine in psychosis. It appears that the majority of the data collection and work was done by Charmian, rather than Joel.
They used a classic crossover study design, testing each patient on both chlorpromazine and an inert placebo (although they do not use the word “random”). They used notes written by doctors and nurses who were blinded to the treatment to decide whether each patient had improved.
Of the 23 patients with some form of psychosis in their study, 7 (30%) showed “definite improvement” while taking the drug compared to when they were not, 11 (48%) showed “slight improvement,” and 5 (22%) showed “no improvement.”
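As a sanity check on the arithmetic, the reported percentages follow directly from the raw outcome counts (the counts are from the paper; the sketch below just tallies them):

```python
# Outcome counts reported by Elkes & Elkes for the 23 patients with psychosis
outcomes = {"definite improvement": 7, "slight improvement": 11, "no improvement": 5}
n = sum(outcomes.values())  # 23

for label, count in outcomes.items():
    print(f"{label}: {count}/{n} = {count / n:.0%}")
# → definite improvement: 7/23 = 30%
# → slight improvement: 11/23 = 48%
# → no improvement: 5/23 = 22%
```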
Other interesting notes from the paper:
They describe the effect of chlorpromazine as symptomatic, since the psychosis itself did not abate: “the essentially symptomatic nature of the response has already been stressed, and cannot be overemphasized. Although affect became more subdued, and attitude and behaviour reflected this improvement, the ingrained psychotic thought disorder seemed to be unchanged.”
Because of their detailed records, they noted significant weight gain in 9 of the 23 patients (in all of whom the drug had led to at least a slight improvement), which has since been borne out both for chlorpromazine and for the drug class in general: almost all antipsychotics result in weight gain. Of this effect, they say: “For the present we are inclined to attribute this to improved eating habit as the patients became less tense, less preoccupied, or less assaultive; though more direct metabolic effects of the drug cannot be excluded.”
They also tried it on 3 patients with senile dementia, all of whom had “no improvement.” This is yet another example of how Alzheimer’s is where drug discovery goes to die.
Notably, the mechanism remained largely unknown until the mid-1960s, when it was shown that levels of dopamine metabolites correlated with the chlorpromazine dose given to animals. In 1976, Seeman et al. found a nearly perfect correlation (on the log-log scale) between each antipsychotic drug’s ability to displace haloperidol from the dopamine receptor and the clinical dose required for its effect.
Interestingly, you can see in this figure that chlorpromazine actually has one of the weaker dopamine receptor affinities, and one of the higher doses required for controlling schizophrenia. Despite this, it and its derivatives have gone on to become some of the most game-changing psychiatric drugs of all time.
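To make the shape of the Seeman et al. result concrete, here’s a minimal sketch of a log-log correlation between receptor affinity and clinical dose. The affinity and dose values below are made up for illustration; they are not the paper’s actual measurements:

```python
import math

# Illustrative values only (NOT Seeman et al.'s data): receptor affinity
# K_i (nM) and typical clinical dose (mg/day) for five hypothetical drugs.
ki_nM = [1.0, 3.0, 10.0, 30.0, 100.0]
dose_mg = [2.0, 8.0, 25.0, 90.0, 300.0]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# The relationship is roughly linear on the log-log scale, so correlate the logs:
log_ki = [math.log10(k) for k in ki_nM]
log_dose = [math.log10(d) for d in dose_mg]
print(f"log-log correlation: r = {pearson(log_ki, log_dose):.3f}")
```

On the raw (unlogged) scale the same data would look curved; taking logs of both axes is what turns a power-law relationship into a straight line with a high Pearson r.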
Shen WW. A history of antipsychotic drug development. Compr Psychiatry. 1999;40(6):407-14.
Elkes J, Elkes C. Effect of chlorpromazine on the behavior of chronically overactive psychotic patients. Br Med J. 1954;2(4887):560-5.
Bak M, Fransen A, Janssen J, Van os J, Drukker M. Almost all antipsychotics result in weight gain: a meta-analysis. PLoS ONE. 2014;9(4):e94112.
Seeman P, Lee T, Chau-wong M, Wong K. Antipsychotic drug doses and neuroleptic/dopamine receptors. Nature. 1976;261(5562):717-9.
Classic Paper: Cade JF. Lithium salts in the treatment of psychotic excitement. Med J Aust 1949; 2:349-352
Prior to 1949, treatments for mania were limited. That year, John Cade published a paper showing the usefulness of lithium in treating patients with mania (“psychotic excitement”).
Interestingly enough, the finding was apparently a surprise to Cade. He was studying guinea pigs in order to see whether uric acid added to the convulsive toxicity of urea, but he needed to find a way to make uric acid soluble in water to be able to inject it into the guinea pigs. (Confusingly enough, urea and uric acid have almost nothing to do with one another chemically.)
For this, he used the lithium salt of urate, and was surprised to find that it was protective against the urea-induced convulsions. He then injected lithium carbonate alone into guinea pigs, and noted that after a couple of hours, they became lethargic and unresponsive to stimuli.
Skipping straight from this effect in guinea pigs (not even a disease model!! — this would never be allowed today) to humans, Cade then reports on 10 cases of patients with mania who were successfully treated with lithium, including longitudinal cases of chronic mania where the mania subsided during lithium treatment and recrudesced when lithium was discontinued.
Other interesting aspects of this paper:
Cade notes that historically, water from certain wells was associated with improvements in mental illness, and speculates that “it is very likely that their supposed efficacy was a real efficacy and directly proportional to the lithium content of the waters.”
Cade notes that lithium treatment “would be much preferred” to what is usually now considered the cruel treatment of prefrontal leucotomy, even though this (1949) was the year that the Nobel prize was awarded for it, and its use continued into the mid-1950s.
All of the cases reported were men between 40 and 65 years old, so the paper provides no evidence that the effect generalizes to more diverse patient populations.
A 2011 meta-analysis has shown that antipsychotics are more effective than lithium in the treatment of acute mania (e.g., the standardized mean difference in manic symptoms for haloperidol is -0.56, while for lithium it is -0.37), but lithium is still often used in combination with antipsychotics in the treatment of mania.
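For readers unfamiliar with the effect size used here, the standardized mean difference (SMD, i.e., Cohen’s d) is the difference in mean symptom change between treatment and control, divided by the pooled standard deviation. A minimal sketch with made-up group summaries (not Cipriani et al.’s data):

```python
import math

def smd(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Standardized mean difference (Cohen's d) using a pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2)
        / (n_treat + n_ctrl - 2)
    )
    return (mean_treat - mean_ctrl) / pooled_sd

# Hypothetical change scores on a mania rating scale (more negative = better):
# treatment improved by 15 points, placebo by 10, both with SD 9, n = 100 each.
print(f"SMD = {smd(-15.0, -10.0, 9.0, 9.0, 100, 100):.2f}")  # → SMD = -0.56
```

Because the SMD is unitless, it lets a meta-analysis pool trials that used different mania rating scales, which is why it is the metric reported for both haloperidol and lithium above.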
Overall, this short paper is among the best I’ve read in terms of scientific puzzle solving, although you could argue that Cade got lucky.
Cade JF. Lithium salts in the treatment of psychotic excitement. 1949. Bull World Health Organ. 2000;78(4):518-20.
Cipriani A, Barbui C, Salanti G, et al. Comparative efficacy and acceptability of antimanic drugs in acute mania: a multiple-treatments meta-analysis. Lancet. 2011;378(9799):1306-15.
Doig MT, Heyl MG, Martin DF. Lithium and mental health. J Chem Educ. 1973;50(5):343-5.
There’s a nice behind-the-scenes look in Noah Gray’s post discussing this article, which studies connections between the striatum and basal ganglia in mice. Here’s what the reviewers were wondering:
[They] also raised general novelty issues, since it is well-known from many brain areas that any manipulation of circuits on a gross level can lead to innervation changes. A somewhat broad damnation, but worth considering nonetheless. This criticism also related to the next, namely that to really make a valuable contribution to the competitive field of circuit development, the authors would need to expand this study further, supplying additional data allowing a better understanding of other components within the circuit. For example, exploring how exactly the corticostriatal inputs influence basal ganglia synaptogenesis. It begged the question: do the authors understand the physiology and timing well enough to predict how their manipulations would affect the striatum, not by disrupting the striatum itself, but through control of the descending cortical circuits?
Here is an earlier BL post on MSNs in the striatum, which the authors of this article selectively silence as a part of their circuit manipulation.