A new-to-me concept is the idea of an Everest regression — “controlling for altitude, Everest is room temperature” — wherein you use a regression model to remove a critical property of an entity, and then go on to make inappropriate/confusing/misleading inferences about that entity.

My immediate thought is that this is an excellent analogy for one of my concerns regarding regressing out the effect of age in studies of Alzheimer’s disease (AD). It’s such a tricky topic.

On the one hand, not everyone who reaches advanced age develops the amyloid beta plaques and other features that define the cluster of AD pathology. Meanwhile, there are other changes in brain biology that you may see in advanced aging but not AD, such as loss of dendritic spines, epigenetic changes, and accumulation of senescent cells.

On the other hand, advanced age is the most important risk factor for AD and explains most of the variance in disease status on a population basis. Arguably, a key part of why some “oldest old” folks do not have AD is protective factors. There have also been suggestions that accelerated aging is part of AD pathophysiology, although, as far as I can tell, the evidence for this remains preliminary. From this perspective, advanced age in AD is like the high altitude of Everest — it’s one of its key associated features.

So if you are trying to find the effects of AD pathophysiology, for example in a study of postmortem human brain samples, should you adjust for the effect of age or not? This is a practical and tricky question without a clear answer. It probably depends on your underlying model of how AD develops in the first place.

So I think it’s worthwhile to be cognizant of the potential hazards of adjusting for age — namely, that you risk inadvertently performing an Everest regression and removing an important chunk of the pathophysiology that you actually want to understand.
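To make the hazard concrete, here is a toy simulation of an Everest regression. All the numbers are made up for illustration (a ~6.5 °C/km lapse rate and a 15 °C sea-level baseline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated weather stations: altitude in km, temperature set by a
# ~6.5 C/km lapse rate from a 15 C sea-level baseline, plus noise.
alt = rng.uniform(0, 9, 500)
temp = 15 - 6.5 * alt + rng.normal(0, 2, 500)

# Fit temperature ~ altitude by least squares.
X = np.column_stack([np.ones_like(alt), alt])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)

# "Control for altitude" at an Everest-like summit (8.8 km, about -42 C):
summit_alt = 8.8
summit_temp = 15 - 6.5 * summit_alt
adjusted = summit_temp - (beta[0] + beta[1] * summit_alt)
# After adjustment, Everest's temperature looks unremarkable (residual near
# zero) -- we have regressed away the very thing that makes it Everest.
```

The analogy to AD should be clear: an age-adjusted analysis can show “no difference” precisely because age was carrying the signal.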


One of the most remarkable findings in aging over the past decade is that it’s possible to track the rate of aging based on stereotyped DNA methylation changes across a diverse set of tissues. These are known as epigenetic clocks.
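As a rough sketch of the idea — not any published clock; the CpG effect sizes below are invented, and ridge regression stands in for the elastic net used by clocks like Horvath’s — age can be predicted surprisingly well from a linear combination of methylation levels:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "clock": 200 samples x 50 CpG sites, where a handful of sites
# drift linearly with age. All effect sizes are invented for illustration.
n_samples, n_sites = 200, 50
age = rng.uniform(20, 90, n_samples)
effects = np.zeros(n_sites)
effects[:5] = [0.004, -0.003, 0.005, 0.002, -0.004]  # per-year drift
meth = np.clip(
    0.5 + age[:, None] * effects + rng.normal(0, 0.05, (n_samples, n_sites)),
    0, 1,
)

# Ridge regression (closed form) as a simple stand-in for the penalized
# regression used by published clocks.
X = meth - meth.mean(axis=0)
y = age - age.mean()
w = np.linalg.solve(X.T @ X + 1.0 * np.eye(n_sites), X.T @ y)
predicted_age = X @ w + age.mean()
r = np.corrcoef(predicted_age, age)[0, 1]  # "epigenetic" vs chronological age
```

The penalty matters because real clocks fit hundreds of thousands of CpGs on far fewer samples, so an unregularized fit would overfit badly.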

But as anyone in the gene expression field knows, changes in the levels of epigenetic marks between groups (like young vs old) are confounded by cell type proportion differences between those groups.

This cell type proportion confound makes it harder to tell whether the changes in DNA methylation are truly a marker of aging or whether they are due to cell type proportion shifts that are already known to occur during aging, like naive T cell depletion due to thymus atrophy.
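A toy calculation shows how a pure shift in cell type composition can masquerade as an aging-related methylation change (the methylation levels and mixing proportions below are invented):

```python
# Two T cell populations with fixed per-cell-type methylation at one CpG
# site; only the cell type mix changes with age (mimicking naive T cell
# depletion from thymic atrophy). All numbers invented for illustration.
meth = {"naive_T": 0.2, "memory_T": 0.8}
young_mix = {"naive_T": 0.6, "memory_T": 0.4}
old_mix = {"naive_T": 0.2, "memory_T": 0.8}

def bulk_methylation(mix):
    """Bulk (tissue-level) methylation is the mixture-weighted average."""
    return sum(frac * meth[ct] for ct, frac in mix.items())

young_bulk = bulk_methylation(young_mix)  # 0.44
old_bulk = bulk_methylation(old_mix)      # 0.68
# The bulk signal shifts by 0.24 even though no cell type changed its
# methylation at all -- pure composition, not within-cell aging biology.
```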

Single cell epigenetics has the potential to address this problem. By measuring DNA methylation patterns within individual cells, you can compare the epigenetic patterns within the same cell type between groups, and don’t have to worry (as much) about overall changes in cell type proportion [1].

I was interested to see whether anyone has used single cell epigenetic profiling, which has only emerged within the past couple of years, to measure whether changes in epigenetic marks can be seen within single cells during aging.

First, let’s back up a second and talk about epigenetics. Two of the major factors that define a cell’s epigenome are its DNA methylation patterns and its histone post-translational modifications.

DNA methylation has been studied a bit in single cells. One study looked at DNA methylation in hepatocytes and didn’t find many differences between old and young cells.

However, as a recent review points out, single cell DNA methylation data are currently limited by the small amount of sample within each cell, and it is hard to compare methylation patterns between different cells at the same region of the genome.

On the histone modification front, I found a nice article by Cheung et al 2018, who measured histone post-translational modifications (PTMs) in single cells derived from blood samples. They found that in aging, there was increased variability in histone PTMs, both between individuals and between cells.

So, in summary, here are some future directions for this research field that it would be prudent to keep an eye on:

  1. How much of the changes in DNA methylation seen in aging are due to changes in relative cell type proportions as opposed to changes within single cells? If we assume that age-related changes in DNA methylation will be similar to age-related changes in histone PTMs, then Cheung et al.’s results suggest that the changes in DNA methylation are probably due to true changes within single cells during aging.
  2. Is there a way to slow or reverse age-related changes in DNA methylation or histone PTMs, perhaps targeted to stem cell populations? It’s not clear that this can be done in a practical way, especially if age-related changes are driven primarily by an increase in variability/entropy.
  3. If it is possible to slow or reverse DNA methylation or histone PTMs, would that help to slow aging and thus “square the curve” of age-related disease? Aging might be too multifactorial for a single intervention like this to make a major difference, though.

[1]: I say “as much” here because differential expression analysis in single cell data is far from straightforward, and e.g. has the potential to be biased by subtle differences in the distribution of sub-cell types between groups.

One of the common considerations when prescribing haloperidol is whether it will prolong the QT interval. This is a measure of the heart rhythm on the EKG that correlates with one’s risk for serious arrhythmias such as torsades de pointes.
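As an aside, the QT interval is usually reported after correcting for heart rate. A minimal sketch of the common Bazett correction (QTc = QT / √RR, with RR in seconds):

```python
import math

def qtc_bazett(qt_ms: float, heart_rate_bpm: float) -> float:
    """Bazett-corrected QT: QTc = QT / sqrt(RR), with RR in seconds."""
    rr_s = 60.0 / heart_rate_bpm  # RR interval from heart rate
    return qt_ms / math.sqrt(rr_s)

# A raw QT of 400 ms at 75 bpm corrects to roughly 447 ms.
qtc = qtc_bazett(400, 75)
```

Bazett is only one of several correction formulas (Fridericia and others exist), but it is the one most commonly printed on EKG reports.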


Earlier this year, van den Boogaard et al published one of the largest RCTs to compare haloperidol against placebo (700+ people in each group).

Their main finding was that prophylactic haloperidol was not helpful for reducing the rate of delirium or improving mortality.

But one of their most interesting results was the safety data, which showed that their dose of haloperidol had no effect on the QT interval and no increased rate of extrapyramidal symptoms. Their regimen was haloperidol 2 mg IV every 8 hours, which is equivalent to ~ 10 mg of oral haloperidol per day.

The maximum QT interval was 465 ms in the 2 mg haloperidol group and 463 ms in the placebo group, a non-significant difference with a 95% CI for the difference of -2.0 to 5.0 ms.

Notably, they excluded people with acute neurologic conditions (who may have been more likely to have cardiovascular problems) and people with QTc already > 500 ms, which makes generalization of this finding to those groups a bit tricky.

Since I did the same analysis for antidepressants yesterday, I figured that I would analyze the receptor binding profiles of antipsychotics today. Here is a visualization:

[Figure: receptor binding profiles of antipsychotics]

And here is a dendrogram based on a clustering of those receptor affinities:

[Figure: dendrogram of antipsychotics clustered by receptor affinity]

It turns out to be much harder to see these medications cluster by chemical class the way the antidepressants did, but perhaps you will be able to notice some trends.

Here’s my code to reproduce this.
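For a sense of how this kind of dendrogram is built, here is a minimal sketch of the first step of agglomerative clustering on a drug × receptor affinity matrix. Note that the pKi values below are invented for illustration, not real binding data:

```python
import numpy as np

# Hypothetical pKi values for a few antipsychotics at three receptors
# (D2, 5-HT2A, H1). These numbers are made up; the real analysis uses
# published binding affinities.
drugs = ["haloperidol", "clozapine", "quetiapine", "risperidone"]
affinities = np.array([
    [8.5, 5.0, 6.2],
    [6.0, 8.0, 8.3],
    [5.5, 6.5, 8.6],
    [8.2, 9.0, 6.0],
])

# Pairwise Euclidean distances between binding profiles.
dist = np.linalg.norm(affinities[:, None, :] - affinities[None, :, :], axis=-1)
np.fill_diagonal(dist, np.inf)  # ignore self-distances

# The closest pair of profiles is the first merge a dendrogram would make.
i, j = np.unravel_index(np.argmin(dist), dist.shape)
closest_pair = {drugs[i], drugs[j]}
```

Repeating the merge on the resulting clusters yields the full dendrogram; in practice a library routine such as scipy’s `linkage` handles this.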

As I’m trying to learn more about antidepressants, I found it interesting to make a visualization of the receptor binding profiles of some of the better characterized ones, so I thought I would post it here.

[Figure: antidepressant receptor binding profiles]

Some of these medications aren’t widely used anymore or were never pursued for development, so they are also a window into the history of psychiatry and what could have been. This is how the meds cluster based on their receptor binding:

[Figure: dendrogram of antidepressants clustered by receptor binding]

One interesting thing about these clusters is that they cut the medications into groups distinguished by their chemical/drug classes:

  • Group #1: TeCAs like mirtazapine and one TCA, doxepin
  • Group #2: TCAs like amitriptyline and one TeCA, amoxapine
  • Group #3: SSRIs/SNRIs, like fluoxetine and venlafaxine
  • Group #4: Phenylpiperazines, like trazodone
  • Group #5: NRIs/NDRIs, like atomoxetine and bupropion

Here’s my code to reproduce this.



As one of my manifestations of intellectual contrarianism, I like to collect historical examples of times when a largish group of scientists thought that a complicated theory was the best way to explain a set of facts, but then a simpler explanation turned out to be much better.

I especially like examples of this in neuroscience, where people are wont to postulate complicated theories about the way that we think.

There is perhaps no better example than the debate between the reticular theory of the nervous system and the neuron doctrine.

The reticular theory postulated a form of exceptionalism in the nervous system: that axons and dendrites seen on light microscopy were not attached to cells but were in fact a separate, non-cellular entity, forming their own protoplasmic network.

The neuron doctrine is, at least in hindsight, much simpler, postulating that axons and dendrites are extensions of cells, as occurs in other types of biology.


Cajal’s drawing of neurons in the chick cerebellum, from Wikipedia

The reticular theory had many proponents, including Camillo Golgi and Franz Nissl, and lasted from 1840-1935. It’s easy to dismiss it now, but it was a reasonable idea at the time.

Now, though, it’s a good example of how theories postulating that the brain is extremely complicated and different from other types of biology do not have a good track record.

In the past 20 years, deep brain stimulation (DBS) has been used for over 100,000 patients with Parkinson’s disease. The success of this procedure has led investigators to try DBS for other neurologic conditions, such as Alzheimer’s disease (AD).

In 2016, Lozano et al reported on one of the largest trials for DBS in AD, the “ADvance” trial, in which they targeted the fornix, a bundle of nerve fibers in the center of the brain that is the major output tract of the hippocampus.


This was a well-run, double-blind, randomized study. One of the nice aspects of brain stimulation trials is the ease of performing a sham stimulation arm: treatment can be randomly turned either “on” or “off” for a period of time, allowing a subset of participants to serve as controls (stimulation “off”) for a period of time before they actually do get the stimulation (stimulation “on”) in case it is helpful.

In terms of the trial results, one of the 42 patients had an implant infection. Overall, the trial did not show a significant benefit in mitigating the decline in ADAS-13 or CDR-SB scores (measures of cognitive function):

[Figure: ADAS-13 and CDR-SB trajectories, stimulation on vs. off]

Lozano et al 2016; doi: 10.3233/JAD-160017

While this trial did not show efficacy at its sample size, personally I expect that DBS for early AD could work to at least alleviate symptoms, if the right circuits were targeted at the right time.

My reasoning here is that we know that a few other cognitive strategies can help slow the course of AD, including processing speed training and acetylcholinesterase inhibitors.

There are at least 4 active DBS trials for AD on clinicaltrials.gov.

It will be interesting to monitor this growing field in the coming years.