
Immersion fixation of a human brain is fairly slow. One study found that it took an average of 32 days for single brain hemispheres immersion-fixed in 10% formalin to become fully fixed (as proxied by reaching the minimum T2 value).

This means that fixative won’t reach the tissue in the innermost regions of the brain until a substantial amount of tissue disintegration has already occurred.
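For a rough sense of the timescale, here is a back-of-the-envelope sketch. It leans on Medawar’s square-root law of fixative penetration, and both numbers in it are assumptions I’m supplying for illustration: a penetration coefficient of ~0.8 mm/√hour, a commonly quoted figure for 10% formalin, and a ~35 mm diffusion path from the brain surface to the most interior tissue.

```python
# Toy calculation, not a validated model: Medawar's square-root law says
# fixative penetration depth grows as d ~ K * sqrt(t).
K = 0.8          # penetration coefficient, mm / sqrt(hour) (assumed)
depth_mm = 35.0  # longest diffusion path from surface to deep tissue, mm (assumed)

hours = (depth_mm / K) ** 2
print(f"~{hours:.0f} hours, i.e. ~{hours / 24:.0f} days")  # ~1914 h, ~80 days
```

That this lands within a factor of a few of the measured ~32 days for hemispheres seems reasonable, since hemisection shortens the deepest diffusion path.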

Here are a few approaches to speed up immersion fixation in brain banking protocols. For each approach, I’m also going to give a rough, completely arbitrary estimate of the probability that it would actually speed up fixation, as well as some potential downsides.

1) Cutting the brain into smaller tissue sections prior to immersion fixation. This is the most common method already used to speed up immersion fixation. It relies on the obvious idea that if you directly expose more of the tissue to the fixative, fixation will finish faster. I list it here for completeness.

Probability of speeding up immersion fixation: Already demonstrated.

Downsides: Damage at the cut interfaces, difficulty in inferring how cellular processes correspond between segments, mechanical deformation, technical difficulty in cutting fresh brain tissue in a precise manner.

2) Using higher concentrations of fixative. This makes biophysical sense according to Fick’s law of diffusion: a higher concentration gradient of fixative should increase its rate of diffusion into the tissue. One study found that 10% formalin fixed pig hearts faster than lower concentrations did, at least at the longest time interval studied (168 hours):

[Figure: fixation rates in pig hearts at different formalin concentrations. Holda et al 2017; PMID: 29337978; FBPS = formaldehyde phosphate-buffered solution]
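For reference, Fick’s first law is the relation doing the work in that argument:

```latex
% Fick's first law: J is the diffusive flux, D the diffusion coefficient
% of the fixative, C its concentration, and x the depth into the tissue.
J = -D \, \frac{\partial C}{\partial x}
```

Since the tissue interior starts with essentially no fixative, raising the bath concentration steepens the gradient at the surface roughly in proportion, and with it the inward flux.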

If 10% is faster than 2% or 4%, then 100% formalin would likely be faster than 10%.

Probability of speeding up immersion fixation with 50-100% compared to 10% formalin: 95%

Downsides: 100% formalin could produce more toxic fumes, it is likely more expensive, and it is not as easily accessible. It could also lead to more overfixation (e.g. antigen masking) of outer surface regions, although it theoretically could reach parity on this measure if a shorter amount of time were used for the fixation.

3) Using the cerebral ventricles as an additional source of fixative immersion.


If you can access the ventricles of the brain with a catheter or some other device, you could allow fixative to diffuse into the ventricles. This would allow for a substantially increased surface area from which fixatives can diffuse.

Because the cerebral ventricles are already there, using them allows for some of the advantages of the dissection approach without having to cut the brain tissue (other than the tissue damaged when placing the catheter(s) into the ventricles).

This approach can also be used when the brain is still inside of the skull, via the use of cranial shunts.

Access to the lateral ventricle is likely part of why immersion fixation is much faster after hemisecting the brain, which is already commonly done in brain banking protocols. 

Probability of speeding up immersion fixation: 50%. There are plenty of unknowns here. For example, are the ventricles already accessed through the cerebral aqueduct or central canal when the brain is removed from the skull in standard immersion fixation? Do the ventricles collapse ex vivo or when the brain is taken out of the skull, rendering the approach much less effective? The uncertainty here should be attributed to my own ignorance of this literature, as other people are likely aware of the answers.

Downsides: Damage to parenchyma where the catheters are inserted, increased complexity of the procedure.

4) Using glyoxal or another fixative as a complementary agent. This is a pie-in-the-sky idea, but what about using a fixative other than formaldehyde? Glyoxal is one possibility. It has potential as an alternative fixative in terms of morphology preservation, and while it doesn’t seem to be quite as efficient a crosslinker as glutaraldehyde, it might diffuse faster because it is smaller. I haven’t been able to find good diffusion time measurements for glyoxal after a brief search. Glyoxal is also likely less toxic than formaldehyde.


Why use it at all if it likely diffuses slower than formaldehyde? Fixation isn’t only about how quickly a fixative agent reaches the target tissue; what actually stops disintegration and stabilizes the tissue is how efficiently it crosslinks once it gets there. Glyoxal is the smallest dialdehyde, so it might be a bit of a Goldilocks in the crosslinking-efficiency vs diffusion-speed trade-off. But, again, this is pie-in-the-sky and would need actual testing.
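One way to make that trade-off concrete is a toy model in which total fixation time at a given depth is diffusion time plus crosslinking time. Every parameter below is invented for illustration; the point is only to show the shape of the trade-off, not to rank real fixatives.

```python
# Toy model of the crosslinking-efficiency vs diffusion-speed trade-off.
# All parameters are invented, not measured values.
# t_total(depth) = (depth / K)**2 + t_crosslink, where K is a Medawar-style
# penetration coefficient (higher = faster diffusion) and t_crosslink is
# the time to form stabilizing crosslinks once the fixative arrives.
fixatives = {
    # name: (K in mm/sqrt(h), crosslink time in h) -- hypothetical numbers
    "formaldehyde":   (0.8, 24.0),  # small: fast diffusion, slow crosslinking
    "glyoxal":        (0.7, 8.0),   # slightly larger: maybe faster crosslinking
    "glutaraldehyde": (0.4, 1.0),   # larger: slow diffusion, fast crosslinking
}

depth_mm = 20.0  # arbitrary target depth
for name, (K, t_crosslink) in fixatives.items():
    t_total = (depth_mm / K) ** 2 + t_crosslink
    print(f"{name:>14}: ~{t_total:.0f} h to fix tissue {depth_mm:.0f} mm deep")
```

With these made-up numbers, diffusion dominates at depth and formaldehyde still wins, which is exactly why the question needs measured diffusion and crosslinking rates rather than intuition.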

Probability of speeding up immersion fixation: 10% with glyoxal, 90% with some other fixative or combination of fixatives. It seems unlikely — but possible — that the first fixative ever used would just happen to be the best at immersion fixation of large tissue blocks.

Downsides: Other fixatives will likely be more expensive, less accessible, and cause artifacts that are harder to adjust for than the well-known ones caused by formaldehyde.

5) Ultrasound-enhanced delivery. Ultrasound has been shown to increase the speed of fixation in tissue blocks. One study found that ultrasound increased delivery speed of non-fixative chemicals (at the end of a catheter) by 2-3x. The mechanism is unknown, but could involve heat, which is already known to increase diffusion speed (not ideal, as this would also likely increase tissue degradation), and/or acoustic cavitation, a concept that I don’t fully understand, but which can apparently speed liquid diffusion directly.

Probability of speeding up immersion fixation: 50%. I’d like to see these studies done on more brain tissue and for them to be replicated. However, they are pretty promising.

Downsides: Ultrasound might itself damage cellular morphology and/or biomolecules. However, considering that ultrasound has also been used in vivo, e.g. for opening the blood-brain barrier, it is unlikely to cause too much damage to tissue ex vivo, at least when using the right parameter settings.

6) Convection-enhanced delivery. This technique, which has primarily been used in neurosurgery, involves inserting catheters into brain parenchyma in order to help distribute chemicals such as chemotherapeutic agents. There’s no reason why this couldn’t be leveraged for brain banking as well.

Certain areas of the brain, perhaps the innermost ones that would otherwise take forever to be fixed, could be chosen to have small catheters inserted, allowing local delivery of fixative.

This would allow for an increase in the “effective surface area” of the fixative while minimizing damage due to sectioning and allowing the brain to remain intact.

Probability of speeding up immersion fixation: 99%. It’s hard to see how convection-enhanced delivery of fixatives via catheters inside the brain parenchyma wouldn’t speed up immersion fixation, but since I’m not aware of studies on it, there may be some technical difficulties that I’m not recognizing.

Downsides: Damage to the brain tissue from inserting the catheters, potential build-up of fluid pockets of fixative near the catheter tip that could damage nearby tissue if the infusion rate is too high, increased complexity, cost, and time for the procedure.


A new-to-me concept is the idea of an Everest regression — “controlling for altitude, Everest is room temperature” — wherein you use a regression model to remove a critical property of an entity, and then go on to make inappropriate/confusing/misleading inferences about that entity.

My immediate thought is that this is an excellent analogy for one of my concerns regarding regressing out the effect of age in studies of Alzheimer’s disease (AD). It’s such a tricky topic.

On the one hand, not everyone who reaches advanced age develops the amyloid beta plaques and other features that define the cluster of AD pathology. And there are potentially other changes in brain biology that you will see in advanced aging but not in AD, such as loss of dendritic spines, epigenetic changes, and accumulation of senescent cells.

On the other hand, advanced age is the most important risk factor for AD and explains most of the variance in disease status on a population basis. Arguably, a key part of why some “oldest old” folks do not have AD is protective factors. There have also been suggestions that accelerated aging is part of AD pathophysiology, although, as far as I can tell, the evidence for this remains preliminary. From this perspective, advanced age in AD is like the high altitude of Everest — it’s one of the key associated features.

So if you are trying to find the effects of AD pathophysiology, for example in a study of postmortem human brain samples, should you adjust for the effect of age or not? This is a practical and tricky question without a clear answer. It probably depends on your underlying model of how AD develops in the first place.

So I think it’s worthwhile to be cognizant of the potential hazards of adjusting for age — namely, that you risk inadvertently performing an Everest regression and removing an important chunk of the pathophysiology that you actually want to understand.
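To make that hazard concrete, here is a toy simulation (every number in it is invented): a latent “biological age” drives both AD status and a molecular readout, and adjusting for chronological age absorbs most of the AD-associated signal.

```python
# Toy "Everest regression" simulation; all numbers are invented.
# Latent biological age drives both AD status and a molecular readout.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
age = rng.uniform(60, 95, n)           # chronological age
accel = rng.normal(0, 3, n)            # person-specific aging acceleration
bio_age = age + accel                  # latent biological age
ad = (bio_age > 85).astype(float)      # AD diagnosis as thresholded bio age
readout = 0.5 * bio_age + rng.normal(0, 1, n)  # pathology readout

def ols(cols, y):
    # Ordinary least squares with an intercept; returns [intercept, *betas].
    X = np.column_stack([np.ones(len(y))] + list(cols))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

print(f"AD effect, unadjusted:   {ols([ad], readout)[1]:.2f}")
print(f"AD effect, age-adjusted: {ols([ad, age], readout)[1]:.2f}")
# The age-adjusted AD coefficient is far smaller: much of what we would
# call the AD effect has been regressed away along with age.
```

Whether the smaller adjusted estimate is a bug or a feature depends entirely on your causal model, which is the point.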

One of the most remarkable findings in aging over the past decade is that it’s possible to track the rate of aging based on stereotyped DNA methylation changes across a diverse set of tissues. These are known as epigenetic clocks.
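For intuition about what a clock is, here is a cartoon of how such clocks are typically built, on synthetic data rather than a real methylation dataset: a penalized linear regression from CpG methylation levels onto chronological age.

```python
# Cartoon of building an epigenetic clock (synthetic data, not a real clock):
# penalized regression from CpG methylation fractions to chronological age.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
n_samples, n_cpgs = 300, 1000
age = rng.uniform(20, 90, n_samples)

# Synthetic methylation matrix; only the first 20 CpGs drift with age.
meth = rng.uniform(0, 1, size=(n_samples, n_cpgs))
meth[:, :20] += 0.004 * age[:, None]

clock = ElasticNetCV(cv=5).fit(meth, age)
pred = clock.predict(meth)
print(f"in-sample mean absolute error: {np.abs(pred - age).mean():.1f} years")
```

Real clocks are trained and validated on independent samples, but the mechanics are essentially this.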

But as anyone in the gene expression field knows, changes in the levels of epigenetic marks between groups (like young vs old) are confounded by cell type proportion differences between those groups.

This cell type proportion confound makes it harder to tell whether the changes in DNA methylation are truly a marker of aging within cells or whether they are due to shifts in cell type proportions that are already known to occur during aging, like naive T cell depletion due to thymic atrophy.
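A toy mixing example (numbers invented) shows how a bulk methylation “change” can appear with aging even if no individual cell changes at all:

```python
# Toy illustration of the cell type proportion confound; numbers invented.
# Per-cell-type methylation at some CpG is fixed; only proportions shift.
meth = {"naive_t": 0.20, "other": 0.80}   # methylation fraction per cell type

young = {"naive_t": 0.40, "other": 0.60}  # hypothetical cell proportions
old   = {"naive_t": 0.10, "other": 0.90}  # naive T cells depleted with age

def bulk(proportions):
    # A bulk assay sees the proportion-weighted average across cell types.
    return sum(proportions[ct] * meth[ct] for ct in meth)

print(f"bulk methylation, young: {bulk(young):.2f}")  # 0.56
print(f"bulk methylation, old:   {bulk(old):.2f}")    # 0.74
```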

Single cell epigenetics has the potential to address this problem. By measuring DNA methylation patterns within individual cells, you can compare the epigenetic patterns within the same cell type between groups, and don’t have to worry (as much) about overall changes in cell type proportion [1].

I was interested to see whether anyone has used single cell epigenetic profiling, which has only come out within the past couple of years, to measure whether changes in epigenetic marks can be seen within single cells during aging.

First, let’s back up a second and talk about epigenetics. Two of the major factors that define a cell’s epigenome are its DNA methylation patterns and its histone post-translational modifications.

DNA methylation has been studied a bit in single cells. One study looked at DNA methylation in hepatocytes and didn’t find many differences between old and young cells.

However, as a recent review points out, single cell DNA methylation data are currently limited by the small amount of DNA recoverable from each cell, and it is hard to compare methylation patterns between different cells at the same region of the genome.

On the histone modification front, I found a nice article by Cheung et al 2018, who measured histone post-translational modifications (PTMs) in single cells derived from blood samples. They found that in aging there was increased variability in histone PTMs, both between individuals and between cells.

So, in summary, here are some future directions for this research field that it would be prudent to keep an eye on:

  1. How much of the change in DNA methylation seen in aging is due to changes in relative cell type proportions, as opposed to changes within single cells? If we assume that age-related changes in DNA methylation will be similar to age-related changes in histone PTMs, then Cheung et al.’s results suggest that the changes in DNA methylation are probably due to true changes within single cells during aging.
  2. Is there a way to slow or reverse age-related changes in DNA methylation or histone PTMs, perhaps targeted to stem cell populations? It’s not clear that this can be done in a practical way, especially if age-related changes are driven primarily by an increase in variability/entropy.
  3. If it is possible to slow or reverse DNA methylation or histone PTMs, would that help to slow aging and thus “square the curve” of age-related disease? Aging might be too multifactorial for a single intervention like this to make a major difference, though.

[1]: I add “as much” here because differential expression analysis in single cell data is far from straightforward, and e.g. has the potential to be biased by subtle differences in the distribution of sub-cell types between groups.

One of the common considerations when prescribing haloperidol is whether it will prolong the QT interval. This is a measure of the heart rhythm on the EKG that correlates with one’s risk for serious arrhythmias such as torsades de pointes.
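As an aside, reported values are usually heart-rate corrected (QTc). One standard correction is Bazett’s formula, though I’m not asserting which correction any particular study used:

```latex
% Bazett's correction: QT_c and QT in the same time units, RR in seconds.
QT_c = \frac{QT}{\sqrt{RR}}
```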


Earlier this year, van den Boogaard et al published one of the largest RCTs to compare haloperidol against placebo (700+ people in each group).

Their main finding was that prophylactic haloperidol was not helpful for reducing the rate of delirium or improving mortality.

But one of their most interesting results was the safety data, which showed that their dose of haloperidol had no effect on the QT interval and did not increase rates of extrapyramidal symptoms. Their regimen was haloperidol 2 mg IV every 8 hours (6 mg IV per day), which, given haloperidol’s roughly 60% oral bioavailability, works out to the equivalent of ~10 mg of oral haloperidol per day.

The maximum QT interval was 465 ms in the 2 mg haloperidol group and 463 ms in the placebo group, a non-significant difference with a 95% CI for the difference of -2.0 to 5.0 ms.

Notably, they excluded people with acute neurologic conditions (who may have been more likely to have cardiovascular problems) and people with QTc already > 500 ms, which makes generalization of this finding to those groups a bit tricky.

Since I did the same analysis for antidepressants yesterday, I figured that I would analyze the receptor binding profiles of antipsychotics today. Here is a visualization:

[Figure: heatmap of antipsychotic receptor binding affinities]

And here is a dendrogram based on a clustering of those receptor affinities:

[Figure: dendrogram of antipsychotics clustered by receptor binding affinity]

It turns out to be much harder to see these medications cluster by chemical class the way the antidepressants did, but perhaps you will be able to notice some trends.

Here’s my code to reproduce this.
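For readers who don’t want to follow the link, a minimal sketch of this kind of analysis, with hypothetical pKi values standing in for the real dataset and scipy’s hierarchical clustering standing in for whatever the linked code does, might look like this:

```python
# Minimal sketch: hierarchically cluster drugs by receptor binding profile.
# Drug and receptor names are real; the pKi values are placeholders.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

drugs = ["haloperidol", "clozapine", "risperidone", "quetiapine"]
receptors = ["D2", "5-HT2A", "H1", "M1"]
affinities = np.array([  # rows = drugs, cols = receptors (hypothetical pKi)
    [8.9, 6.0, 5.0, 4.5],
    [6.0, 8.0, 8.5, 7.5],
    [8.5, 9.0, 6.5, 4.5],
    [5.5, 6.5, 8.0, 6.5],
])

# Ward linkage on Euclidean distances between binding profiles.
Z = linkage(affinities, method="ward")
dendrogram(Z, labels=drugs)
plt.ylabel("linkage distance")
plt.tight_layout()
plt.show()
```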

As I’m trying to learn more about antidepressants, I made a visualization of the receptor binding profiles of some of the better characterized ones, and I thought I would post it here.

[Figure: antidepressant receptor binding profiles]

Some of these medications aren’t widely used anymore or were never pursued for development, so they are also a window into the history of psychiatry and what could have been. This is how the meds cluster based on their receptor binding:

[Figure: dendrogram of antidepressants clustered by receptor binding]

One interesting thing about these clusters is that they cut the medications into groups distinguished by their chemical/drug classes:

  • Group #1: TeCAs like mirtazapine, and one TCA, doxepin
  • Group #2: TCAs like amitriptyline, and one TeCA, amoxapine
  • Group #3: SSRIs/SNRIs, like fluoxetine and venlafaxine
  • Group #4: Phenylpiperazines, like trazodone
  • Group #5: NRIs/NDRIs, like atomoxetine and bupropion

Here’s my code to reproduce this.


As one of my manifestations of intellectual contrarianism, I like to collect historical examples of times when a largish group of scientists thought that a complicated theory was the best way to explain a set of facts, but then a more simple explanation turned out to be much better.

I especially like examples of this in neuroscience, where people are wont to postulate complicated theories about the way that we think.

There is perhaps no better example than the debate between the reticular theory of the nervous system and the neuron doctrine.

The reticular theory postulated a form of exceptionalism in the nervous system: that axons and dendrites seen on light microscopy were not attached to cells but were in fact a separate, non-cellular entity, forming their own protoplasmic network.

The neuron doctrine is, at least in hindsight, much simpler, postulating that axons and dendrites are extensions of cells, as is the case elsewhere in biology.

[Figure: Cajal’s drawing of neurons in the chick cerebellum, from Wikipedia]

The reticular theory had many proponents, including Camillo Golgi and Franz Nissl, and lasted from 1840-1935. It’s easy to dismiss it now, but it was a reasonable idea at the time.

Now, though, it’s a good example of how theories postulating that the brain is extremely complicated and works differently from the rest of biology do not have a good track record.