Ketamine part 2: What do in vitro studies tell us about safety?

Ketamine is an anesthetic with growing popularity as an antidepressant. As an antidepressant, it’s quite impressive. When it works, it’s often within hours- a huge improvement over giving a suicidal person a pill that might work 6 weeks from now. And it has a decent success rate in people who have been failed by several other antidepressants and are thus most in need of a new option. 

The initial antidepressant protocols for ketamine called for a handful of uses over a few weeks. Scott Alexander judged this safe, and I’m going to take that as a given for this post. However, new protocols are popping up for indefinite, daily use. Lots of medications are safe or worth it for eight uses, but harmful and dangerous when used chronically. Are these new continuous use protocols safe?

That’s a complicated question that will require several blog posts to cover. For this post, I focused on what academic studies of test tube neurons could tell us about cognitive damage, because I know which organ my readers care about the most. 

Reasons to doubt my results

First off, my credentials are a BA in a different part of biology and a willingness to publish. In any sane world, I would not be a competitive source of analysis. 

My conclusions are based on 6 papers studying neurons in test tubes, 1.5 of which disagreed with the others. In vitro studies have a number of limitations. At best, they test the effect on one type of cell, in a bed of the same type of cell. If any part of the effect of ketamine routes through other cells (e.g. it might hypothetically activate immune cells that damage the focal cell type), in vitro studies will miss that. They will also miss positive interactions- e.g., it looks like ketamine does stress cells out somewhat, in ways your body has protocols to handle. If this effect is dominant, in vitro would make ketamine look more harmful than it is in practice.

And of course, there’s no way to directly translate in vitro effects into real world effects. If ketamine costs you 0.5% of your brain cells, what would that do to you? What would 5% do? In vitro studies don’t tell us that.

All studies involved a single exposure of cells to ketamine (lasting up to 72 hours). If there are problems that come from repeated use rather than total consumption, in vitro can’t show it. However, I consider it far more likely that splitting the same total dose across repeated exposures is much safer than receiving it all at once.

Lastly, in vitro spares the ketamine from any processing done by the liver, which means you are testing only* ketamine and not its byproducts (with the exception of one paper, which also looked at hydroxynorketamine and found positive results).

[*Processing in neurons might not be literally zero, but it is small enough to treat as such for our purposes]

Tl;dr

I will describe each of the papers in detail, but let me give you the highlights.

Of 6 papers (start of paper names in parentheses):

  • 2 found neutral to positive effects at doses higher than you will ever take
    • Highest dose with no ill effect:
      • 2000uM for 24 hours (Ketamine induces…)
      • 500uM for 24 hours (but 100uM had positive effects) (Ketamine causes…)
  • 2 found neutral to positive effects at doses you might achieve, but either didn’t test higher doses or found negative effects
    • 1 uM for 72 hours (Ketamine increases…) but 0.5uM for 24 hours was better
    • 1 uM for 24 h (nothing higher tested) ((2R,6R)…)
  • 1 found no cellular mortality from ketamine on its own, but that it mitigated the effect of certain inflammatory molecules that would otherwise kill cells (ketamine prevents…)
  • 1 found that ketamine killed cells at a dose you might take.
    • Lowest dose tested: 0.39uM for 24 hours (The effects of ketamine…). This is a far lower dose than those at which other papers found positive effects, and I’m not sure why the results disagree.
    • This paper goes out of its way to associate ketamine with date rape in the first paragraph, which is both irrelevant and unfounded, so maybe the authors have a negative bias.
    • On the other hand, it calls cell mortality of up to 30% “relatively low toxic outcomes”, which sounds excessively positive. 

For calibration: I previously estimated that a 100mg troche leads to a median peak concentration of less than 0.46uM, and a total dose of less than 2.8uM*h (to calculate total dose for each of the above papers, just multiply the concentration given by the time given. 1 uM for 24 hours = 24uM*h).  
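If you want to run the same arithmetic yourself, here is a minimal sketch in Python (my own helper names, not from any paper; the 238 ng/ml per µM figure is just ketamine’s molar mass):

# Minimal sketch of the conversions used in this post (my own helper names).
KETAMINE_NG_PER_ML_PER_UM = 238   # ketamine is ~238 g/mol, so 1 uM ~= 238 ng/ml

def ng_per_ml_to_um(ng_per_ml):
    """Convert a ketamine concentration from ng/ml to micromolar (uM)."""
    return ng_per_ml / KETAMINE_NG_PER_ML_PER_UM

def total_exposure_um_h(concentration_um, hours):
    """Total exposure for a constant in vitro bath: concentration x time."""
    return concentration_um * hours

# Part 1 estimates for a 100mg troche (upper bounds for the median person):
troche_peak_um = 0.46      # peak brain concentration, uM
troche_total_um_h = 2.8    # total brain exposure, uM*h

print(total_exposure_um_h(1, 24))   # 1 uM held for 24 hours = 24 uM*h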

By positive effects, I mean one of two things: ketamine-exposed cells grew bigger and grew more connections with other cells; or, ketamine-exposed test tubes had more cells than their controls, which could mean cells multiplied, or that ketamine slowed cell death (one paper examined these separately, and the answer seemed to be “both.”). This appears to happen because ketamine stimulates multiple cell-growth triggers, such as upregulating BDNF and the mTOR pathway. 

The primary negative effect is cell death, which stems from an increase in reactive oxygen species (you know how you’re supposed to eat blueberries because they contain antioxidants? ROS is the problem antioxidants solve). Unclear if there are other pathways to damage.

It doesn’t surprise me at all that two contradictory effects are in play at the same time. In fact, it wouldn’t surprise me if the positive effects were triggered by the negative- it’s not unusual in biology for small amounts of stress to trigger coping mechanisms that do more good than the stress did harm. For example, exercise also produces reactive oxygen species. Approximately everything real that helps with “detoxification” does so by annoying the liver into producing helpful enzymes.

The most actionable thing this post could do is give you the “safe” dose of ketamine. Unfortunately, it’s hard to translate the research to a practical dose. There are several factors that might matter for the damage done by a drug:

  • The peak concentration (generally measured in µM, i.e. µmol/L, or in ng/ml; for ketamine, 1 µM ≈ 238 ng/ml)
  • The total exposure of cells to the drug, measured in concentration*time
  • How much your body is able to repair the damage from a drug. This will generally be higher when the total exposure is divvied up into many small exposures instead of one large one.
    • Alas, every study in existence uses a single large dose. Even if a study had used divided doses, it probably wouldn’t tell us much, because neurons in isolation are missing some antioxidative tools and the ability to replenish what they do have. So I’m left to guess how much safer divided doses are, relative to the single large dose used in every paper.

In an ideal world, the test tube neurons would be given ketamine in a way that mimics both the peak concentration and the total exposure. No one is even trying to make this happen. For most ways you’d receive ketamine for depression, you get an immediate burst followed by a slow decline. But for in vitro work, they dump a whole bunch of ketamine in the tube and let it sit there. Without the liver to break the ketamine down, the area under the curve is basically a rectangle. This makes it absolutely impossible for a test tube to replicate both the peak concentration and the total dose a human would be exposed to. The curves just look different. 

[Statistical analysis by MS Paint]

In practice, and excluding the one negative paper, an antidepressant dose is rarely if ever going to reach the peak dose given to the test tube. You’re also not going to have anywhere near that long an exposure. My SUPER ASS PULLED, COMPLETELY UNCREDENTIALED, IN-VITRO ONLY, NOT MEDICAL ADVICE guess is that unless you are unlucky or weigh very little, a 100mg troche of ketamine will not do more oxidative damage than your body is able to repair in a few days (this excludes the risk of damage from byproducts).

There’s a separate concern about tolerance and acclimation, which none of these papers looked at, that I can’t speak to. 

Papers

Warning: most of these papers used bad statistics and they should feel bad. Generally, when you have multiple treatment arms in a study that receive the same substance at different doses, you do a dose-response analysis- in essence, looking for a function that takes the dose as an input and outputs an effect. This lets you detect trends and see significance earlier. 

What these papers did instead was treat each dose as if it were a separate substance and evaluate their effects relative to the control individually. This faces a higher burden to reach statistical significance. 

Because of this, if there’s an obvious trend in results, and the difference from control is significant at the higher doses, I report each individual result as if it’s statistically significant. I haven’t redone the math and don’t know for sure what the statistical significance of each effect is.
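To make the distinction concrete, here is a toy sketch on simulated viability data (nothing here comes from the actual papers): one pooled dose-response regression versus the one-dose-at-a-time comparisons these papers ran.

# Toy comparison of the two analyses on simulated data (not from any paper).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
doses = [0, 1, 5, 10, 50]          # concentrations in uM; 0 = control
n_per_arm = 6                      # wells per arm
# Simulate viability with a small true decline per uM plus noise
viability = {d: 100 - 0.2 * d + rng.normal(0, 5, n_per_arm) for d in doses}

# Dose-response analysis: one regression across every arm at once
x = np.repeat(doses, n_per_arm)
y = np.concatenate([viability[d] for d in doses])
trend = stats.linregress(x, y)
print(f"trend: slope {trend.slope:.2f}, p = {trend.pvalue:.4f}")

# What these papers did: compare each dose to control separately
for d in doses[1:]:
    p = stats.ttest_ind(viability[d], viability[0]).pvalue
    print(f"{d} uM vs control: p = {p:.3f}")

The pooled regression uses every well to estimate a single trend, which is why it typically reaches significance with less data than the per-dose comparisons.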

Ketamine Increases Proliferation of Human iPSC-Derived Neuronal Progenitor Cells via Insulin-Like Growth Factor 2 and Independent of the NMDA Receptor

Dose: 0.5uM to 10 uM

Exposure time: 24-72 hours

Interval between end of exposure and collection of final metric: 72 hours

As you can see in the bar chart, ketamine was either neutral or positive. However, less ketamine was more beneficial- there was more growth at the lowest dose of ketamine than the highest (which was indistinguishable from control), and more growth after a 24 hour exposure than a 72 hour exposure. 

To achieve the same peak dosages, you’d need ~100 – 2000mg sublingual ketamine. To achieve the same total exposure, you’d need 430 doses of 100mg ketamine at the low end (for 0.5uM for 24 hours) to 24,000 doses at the high end (10uM for 72 hours) (assuming linearity of dose and concentration, which is probably false).

The following isn’t relevant, but it is interesting: researchers also applied ketamine to some standard laboratory cells (the famous HeLa line, which is descended from cervical cancer) and found it did not speed up cell proliferation- meaning ketamine isn’t an indiscriminate growth factor, but targets certain kinds of cells, including (some?) neurons.

Ketamine Induces Toxicity in Human Neurons Differentiated from Embryonic Stem Cells via Mitochondrial Apoptosis Pathway

Dose: 20uM to 4000 uM

Exposure time: 6-24 hours

Interval between end of exposure and collection of final metric: 72 hours

Don’t let the title scare you- the toxicity was induced at truly superhuman doses (24 hours at 100x what they call the anesthetic dose, which is already 3x what another paper considered to be the anesthetic dose).

Calibration: the lowest dose of ketamine (20 uM) given corresponds to 40x my estimate for median peak concentration after a 100mg troche. 

Chart crimes: that X-axis is neither linear nor any other sensible function. 

Chart crimes aside, I’m in love with this graph. By testing a multitude of doses at three different lengths of exposure, it demonstrates 6 things:

  • Total exposure matters- a constant concentration of 300uM showed negative effects at 12 hours but not 6. 
  • After a threshold is reached, the negative effect of total dose is linear or mildly sublinear. 
  • Peak dosage matters separately from total dose. If that weren’t true, doubling the dose would halve the time it took to show toxicity.
  • Doubling exposure time is roughly equivalent to increasing dosage by 500uM
  • At lower doses, longer exposure is correlated with greater cell survival/proliferation. But at higher doses, longer exposure is correlated with lower viability. 
  • The lowest total exposure required to see negative effects is 21,000uM*h, which would require ~17,500 100mg troches- or one every day for 48 years. Before accounting for any repair mechanisms.  

The paper spends a great deal of time asking why ketamine is toxic at high doses, focusing on reactive oxygen species (ROS) (the thing that blueberries fight). This suggests that your body’s antioxidant system likely reduces the damage compared to what we see in test tubes. Unfortunately I don’t know how to translate the dosage of Trolox, their antioxidant, to blueberries.

(2R,6R)-Hydroxynorketamine promotes dendrite outgrowth in human inducible pluripotent stem cell-derived neurons through AMPA receptor with timing and exposure compatible with ketamine infusion pharmacokinetics in humans

Dose: 1 uM

Exposure time: 1-6 hours

Interval between end of exposure and collection of final metric: 60 days (!)

Outcome: synaptic growth

Most of the papers I looked at created their neurons from a stem cell line and then briefly aged them. This paper stands out for aging the cells for a full 60 days before exposing them to 1 uM ketamine (they also tried hydroxynorketamine, a byproduct of ketamine metabolism. I’ll be ignoring this, but if you see “HNK” on the graph, that’s what it means).

Here we see that ketamine and its derivative significantly increased the number and length of dendrites (the branched projections neurons use to receive connections from other neurons). It’s possible to have too much of this, but in the moderate amounts shown this should lead to an easier time learning and updating patterns.

Ketamine Causes Mitochondrial Dysfunction in Human Induced Pluripotent Stem Cell-Derived Neurons

Dose: 20-500 uM

Exposure time: 6-24 hours

Interval between end of exposure and collection of final metric: 0 hours?

This is another paper with a scary title that is not borne out by its graphs. It did find evidence of cellular stress (although this is only unequivocal at higher doses), but cell viability was actually higher at lower doses and unchanged at higher doses, and by lower dose I mean “still way more than you’re going to take for depression”.

20uM = a higher peak than you will ever experience even under anesthetic. 20uM for 6 hours has the same cumulative exposure as 42 100mg troches. 100uM for 24 hours has the same cumulative exposure as 142 100mg troches (for the median person).

Caspase 3/7 is a marker of apoptosis (programmed cell death), and ROS luminescence is a measure of oxidative stress (the thing blueberries fight). Cell viability is what it sounds like. There was no statistically significant difference in viability, and eyeballing the graph it looks like viability increases with dosage until at least 100uM.

Ketamine Prevents Inflammation-Induced Reduction of Human Hippocampal Neurogenesis via Inhibiting the Production of Neurotoxic Metabolites of the Kynurenine Pathway

Dose: 0.4 uM

Exposure time: 72 hours

Interval between end of exposure and collection of final metric: 7 days

The researchers exposed cells to neurotoxic cytokines, two forms of ketamine, and two other antidepressants, alone and in combination. The dose of ketamine was 400nM or 0.4uM, which is roughly the peak concentration you’d get from one 100mg sublingual troche.

These graphs are not great. 

DCX is a signal of new cell growth (good), CC3 is a sign of cell death (bad), and Map2 is a marker for mature neurons (good). In general, whatever change you see between the control and IL-* (the second entry on the X axis) is bad, and you want treatments to go the other way. What we see is that the ketamine treatments are about equivalent to the control.

The effects of ketamine on viability, primary DNA damage, and oxidative stress parameters in HepG2 and SH-SY5Y cells (the negative one)

Dose: 0.39-100 uM

Exposure time: 24 hours

Interval between end of exposure and collection of final metric: 0 hours?

(Pink cells are neurons derived from a neurological cancer cell line, Blue cells are liver cells derived from a liver cancer cell line. The red boxes correspond to their estimate of the painkilling, anesthetic, and drug abuse levels of concentration. All conditions were exposed for 24 hours. & means P<0.05; #, P<0.01; $, P<0.001; *, P<0.0001)

This paper found a 20% drop in neuron viability for anesthetic doses of ketamine, and 5% for a painkilling dose (and a milder loss to liver cells). This is compared to an untreated control that lost <1% of cells. They describe this result as “low cytotoxicity” for ketamine. I am confused by this and wonder if they had some pressure to come up with a positive result. On the other hand, the paper’s opening paragraph contains an out-of-left-field accusation that ketamine is a common ingredient in date rape pills, which is irrelevant, makes no sense, and is given no passable justification*, which makes me think at least one author thinks poorly of the chemical. So perhaps I’m merely showing my lack of subject matter expertise, and 20% losses in vitro don’t indicate anything worrisome. 

[*They do give a citation, but neither that paper nor the ones it cites offer any reason to believe ketamine is frequently used in sexual assault, just passing mentions. In the anti column we have the facts that ketamine tastes terrible and is poorly absorbed orally, requiring large doses to incapacitate someone. It’s a bad choice to give surreptitiously. Although I wouldn’t doubt that taking ketamine voluntarily makes one more vulnerable.]

Why In Vitro Studies?

Given all their limitations, why did I focus exclusively on in vitro studies? Well, when it comes to studying drug use, you have four choices:

  1. Humans given a controlled dose in a laboratory.
    1. All the studies I found were short-term, often only single use, and measured brain damage with cognitive tests. Combine with a small sample size and you have a measurement apparatus that can only catch very large problems. It’s nice to know those don’t happen (quickly), but I also care about small problems that accumulate over time.
  2. Humans who took large amounts of a substance for a long time on their own initiative.
    1. This group starts out suffering from selection effects and then compounds the problem by applying their can-do attitude towards a wide variety of recreational pharmaceuticals, making it impossible to untangle what damage came from ketamine vs opiates. In the particular case of ketamine, they also do a lot of ketamine, as much as 1-4 grams per day (crudely equivalent to 11-44 uM*h if they take it nasally, and 2-4x that if injected)
    2. I initially thought that contamination of street drugs with harmful substances would also be a big deal. However, a friend directed me to DrugsData.org, a service that until 2024 accepted samples of street drugs and published their chemical makeup. Ketamine was occasionally used as an adulterant, but substances sold under the name ketamine rarely contained much beyond actual ketamine and its byproducts.
  3. Animal studies. I initially dismissed these after I learned that ketamine was not used in isolation in rats and mice (the subjects of almost every animal paper), only in combination with other drugs. However, while writing this up, I learned that this may be due to a difference in use case, rather than a difference in response to ketamine. But when I looked at the available literature there were 6 papers, every one of which gave the rats at least an order of magnitude more ketamine than a person would ever take for depression.
    1. For the curious, here’s why ketamine usage differs in animals and humans:
      1. Ketamine is cheap, both as a substance and because it doesn’t require an anesthesiologist. In some jurisdictions, it doesn’t even require a doctor. Animal work is more cost-conscious and less outcome-conscious, so it’s tilted towards the cheaper anesthetic. 
      2. New anesthetics for humans aren’t necessarily tested in animals, so veterinarians have fewer options. 
      3. Doctors are very concerned that patients not get addicted to their medications, and ketamine can be enjoyable (although not physically addictive). Veterinarians are secure that even if your cat trips balls and spends the next six months jonesing, she will not have the power to do anything about it.

        1. Pictured: a cat whose dealer won’t return her texts.
      4. Ketamine is rare in that it acts as an anesthetic but not a muscle relaxant. For most surgeries, you want relaxed muscles, so you either combine the ketamine with a muscle relaxant or use another drug entirely. However there are some patients for whom relaxing muscles is dangerous (generally those with impaired breathing or blood pressure), in which case ketamine is your best option.
      5. Ketamine is unusually well suited for emergency use, because it acts quickly, doesn’t require an anesthesiologist, and can be delivered via shot as well as IV. In those emergencies, you’re not worried about what it can’t do. 
      6. All this adds up to a very different usage profile for ketamine for animals vs. humans.

Conclusion

My goal was specifically to examine chronic, low-dose usage of ketamine. Instead, I followed the streetlight effect to one-off prolonged megadoses. I’m delighted at how safe ketamine looks in vitro, but of course, that’s not even in mice. 

Thanks

Thanks to the Progress Studies Blog Building Initiative and everyone I’ve talked to for the last three months for feedback on this post. Thanks to my Patreon supporters and the CoFoundation Fellowship for their financial support of my work.

This is time consuming work, so if you’d like to support it please check out my Patreon or donate tax-deductibly through Lightcone Infrastructure (mark “for Ketamine research”, minimum donation size $100). 

Ketamine Part 1: Dosing

I’m currently investigating ketamine, with the goal of assessing the risks of chronic use. For reasons I will get into in the real post, this is going to rely mostly on in vitro data, at least for neural damage, which means I need a way to translate real-world dosages into the concentration of ketamine in the brain. This post gets into those details so I can use it as a reference post for the one you actually want to read, and invite correction early from the three of you who read it all the way through.

If you’re excited for the main post (or for some inexplicable reason, this post), you can help me out by tax-deductibly donating to support it, or joining my (not tax deductible) Patreon.

Cliff Notes

What I care about for purposes of the next post is the concentration of ketamine in the brain. Unfortunately, ethics committees really hate when you set up a tap in people’s skulls and draw a fresh sample every five minutes. The best you can do is a lumbar puncture, which draws cerebrospinal fluid (CSF) from the spine. Unfortunately, spinal fluid and cranial fluid concentrations are not interchangeable. Tentatively, brain concentrations of most substances in this class tend to be lower, so we can still use CSF concentrations as a rough upper bound. 

Most long-term, medically supervised ketamine users use a lozenge or nasal spray. However, the only paper that measures ketamine in the cerebrospinal fluid delivered the ketamine by IV. Therefore, to have any hope of contextualizing the in vitro data, I need to translate from ketamine dose (nasal or lozenge, measured in either straight milligrams or milligrams per kilogram of body weight) -> plasma concentration -> CSF concentration (both measured in nano- or micrograms of ketamine per milliliter of fluid).

There are two metrics we might consider when comparing doses of drugs. The first is peak concentration, the highest concentration of ketamine your brain experiences at any point. The second is the total cumulative exposure (AKA area under the curve or AUC). These unfortunately have pretty different translations from IV to sublingual doses, and I saw no evidence about which of the two is more important. I report both just in case.

Assuming (incorrectly) a linear response, 1 mg of ketamine taken sublingually (under the tongue) or buccally (in the cheek) leads to a mean peak concentration in plasma (blood) of 0.83 – 2.8 ng/ml and a mean total dose (area under curve, AUC) of 1.8-7.4 ng*h/ml.

1 mg of ketamine taken via a nasal spray leads to a mean peak in plasma of 1.2 ng/ml and a mean total dose of 3 ng*h/ml (based on a single study).

Peak CSF concentration was 37% of plasma concentration, and came 80 minutes later. Total CSF dose was 92% of total plasma dose, indicating almost total diffusion into CSF, eventually. Both findings are from a single study.  

Combining these two results, 1mg sublingual ketamine leads to a median measured peak brain concentration of < 1.1 ng/ml and a total brain dose of < 6.7ng*h/ml. 

On the other hand, 1mg nasal ketamine leads to a median peak concentration of < 0.45 ng/ml and median measured total brain dose of < 2.7ng*h/ml.
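For anyone who wants to reproduce the chain, here is a minimal sketch (my own code; the per-mg plasma values and CSF ratios come from the papers below, and the headline numbers above involve some rounding, so expect small mismatches):

# Sketch of the dose -> plasma -> CSF chain (my own code; per-mg plasma values
# and CSF ratios are taken from the papers summarized below).
def brain_estimates(dose_mg, peak_plasma_per_mg, auc_plasma_per_mg,
                    csf_peak_ratio=0.37, csf_auc_ratio=0.92):
    """Return (peak ng/ml, total ng*h/ml) in CSF, used as a rough brain upper bound."""
    peak_csf = dose_mg * peak_plasma_per_mg * csf_peak_ratio
    auc_csf = dose_mg * auc_plasma_per_mg * csf_auc_ratio
    return peak_csf, auc_csf

# 100mg sublingual troche, using the high end of the plasma range
# (2.8 ng/ml and 7.4 ng*h/ml per mg); assumes linear scaling with dose.
print(brain_estimates(100, 2.8, 7.4))   # roughly (104 ng/ml, 681 ng*h/ml)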

The rest of the post goes more deeply into the findings and methodology of individual papers. 

Context and Caveats

Like many molecules, ketamine exists in two forms that are mirror images of each other (called enantiomers). One version is sold under the name esketamine; the other is not commercially available on its own. Some papers administered only esketamine, some both separately, and some both together (a racemic mix). The pharmacokinetics of these aren’t different enough to be worth distinguishing for my purposes.

My list of papers is cribbed from Undermind.AI. I occasionally found papers via references, but when I checked those papers were always on Undermind’s list as well. I also looked on Perplexity, but it found only a subset of the papers on Undermind (Perplexity has tragically enshittified over the last few months). 

I treat the translation of dose to concentration in the body as linear. This is almost certainly false, but more likely to be an overestimate. 

I did not even attempt to combine results in some sort of weighted fashion, which would have incorrectly combined subtly different mechanisms of delivery. The numbers you see above are the range of values I saw. 

“Peak” concentration always means “among times samples were taken”, not actual peak.

I mentioned that I assume a linear dose-concentration curve to ketamine (meaning that if you double the amount of ketamine you take, you get double the plasma or CSF concentration). Linear sounds like a nice safe assumption, but it can go wrong in both directions. Your body may absorb a substance less efficiently as you take more, leading to an asymptotic curve. Or your body may only be able to clear so much of a substance at a time, so an increased dose has an outsized impact on concentration. In the case of ketamine there’s very mild evidence that the curve is sublinear, which makes treating it as linear an overestimate. That’s the direction I want to err on, so I went with linear. 
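Here is a toy illustration of the asymptotic case, with made-up parameters: if absorption saturates and you extrapolate linearly from a low-dose study, you overestimate the concentration at higher doses.

# Toy model (made-up parameters, not fitted to any data) of saturating absorption.
def saturating_peak(dose_mg, ceiling=400.0, half_saturation_mg=150.0):
    # Michaelis-Menten-style curve: roughly linear at low doses, flattening later
    return ceiling * dose_mg / (half_saturation_mg + dose_mg)

measured_dose = 50                                   # pretend our study used 50mg
per_mg = saturating_peak(measured_dose) / measured_dose   # linear extrapolation slope

for dose in [50, 100, 200, 400]:
    print(dose, "mg:", round(per_mg * dose), "(linear) vs",
          round(saturating_peak(dose)), "(saturating)")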

Translating nasal/sublingual doses to plasma concentration

To find out what dose translates to what plasma concentration, we need to give subjects ketamine through whatever route of administration, take repeated blood samples, and measure the concentration of ketamine in that blood. My ideal paper had the following traits:

  1. Studied adult humans.
  2. Delivered ketamine nasally, sublingually, or buccally.
  3. Sampled plasma at least every 10 minutes for the first hour, ideally more often. Only a handful of papers met this criterion, so I had to give a little on it.
  4. Tracked concentration and total dose received. 

Combining the results below, and assuming (incorrectly) a linear response I get the following results:

  •  1 mg of ketamine taken sublingually or buccally leads to a mean peak concentration in plasma (blood) of 0.83 – 2.8 ng/ml and a mean total dose (area under curve, AUC) of 1.8-7.4 ng*h/ml
  • 1 mg of ketamine taken nasally leads to a mean peak in plasma of 1.2 ng/ml and a mean total dose of 3 ng*h/ml (based on a single study).

S-Ketamine Oral Thin Film—Part 1: Population Pharmacokinetics of S-Ketamine, S-Norketamine and S-Hydroxynorketamine

This paper was definitely selling something and that thing is an “oral thin film” delivery mechanism. 

This study design is a little complicated. N=15 people were given one and two sublingual films (50mg S-ketamine each) on two separate occasions (so everyone received 150mg total, over two doses), and ordered not to swallow for 10 minutes (I have my doubts). Another 5 were given the same doses, but buccally (in the cheek). Buccal and sublingual had indistinguishable pharmacokinetics (at their tiny sample sizes) so we’ll treat them as interchangeable from now on. Subjects had blood samples taken at t = 0 (= oral thin film placement), 5, 10, 20, 40, 60, 90, 120, 180, 240, 300, 360 minutes, after which they were given 20mg IV ketamine over 20 minutes, with new samples taken at 2, 4, 10, 15, 20, 30, 40, 60, 75, 90, and 120 min.

Figure 1. Mean measured plasma concentrations following application of the 50 and 100 mg S-ketamine oral thin film (OTF): (A) S-ketamine, (B) S-norketamine, and (C) S-hydroxynorketamine. Individual concentrations are given in panels (D–F) for the 50 mg oral thin film and (G–I) for the 100 mg oral thin film. In black the results of placement below the tongue, in red buccal placement. The OTF was administered at t = 0 min for 10 min (green bars); at t = 360 min, an intravenous dose of 20 mg S-ketamine was administered over 20 min (light orange bars).

There are a few key points to take from this graph. First, sublingual (under the tongue) and buccal (between cheek and gum) are indistinguishable, at least at this sample size. Second, the 100mg sublingual dose doesn’t have anywhere near double the peak concentration or AUC of the 50mg dose, although the difference is not statistically significant. You can see the exact numbers in table 1.

(CMAX = peak concentration, Tmax = time to peak concentration, S-norketamine = a psychoactive metabolite of ketamine, S-hydroxynorketamine=an inactive metabolite of ketamine)

Given that, we have, for peak concentration

 50 mg = 96ng/ml -> 1mg = 1.9ng/ml

100 mg = 144ng/ml -> 1mg = 1.4ng/ml

Which makes a linear dose-concentration relationship look unlikely, although at these sample sizes the difference isn’t significant. 

For AUC:

50mg = 8362 ng*min/ml     -> 1mg = 170 ng*min/ml

100mg = 13,347 ng*min/ml -> 1mg =  130ng*min/ml
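Spelling out the per-mg arithmetic (assuming linear scaling with dose, as above):

# Per-mg scaling for the oral thin film numbers above (assumes linearity).
for dose_mg, cmax, auc in [(50, 96, 8362), (100, 144, 13347)]:
    print(f"{dose_mg}mg: {cmax / dose_mg:.1f} ng/ml per mg, "
          f"{auc / dose_mg:.0f} ng*min/ml per mg")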

Plasma concentration profiles of ketamine and norketamine after administration of various ketamine preparations to healthy Japanese volunteers

This is my favorite paper, looking at no fewer than 5 different methods of delivery. 

This isn’t the primary take-home, but because they administered racemic ketamine and measured the two enantiomers (S- and R-ketamine) separately, we can see that their pharmacokinetics are close enough that I can ignore the difference between them, and use S-ketamine data to inform estimates of racemic ketamine. For the results below I averaged the S- and R- results together.

The only routes of administration I care about from this list are sublingual tablet and nasal spray.

For peak concentration, we see:

50 mg sublingual = (42.6+40.4)/2 ng/ml -> 1 mg = 0.83ng/ml

25mg nasal spray = (29.4+29.3)/2 ng/ml  -> 1 mg = 1.2 ng/ml

For area under the curve, we see:

50 mg sublingual = (108.8+110.5)/2 ng*h/ml -> 1 mg = 2.2ng*h/ml = 130 ng*min/ml

25mg nasal spray = (76.8+72.7)/2 ng*h/ml -> 1 mg = 3 ng*h/ml = 180 ng*min/ml

The absolute bioavailability of racemic ketamine from a novel sublingual formulation

You know the drill: 8 subjects were given 25mg sublingual or 10mg IV ketamine.

This paper uses the geometric mean (the nth root of the product of n numbers) rather than the arithmetic mean (the sum of the numbers divided by their count), so it is not directly comparable to the other studies. But roughly, for peak concentration (Cmax) of the sublingual dose:

25 mg ketamine  = 71.1 ng/ml -> 1mg = 2.8 ng/ml

And for total dose (AUC)

25 mg ketamine = 184.6 ng*h/ml -> 1 mg ketamine = 7.4 ng*h/ml = 443 ng*min/ml
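A quick illustration of why the two kinds of mean diverge (made-up concentrations, not the paper’s data): a high outlier pulls the arithmetic mean up much more than the geometric mean, and the geometric mean is never larger than the arithmetic one.

# Geometric vs arithmetic mean on made-up concentrations (ng/ml).
import math

samples = [40.0, 60.0, 200.0]                 # one high value skews the data
arithmetic = sum(samples) / len(samples)      # 100.0
geometric = math.prod(samples) ** (1 / len(samples))   # ~78

print(round(arithmetic, 1), round(geometric, 1))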

Why I’m ignoring ketamine’s chirality

Combined Recirculatory-compartmental Population Pharmacokinetic Modeling of Arterial and Venous Plasma S(+) and R(–) Ketamine Concentrations

10 healthy male subjects aged 24 to 62 yr, weighing 68 to 92 kg, were administered approximately 7 mg of S(+) or R(–) ketamine via a 30-min constant rate IV infusion on two occasions at least 3 days apart. Radial artery and arm vein samples were drawn at 0, 5, 10, 15, 20, 25, 30, 40, 50, 60, 120, 180, and 300 min after the start of the S(+) ketamine infusion and at 0, 5, 10, 15, 20, 25, 30, 36, 43, 50, 180, 300, and 420 min after the start of the R(–) ketamine infusion.

Red = arterial blood, Blue = venous blood

As you can see, arterial and venous blood are quite different, but S- and R- ketamine are close enough for government work. 

Translating plasma concentrations to cerebrospinal fluid concentrations

Cerebrospinal fluid exploratory proteomics and ketamine metabolite pharmacokinetics in human volunteers after ketamine infusion

These heroes gave a dose of ketamine via IV, then monitored both plasma concentration and cerebrospinal fluid. 

Peak CSF concentration was 37% of plasma concentration, and came 80 minutes later.

Total CSF dose was 92% of total plasma dose, indicating almost total diffusion into CSF, eventually. 

Papers I Want to Complain About

Bioavailability, Pharmacokinetics, and Analgesic Activity of Ketamine in Humans

I mention this paper only to explain why I am mad at it. This 1981 study took beautiful measurements of pain sensitivity as well as plasma concentration of ketamine, and then didn’t publish any of them. They used only intramuscular injection and oral solution, which doesn’t allow me to translate to the more standard IV concentration. Also, oral is a terrible route for ketamine: your body processes most of it before it hits your system.

Population pharmacokinetics of S-ketamine and norketamine in healthy volunteers after intravenous and oral dosing

For reasons I don’t understand, this paper studies IV alone versus IV + oral ketamine together. Also not useful for our purposes, except for establishing that ketamine taken orally (as in, a pill you swallow and absorb through the digestive tract) isn’t very good.

Development of a sublingual/oral formulation of ketamine for use in neuropathic pain

This paper measured concentration in arterial blood, where every other paper used venous blood. One paper that measured both showed that they were shockingly different. I could attempt to translate from arterial to venous concentrations, but the paper also uses an unpopular delivery mechanism so I haven’t bothered. 

Acknowledgements

Thanks to R. Craig Van Nostrand for statistical and paper-reading help, Anonymous Weirdo for many discussions on pharmacokinetics, Ozy Brennan and Justis Mills for editing, and my Patreon patrons and Timothy Telleen-Lawton for financial support. 

Journal of Null Results: EZMelt sublingual vitamins

4 months ago I described my success curing my hypothyroidism by gargling liquid iodine, when iodine pills had failed. The good news is that the cure has held– my thyroid numbers continue to be in the desirable range. 

The bad news is I’ve failed to replicate this success with a multivitamin. Shortly after the thyroid post I was handed a perfect opportunity to put sublingual vitamins to the test when my doctor took me off all my oral vitamins to give my gut a rest. I had already started on EZMelt Multivitamin + Iron (2x standard dosing every other day, because I absorb iron better that way), but now we’d removed all potential assistance (“except food, right?” no. My gut has never been good at extracting vitamins from food except right after I discovered Boswellia. Mold Winter rolled back those gains).

I recently got my nutrition test results back and they suck. I can’t prove I wouldn’t have been even worse off without these vitamins, but there’s a profound absence of positive evidence. However the issue could just be these particular vitamins; after a break I’m now trying Feroglobin, which is a thick liquid iron supplement with a smattering of other vitamins. It’s not intended to be taken sublingually but I don’t live by their rules, man.

Between getting the results and publishing this post I made a market on Manifold, asking whether the EZMelts would work. The market was trading just under 50% “no, not helpful” for most of the week, but in the final hours fluctuated between 30-40% “no”. Seems like a very mild victory for prediction markets. 

I’ve created a similar market for Feroglobin here. This run is not going to be quite as clean- my doctor put me back on oral vitamins, plus I finally found a place that does IV nutrition. So this will be more of a best guess, probably resolved as a probability rather than flat Yes/No. 

Feedback loops for exercise (VO2Max)

The perfect exercise doesn’t exist. The good-enough exercise is anything you do regularly without injuring yourself. But maybe you want more than good enough. One place you could look for insight is studies on how 20 college sophomores responded to a particular 4 week exercise program, but you will be looking for a long time. What you really need are metrics that help you fine tune your own exercise program.

VO2max (a measure of how hard you are capable of performing cardio) is a promising metric for fine tuning your workout plan. It is meaningful (1 additional point in VO2max, which is 20 to 35% of a standard deviation in the unathletic, is correlated with 10% lower annual all-cause mortality), responsive (studies find exercise newbies can see gains in 6 weeks), and easy to approximate (using two numbers from your fitbit). 

In this post I’m going to cover the basics of VO2max, why I estimate such a high return to improvements, and what kind of exercise can raise it the fastest.

What is VO2max?

A person’s VO₂ max is the maximum volume of oxygen they can consume in one minute. Higher VO2max lets you cardio more intensely, and is correlated with better health and longer lifespan (we’ll quantify this later). This is 100% of what you need to know, the rest is thrown in for fun. 

VO2max is measured in ml O2/kg bodyweight/minute. It is sometimes given in Metabolic Equivalents (METs). 1 MET = 3.5ml O2/kg of bodyweight/minute. This is approximately your metabolic expenditure while sitting still. 

What physically causes increase in VO2max? It’s a mix of many factors:

  1. Strengthened heart allows you to pump blood faster
  2. Improved lung capacity, which breaks down to
    1. Expansion of the chest cavity, in part due to strengthening of the diaphragm and rib muscles.
    2. Recruitment of new alveoli (the features in your lungs that exchange carbon dioxide and oxygen) (source)
    3. Improved lung elasticity
    4. Production of a surfactant that maintains alveoli in fighting form
  3. Increased mitochondrial activity allows cells (especially muscle cells) to use more oxygen
  4. More blood to carry the oxygen
  5. New capillaries grow to deliver more blood to your muscles

What can induce these changes? Exercise, especially high intensity interval training. We’ll talk more about that in a bit. 

Why do I care about VO2Max?

Obligatory boring part: VO2max is a crude measurement whose impact depends on many factors blah blah blah blah

Shocking headline: 1 MET (aka 3.5 points of VO2max) = 10% reduction in relative risk of all-cause mortality. So if your normal risk of death is 1%, gaining one MET would lower it to 0.9%.

The catch: that meta-analysis averaged together results from multiple studies of very different durations. “That’s okay, they could correct for that, at least crudely” you might be saying to yourself, in which case, congratulations on being better at meta-analysis than these authors, who AFAICT dumped every study into a bag and shook it. 

More realistic, yet more shocking headline: an increase of 1 point in VO2max is correlated with 10% lower annual all-cause mortality. 

This is based on the largest study in the meta-analysis, Kokkinos et al. Important facts from this study include: 

  1. In male veterans, going from low fitness to moderate fitness (defined below) lowered risk of dying by 40%. This was shockingly consistent across age groups, and whether you considered a 5 year or 10 year period. Getting to a high fitness level dropped their mortality rate by another 30%.
  2. I too am wondering why the % change in risk of death doesn’t get larger when you consider a longer period of time. 
    1. The middle (“threshold”) ranges for the 4 age categories were 8 to 9, 7 to 8, 6 to 7, and 5 to 6 METs for <50, 50 to 59, 60 to 69, and ≥70 years, respectively. Another source gives average METs for those categories (substituting 40-50 for <50 and 70-80 for >70) as 10, 8.6, 7.3, and 6.1, so the threshold starts 1-2 points lower than average and they converge by your 70s.
  3. Low fitness is the range between the floor of the threshold and 2 METs lower than that; moderate fitness is the range between the ceiling of the threshold and 2 METs above it. If distribution within buckets were uniform, we could treat moving from low fitness to moderate fitness as an increase of 2 METs. If you assume a normal distribution centered around the threshold, it’s somewhat smaller than that. I went with the latter assumption, but not very rigorously.

Caveats

VO2max is measured per kg of total body weight, not lean weight. That means that if you lost 10% of your bodyweight via liposuction but otherwise stayed exactly the same, your VO2max would rise by a factor of 1/0.9 (about 11%). This makes VO2max a partial proxy for weight. However the relationship between weight and health, and weight and exercise, is much more complicated than is typically acknowledged.

VO2max is also a proxy for exercise. Right now we don’t have enough information to say whether increased VO2max, or increased alveoli surfactant, increases lifespan or is merely downstream of exercise that does some other helpful thing.

I’m going to ignore both of these for now, but when you’re doing your own math you should not add effects from potential weight loss, because that might be double counting.

Exercise science sucks. Lifespan is affected by 1000 different factors, none of which scientists can properly control. Lots of researchers have their bottom line already written.

While we’re at it, I should note that I haven’t done deep investigations on any other metrics. Very early in the process I considered others, and VO2max won due to a combination of being promising and easy to measure at home. I don’t have the information to say if VO2max is more or less accurate than other metrics.

How can I measure my VO2Max?

(note: this section is based primarily off of my client’s research, not mine)

The official way involves a mask and measuring equipment and 20 minutes of excruciatingly intense exercise. This is technically the most accurate, but only if it’s set up properly, and is expensive. If you’d like to trade accuracy for ease, use this formula:

VO2max ≈ (HRmax/HRrest) ∗ magic_constant

If you would like to get a number without understanding it, you can enter your heart rate in this spreadsheet. If you would like to learn about the magic constant, I’ve defined the terms below. 

  • HRrest is your lowest heart rate, measured first thing in the morning (or ask your friendly neighborhood wearable).
  • HRmax is your heart rate after exercising at ever increasing intensity until you cannot stand it. If you don’t know this, you can use 208 − 0.7 ∗ age. However if you do so you’ll miss any gains that come from increasing your maximum heart rate, which I’d expect to be at least half of the total gains.
  • magic_constant = 17.27 − 0.08 ∗ age − 0.59 ∗ BMI_category − 0.40 ∗ smoking_status + 0.14 ∗ TPA
  • BMI_category =
    • normal: 0
    • overweight: 1
    • obese: 2
  • smoking_status:
    • never: 0
    • former: 1
    • current: 2
  • TPA (total physical activity) =
    • moderate: 2 (< 43 MET hours / day)
    • active: 1 (43 – 50 MET hours / day)
    • highly active: 0 (> 50 MET hours / day)
    • This definition is circular, because MET hours is a function of hours exercised * exertion level. A decent level of physical fitness will burn about 10 MET-hours per hour of very intense exercise.
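Here is the whole estimate as a small script, with the formula and category codes copied from above; the example person at the end is made up.

# The estimate above as a script (formula and category codes copied from this
# post; the example person is made up).
def estimated_hr_max(age):
    # Fallback if you have never measured your true maximum heart rate
    return 208 - 0.7 * age

def estimate_vo2max(hr_max, hr_rest, age, bmi_category, smoking_status, tpa):
    """bmi_category: 0 normal, 1 overweight, 2 obese.
    smoking_status: 0 never, 1 former, 2 current.
    tpa: 0 highly active, 1 active, 2 moderate."""
    magic_constant = (17.27 - 0.08 * age - 0.59 * bmi_category
                      - 0.40 * smoking_status + 0.14 * tpa)
    return (hr_max / hr_rest) * magic_constant

# Hypothetical 35-year-old never-smoker, normal BMI, moderately active, resting HR 60
age = 35
print(round(estimate_vo2max(estimated_hr_max(age), 60, age, 0, 0, 2), 1))  # ~45 ml/kg/min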

You may be tempted to use wearable-calculated VO2Max.  This is a bad idea because your device has no way to separately track how hard you are working from how hard your heart is beating (Apple Watch attempts this, but simplifies things by assuming all exercise is running on a flat surface).

What are you aiming for?  Here is a convenient chart (source). This is measured in ml/kg/min, not METs.

How can I raise my VO2Max?

The best exercise is still the one you do consistently without injuring yourself. Optimization within that is for people who have many choices they enjoy, or who don’t enjoy any but can nonetheless force themselves to work out reliably. 

The next best exercises appear to be rich people sports (lifespan wise, you’re better off being an amateur racquetballer than an Olympic marathoner, despite racquetball’s barely-above-average VO2max). I didn’t find numbers for polo players but I assume they’re stunning. We’re going to ignore these findings even though the papers claim to have controlled for income.

After that, you have two choices: high intensity interval training (HIIT), and using cross country skiing as your regular mode of transportation. 

Why those two? No one has proven an answer, but my wild ass speculation is that you raise VO2max by proving to your body that your existing VO2max is insufficient. You do this by operating at capacity. Since it’s impossible to operate at peak capacity for very long, this can be done in the form of interval training, or by working at near-peak capacity for so long that it uses up your reserves. Or so I surmise.

Back to the literature: within interval training, the number one most important property is still that you do it at all, followed by how much you do it, with one possible cheat. According to this meta-analysis even short interval, low volume, low calendar-time was beneficial, but in order to beat moderate-intensity exercise you need to work a little harder: intervals of >2 minutes, total time of >15 minutes, and at least 4 weeks (number of times per week was not specified, but in other papers it was 2-3).

What’s the cheat? Repeated Sprint Training (RST), in which you go absolutely balls out for 10 seconds and then take a nice 2-4 minute gentle stroll. I love RST because there’s a little bit of lag between working very hard and being miserable, and that lag is longer than the interval. By the time the misery catches up with me I’ve already stopped trying. So I’d really like to believe this, but SIT (10-30 second intervals) scored poorly relative to longer intervals, so there’s either some sort of horseshoe effect or the success of RST is a mirage.

Here is the full chart from that paper, which is beautiful except for its absolutely incomprehensible labels. Translations below. 

Within Training Periods (how many weeks people exercised according to the plan), the options are short (<= 4 weeks), medium, and long term (>=12 weeks).

Within Session Volume, the options are low (<=4 cumulative minutes under load), medium, and high (>=16 minutes of work). Please join me in a moment of annoyance that “L” sometimes means smallest and sometimes biggest.

Within Work Intervals (duration of a single intense bout), the options are short, medium, long, very very short (SIT) (10-30s) and itty bitty (RST) (10s).

MICT stands for “moderate intensity continuous training”, aka non-HIIT exercise. CON stands for control. The longer you go (in calendar time) the less of an advantage HIIT has over MICT, which suggests they are both approaching the same asymptote, HIIT just gets there faster.

SMD stands for “standardized mean difference”, which is the difference of the means of the treatment and control groups, divided by the standard deviation. The size of the SMD differs between the treatment groups, but you can round it to 3 ml O2/kg body weight/min.

What if I already exercise?

In one study, even Olympic athletes were able to raise VO2max via HIIT training (albeit slower than couch potatoes). If you’re not specifically targeting peak capacity, you can probably improve it. However I believe this asymptotes, so if you’re already doing HIIT in particular there may not be much gains left on the table. The client who commissioned this research was a hard-core pilates practitioner and he did not find HIIT to increase his VO2. 

Next Steps

I am not a doctor, nor do I hold any other relevant qualifications. But if you’re full of inspiration to follow up on this, here is my suggested plan:

  1. Estimate your VO2Max as described above, or use the spreadsheet.
  2. Identify a form of exercise that is highly accessible to you, that can be done safely at very high intensity.
    1. The more of your body it uses the better, but prioritize lowering obstacles. If your office only has an exercise bike, that’s better than needing to travel to an elliptical, even though the elliptical uses your arms and the bike doesn’t.
  3. If you’re new to exercise, spend a few sessions playing around on your activity of choice, to get a sense of where your limits are.
  4. If you believe the research on RST (10 seconds of peak exertion followed by 3 minutes of barely moving. If your environment is cold enough you shouldn’t even sweat), do that. 
  5. If you don’t believe the research on RST, gradually increase your time under intensity until you reach 4 non-continuous minutes under intense load.
    1. If your intense intervals are longer than 2 minutes they’re probably not actually peak intensity, so you should have at least 2 intervals.
    2. Especially at first, aim for sustainability rather than peak achievement. If going 20% slower is the difference between quitting or sticking through it, slower is obviously the correct choice. You can build up over time. 
  6. After 6 weeks, estimate VO2max again. The meta-analysis described above suggests you can expect at least 1 MET (3.5 ml O2/kg body weight/min) over 6 weeks. 

Thanks to anonymous client and my Patreon patrons for supporting this post.

Luck Based Medicine: No Good Very Bad Winter Cured My Hypothyroidism

I’ve previously written about Luck Based Medicine: the idea that, having exhausted all the reasonable cures for some issue, you are better off just trying shit rather than trying to reason more cures into existence. I share LBM success stories primarily as propaganda for the concept: the chance any one cure works for anyone else is <10% (sometimes much less), but a culture where people try things and share their results is how those rare cures get found.

I’ve also previously written about my Very Unlucky Winter. My mattress developed mold, and in the course of three months I had four distinct respiratory infections, to devastating effect. A year later I am still working my way through side effects like asthma and foot pain. 

But, uh, I also appear to have cured my hypothyroidism, and the best hypothesis as to why is all the povidone iodine I gargled for all those respiratory illnesses.

Usually when I discuss fringe medicine I like to say “anything with a real effect can hurt you”, because it’s a nice catchall for potential danger. In this case, I can be more direct: anything that cures hypothyroidism has a risk of causing hyperthyroidism. The symptoms for this start with “very annoying” and end at “permanent disability or death”, so if you’re going to try iodine, it absolutely needs to be under medical supervision with regular testing. 

All that said…

I was first diagnosed with hypothyroidism 15 years ago, and 10 years ago tried titrating off medication but was forced back on. My thyroid numbers were in the range where mainstream MDs would think about treating and every ND, NP, or integrative MD would treat immediately. 

Low iodine can contribute to hypothyroidism, and my serum iodine tested at low normal for years, so we had of course tried supplementing iodine via pills, repeatedly, to no result. No change in thyroid and no change in serum iodine levels.

In January of the Very Unlucky Winter, I caught covid. I take covid hard under the best of circumstances and was still suffering aftereffects from RSV the previous month, so I was quite scared. Reddit suggested gargling povidone iodine and after irresponsibly little research, I tried it. My irresponsibility paid off in that the covid case was short and didn’t reach my lungs. I stopped taking iodine when I recovered but between all the illnesses, potential illnesses, and prophylactic use I ended up using it for quite a long period.

My memories of this time are very fuzzy and there were a lot of things going on, but the important bits are: I developed terrible insomnia, hand tremors, and temperature regulation issues. These had multiple potential explanations, but one of them was hyperthyroidism so my doctor had me tested. Sure enough, I had healed my thyroid to the point my once-necessary medication was giving me hyperthyroidism. 

Over the next few months I continued gargling with iodine and titrating my medication down. After ~6 months I was off it entirely. I’ve since been retested twice (6 weeks and 20 weeks after ceasing medication) and it looks like I’m clean. 

Could this have been caused by something besides iodine? I suppose, and I was on a fantastic number of pills, but I can’t figure out what else it could be. Hypothyroidism has a very short list of curable underlying causes, and none of them are treated by anything I was taking. 

So why did gargling iodine work when pills didn’t? It could be the formulation, but given my digestive system’s deep-seated issues, I’m suspicious that the key was letting the iodine be absorbed through the mucous membrane of the throat, rather than attempting to absorb it through the gut. If that’s true, maybe I can work around my other unresponsive vitamin deficiencies by using sublingual multivitamins. I started them in June and am waiting to take the relevant test.

Thank you to my Patreon patrons for their support of this work. 

There is a $500 bounty for reporting errors that cause me to change my beliefs, and an at-my-discretion bounty for smaller errors. 

(Salt) Water Gargling as an Antiviral

Summary

Over the past year I’ve investigated potential interventions against respiratory illnesses. Previous results include “Enovid nasal spray is promising but understudied”, “Povidone iodine is promising but understudied” and “Humming will solve all your problems no wait it’s useless”. Two of the iodine papers showed salt water doing as well or almost as well as iodine. I assume salt water has lower side effects, so that seemed like a promising thing to check. I still believe that, but that’s about all I believe, because papers studying gargling salt water (without nasal irrigation) are few and far between. 

I ended up finding only one new paper I thought valuable that wasn’t already included in my original review of iodine, and it focused on tap water, not salt water. It found a 30% drop in illness when gargling increased in frequency from 1 time per day to 3.6 times, which is fantastic. But having so few relevant papers with such small sample sizes has a little alarm going off in my head screaming publication BIAS publication BIAS. So this is going in the books as another intervention that is promising but understudied, with no larger conclusions drawn. 

Papers

Estimating salivary carriage of severe acute respiratory syndrome coronavirus 2 in nonsymptomatic people and efficacy of mouthrinse in reducing viral load: A randomized controlled trial

Note that despite the title, they only gave mouthwashes to participants with symptoms.

This study had 40 participants collect saliva, rinse their mouth with one of four mouthwashes, and then collect more saliva 15 and 45 minutes later. Researchers then compared the viral load in the initial collection with the viral load 15 and 45 minutes later. The overall effect was very strong: 3 of the washes had a 90% total reduction in viral load, and the loser of the bunch (chlorhexidine) still had a 70% reduction (error bars fairly large). So taken at face value, salt water was at least as good as the antiseptic washes.

(Normal saline is 0.9% salt by weight, or roughly 0.1 teaspoons salt per 4 tablespoons water)

[ETA 11/19: an earlier version of this post incorrectly stated 1 teaspoon per 4 tablespoons. Thank you anonymous]
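A back-of-the-envelope check on that recipe, using rough household conversions I’m assuming (~15 ml of water per tablespoon, ~5.7 g of table salt per teaspoon):

# Rough check of the saline recipe (approximate household measures).
water_ml_per_tbsp = 15          # ~15 ml per tablespoon
salt_g_per_tsp = 5.7            # ~5.7 g table salt per teaspoon

water_g = 4 * water_ml_per_tbsp        # ~60 g of water
salt_g = 0.009 * water_g               # 0.9% by weight -> ~0.54 g
print(round(salt_g / salt_g_per_tsp, 2))   # ~0.09 teaspoons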

This graph is a little confusing: both the blue and green bars represent a reduction in viral load relative to the initial collection. Taken at face value, this means chlorhexidine lost ground between minutes 15 and 45, peroxide and saline did all their work in 15 minutes, and iodine took longer to reach its full effect.  However, all had a fairly large effect.

My guess is this is an overestimate of the true impact, because I expect an oral rinse to have a greater effect on virions in saliva than in cells (where the cell membrane protects them from many dangers). Saline may also inflate its impact by breaking down dead RNA that was detectable via PCR but never dangerous.

The short-term effect of different chlorhexidine forms versus povidone iodine mouth rinse in minimizing the oral SARS-CoV-2 viral load: An open label randomized controlled clinical trial study

This study had a fairly similar experimental setup to the previous one: 12 people per group tried one of three mouthwashes, or a lozenge. Participants collected saliva samples immediately before and after the treatments, and researchers compared (a proxy for) viral loads between them.

Well, kind of. The previous study calculated the actual viral load and compared before and after. This study calculated the number of PCR cycles they needed to run before reaching detectable levels of covid in the sample. This value is known as cycle threshold, or Ct. It is negatively correlated with viral load (a smaller load means you need more cycles before it becomes detectable), but the relationship is not straightforward. It depends on the specific virus, the machine setup, and the existing cycle count. So you can count on a higher Ct count representing an improvement, but a change of 4 is not necessarily twice as good as a change of 2, and a change from 30->35 is not necessarily the same as a change from 20->25. The graph below doesn’t preclude the authors having handled this properly, but doesn’t prove they did either. My statistician (hi Dad) says they confirmed a normal distribution of differences in means before the analysis, which is somewhat comforting. 
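For intuition on why Ct shifts don’t translate linearly: under the idealized assumption that each PCR cycle exactly doubles the target (real assays only approximate this, which is part of the problem), a Ct change corresponds to a fold change of 2 raised to that difference. A quick sketch:

```python
# Idealized Ct-to-fold-change conversion. Assumes each PCR cycle exactly
# doubles the target; real amplification efficiency is lower and variable.
def fold_reduction(delta_ct: float, efficiency: float = 1.0) -> float:
    """Approximate fold reduction in viral load for a given rise in Ct."""
    return (1 + efficiency) ** delta_ct

for delta in (2, 4):
    print(f"Ct +{delta}: ~{fold_reduction(delta):.0f}x lower viral load (ideal case)")
# Ct +2 -> ~4x, Ct +4 -> ~16x: a change of 4 is ~4x "better" than a change of 2,
# not 2x better -- and that's before real-world efficiency differences.
```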

This study found a significant effect for iodine and chlorhexidine lozenges, but not saline or chlorhexidine mouthwash. This could be accurate, an anomaly from a small sample size, or an artifact of the saline group starting from a higher Ct value (= lower viral load).

Prevention of upper respiratory tract infections by gargling: a randomized trial

This study started with 387 healthy volunteers and instructed them to gargle tap (not salt) water or iodine at least three times a day (the control and iodine groups also gargled water once per day). For 60 days volunteers recorded a daily symptom diary. This setup is almost everything I could ask for: it looked at real illness over time rather than a short term proxy like viral load, and adherence was excellent. Unfortunately, the design had some flaws. 

Most notably, the study functionally only counted someone as sick if they had both nose and throat symptoms (technically other symptoms counted, but in practice these were rare). For a while I was convinced this was disqualifying, because water gargling could treat the pain of a sore throat without reducing viral load. However the iodine group was gargling as often as the frequent water garglers, without their success. Iodine does irritate the throat, but gargling iodine 3 times per day produced about as much illness as water once per day. It seems very unlikely that iodine’s antiviral and throat-irritant properties would exactly cancel out. 

Taking the results at face value, iodine 3x/day + water 1x/day was no better than water 1x/day on its own. Water 3.6x/day led to a 30% reduction in illness (with “not ill” implicitly defined as lacking throat symptoms).

The paper speculates that iodine failed because it harmed the microbiome of the throat, causing short term benefits but long term costs. I liked this explanation because I hypothesized that problem in my previous post. Alas, it doesn’t match the data. If iodine traded a short term benefit for a long term cost, you’d expect illness to be suppressed at first and catch up later. This is the opposite of what you see in the graph for iodine. However it’s not a bad description of what we see for frequent water gargling – at 15 days, 10% more of the low-frequency water garglers have gotten sick. At 50 days it’s 20% more – fully double the proportion of sick people in the frequent water gargler group. Between days 50 and 60, the control group stays almost flat, while the frequent water garglers go up 10 percentage points. 

What does this mean? Could be noise, could be gargling altering the microbiome or irritating the throat, could be that the control group ran out of people to get sick. Or perhaps some secret fourth thing.

None of the differences in symptoms-once-ill were significant at p<0.05, possibly as a result of their poor definition of illness, or the fact that the symptom assessment was made a full 7 days after symptom onset.

Assuming arguendo that gargling water works, why? There’s an unlikely but interesting idea in another paper from the same authors, based on the same data. They point to a third paper that demonstrated dust mite proteins worsen colds and flus, and suggest that gargling helps by removing those dust mite proteins. Alas, their explanation of why this would help for colds but not flus makes absolutely no goddamn sense, which makes it hard to trust an already shaky idea. 

A boring but more reasonable explanation is that Japanese tap water contains chlorine, and this acts as a disinfectant. 

Dishonorable Mention: Vitamin D3 and gargling for the prevention of upper respiratory tract infections: a randomized controlled trial

I silently discarded several papers I read for this project but this one was so bad I needed to name and shame.

The study used a 2×2 design examining vitamin D and gargling with tap water. However it was “definitively” underpowered to detect interactions, so the authors pooled the gargling arms (with and without vitamin D) against the no-gargling arms (with and without vitamin D), without ever testing for an interaction between vitamin D and gargling. This design is bad and they should feel bad. 
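For anyone wondering what “looking for an interaction” would actually involve: with participant-level data you could fit a logistic regression with an interaction term instead of pooling the arms. This is a generic sketch on simulated data, not a reanalysis of the study (whose data I don’t have); the effect sizes in it are made up.

```python
# Generic sketch: testing a 2x2 interaction (gargling x vitamin D) with a
# logistic regression instead of pooling arms. Data below is simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "gargle": rng.integers(0, 2, n),
    "vit_d": rng.integers(0, 2, n),
})
# Made-up true effects: each intervention modestly lowers the odds of illness.
log_odds = -0.2 - 0.4 * df["gargle"] - 0.3 * df["vit_d"]
df["ill"] = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

model = smf.logit("ill ~ gargle * vit_d", data=df).fit(disp=False)
print(model.summary().tables[1])
# The 'gargle:vit_d' row is the interaction term the paper never estimated
# (though, to be fair, a study this size may well be underpowered for it).
```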

Conclusion

Water (salted or no) seems at least as promising an antiviral as other liquids you could gargle, with a lower risk of side effects. So if you’re going to gargle, it seems like water is the best choice. However I still have concerns about the effect of long-term gargling on the microbiome, so I am restricting myself to high risk situations or known illness. That said, the data is sparse, and ignoring all of this is a pretty solid move. 

Thank you to Lightspeed Grants and my Patreon patrons for their support of this work. Thanks to Craig Van Nostrand for statistical consults.

There is a $500 bounty for reporting errors that cause me to change my beliefs, and an at-my-discretion bounty for smaller errors. 

Humming is not a free $100 bill

Last month I posted about humming as a cheap and convenient way to flood your nose with nitric oxide (NO), a known antiviral. Alas, the economists were right, and the benefits were much smaller than I estimated.

The post contained one obvious error and one complication. Both were caught by Thomas Kwa, for which he has my gratitude. When he initially pointed out the error I awarded him a $50 bounty; now that the implications are confirmed I’ve upped that to $250. In two weeks an additional $750 will go to either him or to whoever provides new evidence that causes me to retract my retraction.

Humming produces much less nitric oxide than Enovid

I found the dosage of NO in Enovid in a trial registration. Unfortunately I misread the dose: what I originally read as “0.11ppm NO/hour” was in fact “0.11ppm NO*hour”. I spent a while puzzling out what this meant, with the help of Thomas Kwa, some guy on twitter, and chatGPT (the first time it’s been genuinely useful to me). My new interpretation is that this is a total exposure: the actual concentration upon application, multiplied by the time spent at that concentration. Since NO is a transient molecule, this means my guess for the amount of NO in Enovid was off by 2-3 orders of magnitude.

My estimates for the amount of NO released by humming may also be too high. I used this paper’s numbers for baseline NO concentration. However the paper I used to estimate the increase gave its own baseline number, which was an order of magnitude lower than the first paper.

This wasn’t intentional cherrypicking- I’d seen “15-20x increase in concentration” cited widely and often without sources. I searched for and spotchecked that one source but mostly to look at the experimental design. When I was ready to do math I used its increase but separately looked up the baseline concentration, and found the paper I cited.

I just asked Google again and got an even higher estimate of baseline nasal concentration, so it seems like there is a great deal of disagreement here.

If this were the only error I’d spend the time to get a more accurate estimate. But it looks like even the highest estimate will be a fraction of Enovid’s dose, so it’s not worth the energy to track down.

Using the new values, you’d need 28 minutes of humming to recreate the amount of NO in Enovid (spreadsheet here). That wouldn’t be so bad spread out over 4-6 hours, except that multiple breaths of humming in a row face diminishing returns, with recovery to baseline taking 3 minutes. It is possible to achieve this in 6 hours, but only just. And while it’s not consequential enough to bother looking up, I think some of the papers applied Enovid more often than that.
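I’m not going to reproduce the whole spreadsheet here, but the shape of the calculation is simple enough to sketch. Every input below is an assumption chosen for illustration (the baseline is the lower of the two estimates I found, the breath length is a guess), not the spreadsheet’s actual values:

```python
# Rough shape of the "how much humming equals one Enovid dose" estimate.
# All inputs are illustrative assumptions; the real numbers live in the
# linked spreadsheet and are themselves uncertain.
ENOVID_EXPOSURE_PPM_HR = 0.11    # trial registration: 0.11 ppm*hr per application

BASELINE_NASAL_NO_PPM = 0.015    # lower of the two baseline estimates I found
HUMMING_MULTIPLIER = 16          # "15-20x increase" cited widely
SECONDS_PER_HUMMED_BREATH = 15   # assumed length of one slow hummed exhale
MINUTES_TO_RECOVER = 3           # NO returns to baseline in ~3 minutes

conc_while_humming = BASELINE_NASAL_NO_PPM * HUMMING_MULTIPLIER      # ~0.24 ppm
humming_hours = ENOVID_EXPOSURE_PPM_HR / conc_while_humming          # ~0.46 hr
breaths = humming_hours * 3600 / SECONDS_PER_HUMMED_BREATH
elapsed_hours = breaths * MINUTES_TO_RECOVER / 60                    # fully paced

print(f"~{humming_hours * 60:.0f} min of humming, ~{breaths:.0f} breaths, "
      f"~{elapsed_hours:.1f} h if you wait out the full recovery each time")
```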

This leaves humming in search of a use case. People who care a lot about respiratory illnesses are better off using Enovid or another nasal spray. People who don’t care very much are never going to carefully pace their humming, and the amount of humming they might do won’t be very effective. The only use case I see is people who care a lot and are pushed into a high risk situation without notice, or who want a feeling of Doing Something even if it is not doing very much at all.

Reasons to not write off humming entirely

The math above assumes the effect is linear with the amount of NO released, regardless of application time. My guess is that frequent lower doses are more effective than the same total amount as a one-off. Probably not by enough to give humming a good non-emergency use case, though.

Another possibility is that Enovid has more nitric oxide than necessary and most of it is wasted. But again, it would have to be a lot more to make this viable.

Conclusions

Humming hasn’t been disproven as an anti-viral intervention, but the primary reason I believed it worked has been destroyed. I will be observing a six week period of mourning for both my hope in humming and generally feeling dumb.

The fact that I merely feel kind of dumb, instead of pricing out swords with which to commit seppuku, is thanks to the little angel that sits on my shoulder while I write. She constantly asks “how will you feel about this sentence if you turn out to be wrong?” and demands edits until the answer is either “a manageable amount of unhappy” or “that’s not going to come up”. This post thoroughly tested her work and found it exemplary, so she will be spending the next six weeks partying in Vegas.

[RETRACTED] Do you believe in hundred dollar bills lying on the ground? Consider humming

Introduction

[Reminder: I am an internet weirdo with no medical credentials]

A few months ago, I published some crude estimates of the power of nitric oxide nasal spray to hasten recovery from illness, and speculated about what it could do prophylactically. While working on that piece a nice man on Twitter alerted me to the fact that humming produces lots of nasal nitric oxide. This post is my very crude model of what kind of anti-viral gains we could expect from humming.

ETA 6/6: I made a major error in this post and its numbers are incorrect. The new numbers show that matching Enovid’s nitric oxide content, or even getting close enough for a meaningful effect, takes way more humming than anyone is going to do.

I’ve encoded my model at Guesstimate. The results are pretty favorable (average estimated impact of 66% reduction in severity of illness), but extremely sensitive to my made-up numbers. Efficacy estimates go from ~0 to ~95%, depending on how you feel about publication bias, what percent of Enovid’s impact can be credited to nitric oxide, and humming’s relative effect. Given how speculative (read: made up) some of these numbers are, I strongly encourage you to speculate some numbers of your own and test them out in the Guesstimate model.

If you want to know how nitric oxide reduces disease, check out my original post.

Math

Estimating the impact of Enovid 

I originally estimated the (unadjusted) efficacy of nitric oxide nasal sprays after diagnosis at a 90% overall reduction in illness, killing ~50% of viral particles per application. Enovid has three mechanisms of action. Of the papers I looked at in that post, one mentioned two of the three mechanisms (including nitric oxide) but not the third, and the other mentioned only nitric oxide. So how much of that estimated efficacy is due to nitric oxide alone? I don’t know, so I put a term in the Guesstimate with a very wide range: from ⅓ (one of three mechanisms) to 1 (all of the effect due to NO). 

There’s also the question of how accurate the studies I read are. There are only two, they’re fairly small, and they’re both funded by Enovid’s manufacturer. One might reasonably guess that their numbers are an overestimate. I put another fudge factor in for publication bias, ranging from 0.01 (spray is useless) to 1 (published estimate is accurate).

How much nitric oxide does Enovid release?

This RCT registration uses a nitric oxide nasal spray (and mentions no other mechanisms). They don’t give a brand name but it’s funded by the company that produces Enovid. In this study, each application delivers 0.56 mL of nitric oxide releasing solution (NORS) (this is the same dose you get from commercial Enovid), which delivers “0.11ppm [NO]*hrs”. 

There’s a few things that confusing phrase could mean:

  • The solution keeps producing 0.11ppm NO for several hours (very unlikely). 
  • The application produces 0.88ppm NO almost immediately (0.11*8, where 8 hours is the inter-application interval), which quickly reacts to form some other molecule. This is my guess, and what I’ll use going forward. It won’t turn out to matter much. 
  • Some weirder thing. ETA 5/25: Thomas Kwa points out that the registration says “0.11ppm*hrs” not “0.11ppm/hr”. I’m on a tight deadline for another project so haven’t been able to look into this; it definitely seems like my interpretation is wrong, but I’m not sure his is right. I’ve reached out to some biology friends for help.

How much nitric oxide does humming move into the nose?

Here we have much more solid numbers. NO concentration is easy to measure. Individuals vary of course, but on average humming increases NO concentration in the nose by 15x-20x. Given baseline levels of (on average) 0.14ppm in women and 0.18ppm in men, this works out to a 1.96-3.42 ppm increase. More than twice what Enovid manages.
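Spelling out that (since-retracted) arithmetic, using the numbers quoted in this paragraph and my earlier, incorrect ~0.88 ppm reading of the Enovid registration:

```python
# The original comparison, preserved for the record. Baselines and the 15-20x
# multiplier are the figures quoted above; 0.88 ppm is my earlier (wrong)
# reading of the trial registration.
ENOVID_GUESS_PPM = 0.11 * 8          # = 0.88, the interpretation I used at the time
baselines_ppm = {"women": 0.14, "men": 0.18}

for who, base in baselines_ppm.items():
    low, high = base * (15 - 1), base * (20 - 1)   # increase = baseline * (multiplier - 1)
    print(f"{who}: +{low:.2f} to +{high:.2f} ppm from humming "
          f"(vs. my {ENOVID_GUESS_PPM:.2f} ppm guess for Enovid)")
# women: +1.96 to +2.66 ppm; men: +2.52 to +3.42 ppm -- hence "more than twice
# what Enovid manages" under the old, mistaken reading.
```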

The dominant model is that the new NO in the nose is borrowed from the sinuses rather than being newly generated. Even if this is true I don’t think it matters; sinus concentrations are 100x higher than the nose’s and replenish quickly. 

Estimating the impact of humming

As far as I can find, there are no published studies on  humming as an antimicrobial intervention. There is lots of circumstantial evidence from nasal vs. mouth breathing, but no slam dunks. So I’m left to make up numbers for my Guesstimate:

  • Paper-reported decline in illness due to spray (0.9) 
  • Proportion of effect due to NO (0.33 to 1)
  • Adjustment for publication bias (.01 to 1)
  • Adjustment for using prophylactically rather than after diagnosis (0.75 to 2.5) (set this to 1 if you want to consider post-diagnosis use)
  • Bonus to humming due to higher NO levels and more frequent application (1 to 5) 
  • I capped the results so they couldn’t suggest that the effect size was less than 0  or greater than 1, and then applied the nasal-infection discount. 
  • Proportion of infections starting in the nose (because infections in the throat should see no effect from humming) (0.9 to 1) (set this to 1 if you believe the spray effect estimate already includes this effect)

From that I get an estimate of effect of 0 to 0.98, with an average of 0.67. This is of course incredibly sensitive to assumptions I pulled out of my ass. If you prefer numbers from your own ass, you can enter them into my model here. For comparison, microcovid.org estimates that masks have an efficacy of 33% (for thick, snug cloth masks) to 87% (well-sealed N95s). 
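If you’d rather poke at this in code than in Guesstimate, here’s a minimal Monte Carlo sketch of the same structure. The ranges come from the list above; treating every range as uniform is my simplification (Guesstimate’s distributions differ), so it won’t reproduce my numbers exactly.

```python
# Minimal Monte Carlo sketch of the model described above. Ranges come from
# the bullet list; sampling them uniformly is a simplification, so the output
# will differ somewhat from the Guesstimate model.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

spray_effect   = 0.9                               # paper-reported decline in illness
frac_from_no   = rng.uniform(1 / 3, 1.0, n)        # share of Enovid's effect due to NO
pub_bias       = rng.uniform(0.01, 1.0, n)         # publication-bias discount
prophylactic   = rng.uniform(0.75, 2.5, n)         # prophylactic vs. post-diagnosis use
humming_bonus  = rng.uniform(1.0, 5.0, n)          # higher NO levels, more frequent use
nasal_fraction = rng.uniform(0.9, 1.0, n)          # infections that start in the nose

raw = spray_effect * frac_from_no * pub_bias * prophylactic * humming_bonus
effect = np.clip(raw, 0.0, 1.0) * nasal_fraction   # cap to [0, 1], then nasal discount

print(f"mean {effect.mean():.2f}, 5th-95th percentile "
      f"{np.percentile(effect, 5):.2f}-{np.percentile(effect, 95):.2f}")
```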

How to hum

Here is what I’ve advised my elderly parents, and will use myself once I find a way to keep it from activating the painful nerve damage in my jaw:

  • This really is normal humming, just be sure to exhale entirely through your nose.
    • If you google “how to hum” you will mostly get results on singing exercises, which I think are suboptimal. This very annoying video has decent instructions on how to hum with your lips sealed. 
    • Higher pitch (where the vibration lives more in the nose and less in the throat) should be more effective, but making it easy to do is probably more important.
    • You only need to do one breath per session, after that you face diminishing returns.
  • Once per hour is probably overkill, but it’s also easy to remember. Alternately, pick a trigger like entering a room or opening Twitter.
    • A beta reader asked if it was worth waking up in the middle of the night to hum. I’m still not a doctor, but my immediate reaction was “Jesus Christ no”. Sleep is so important, and once per hour is a number I made up for convenience. However if you happen to wake up in the middle of the night, I expect that’s an especially valuable time to hum.
  • The less time between exposure and humming, the better. Since you can’t always know when you’ve been exposed, this suggests humming during and after every high risk event, or making it an everyday habit if you find it cheap.
How long after? For Enovid I made up a plan to use it for one full day after the last high risk period, which my very crude math suggests gives your body an extra day to ramp up your immune system. 

Are there downsides?

Everything with a real effect has downsides.  I’m struggling to come up with ones that won’t be immediately obvious, like vibrating a broken nose or annoying your spouse, but I’ve been surprised before.

One possible source of downsides is that the nitric oxide was more valuable in the sinuses than the nose. This doesn’t worry me much because sinus levels are 100x nasal levels, and judging from the exhalation experiments sinus levels completely recover in 3 minutes. 

The barest scraps of other evidence

This (tiny) study found that Bhramari Pranayama (which includes humming) reduced sinusitis more than conventional treatment. But the sample size of 30 (per group) and lack of a no-treatment group makes this hard to take seriously.

There appeared to be a plethora of literature showing that nasal breathers had fewer respiratory infections than mouth breathers. I wouldn’t find this convincing even if every study showed a strong effect (because it’s over such a long time period and impossible to track causality), so I didn’t bother to investigate. 

Some dude may or may not have eliminated his chronic rhinosinusitis (inflammation of nose and sinuses) that may or may not have had an infectious component by humming, which may or may not have worked by increasing nasal nitric oxide. He used a very different protocol that to my eye looks more likely to work via sheer vibration than by nitric oxide, especially because a lot of his problem was located in the sinuses.

Reasons to disbelieve

  1. If my model is correct, humming is the equivalent of finding a paper sack full of hundred dollar bills on the ground. Both the boost from humming and the immune function of NO have been known for decades; medical research would have to be really inadequate to produce so little data on this. 
  2. All of the data on the impact of nasal nitric oxide is on covid; maybe NO is less effective on other viruses.
  3. If nasal nitric oxide is so great, why did evolution give us the nasal NO concentration it did?
    1. I love me a good evolution-based argument, but I think they’re at their weakest for contagious diseases. Relative to the ancestral environment we have a much easier time finding calories to fuel our immune system and diseases with which to keep it busy, so we should expect our immune systems to be underpowered. 
  4. If humming has any effect outside the nose, it has got to be tiny. 

Conclusion

Hourly nasal humming might be as effective as masks at reducing respiratory infections. The biggest reasons to disbelieve are the paucity of data, and skepticism that society would miss something this beneficial. If you’re the kind of person who looks at an apparent hundred dollar bill on the ground and gets excited, humming seems like an unusually good thing to try. But if the pursuit of loose bills feels burdensome or doomed, I think you should respect your instincts.

I have an idea for how to generate more data on humming and respiratory illnesses, but it requires a large conference in winter. If you’re running a conference with 500+ nerds, in your local winter, with a majority of attendees coming from locations in local winter, I’d love to chat. You can reach me at elizabeth@acesounderglass.com.

Betadine oral rinses for covid and other viral infections

Before we get started, this is your quarterly reminder that I have no medical credentials and my highest academic credential is a BA in a different part of biology (with a double major in computer science). In a world with a functional medical system no one would listen to me. 

Tl;dr povidone iodine probably reduces viral load when used in the mouth or nose, with corresponding decreases in symptoms and infectivity. The effect size could be as high as 90% for prophylactic use (and as low as 0% when used in late illness), but is probably much smaller. There is a long tail of side-effects. No study I read reported side effects at clinically significant levels, but I don’t think they looked hard enough. There are other gargle formulas that may have similar benefits without the risk of side effects, which are in my queue to research.

Benefits

Math

One paper found a 90% decrease in salivary viral load after mouthwash use (which probably overestimates the effect). Another found a 90% reduction in bad outcomes, with treatment (in mouth, nose, and eyes) starting soon after diagnosis. I suspect both of these are overestimates, but 1. a 90% reduction is a fantastic upper bound to have, and 2. neither of these looked at prophylactic use. A third study found a significant reduction in viral RNA after usage, but did not quantify that as viral load or outcomes. 

I feel like if povidone iodine was actually that good we’d have heard about it before. OTOH mouthwash formulations are barely available in the US, and most of these studies were in Asia, so maybe it went to fixation there years ago and the west is just catching up. 

So I’m going to call this a 9-45% reduction in illness (time × intensity) when used after symptom onset. Use before onset ought to do better; my wild-ass guess is up to 90%. 

One reason I think earlier use is better is that, at least with covid, most of the real damage happens when the virus reaches the lungs. If iodine gargles can form a firewall that prevents an upper respiratory infection from becoming a lower respiratory infection, you’ve prevented most (although not all) of the worst outcomes.

Papers

I livetweeted every paper I read, collected here. I don’t want to brag, but those tweets were very popular among ladies with large boobs and 10 numbers in their twitter handles. So if that’s your type you should definitely check out those threads. Everyone else will probably find them tedious, so I’m going to summarize the most relevant papers here.

Estimating salivary carriage of severe acute respiratory syndrome coronavirus 2 in nonsymptomatic people and efficacy of mouthrinse in reducing viral load: A randomized controlled trial

This study had participants rinse their mouth with one of four mouthwashes, and compared the pre-mouthwash salivary viral load with the viral load 15 and 45 minutes later. The overall effect was very strong: 3 of the washes had a 90% total reduction, and the loser of the bunch still had a 70% reduction (error bars fairly large). 

Note that despite the title, they only gave mouthwashes to participants with symptoms.

My guess is this is an overestimate of impact, because I expect an oral rinse to have a larger effect on saliva than on cellular levels. I wish they’d tested 4-6 hours later, after the virus had had some time to regrow.

Effect of 1% Povidone Iodine Mouthwash/Gargle, Nasal and Eye Drop in COVID-19 patient 

On one hand, this paper features significant ESL issues, missing data, terrible presentation of other data, and was published in a no-name journal. On the other hand, it had one of the best study designs and 30x the number of participants of other studies. I’d love to discard this paper but there aren’t better options.

We see an almost 90% reduction in testing positive on the third day. I suspect that overstates the results because it lowers salivary or nasal fluid viral load more than cellular load, so let’s look at outcomes:

90% reduction in hospitalization, 85% reduction in oxygen use, and  88% reduction in death. 

I was skeptical of these numbers at first, especially because they only tell you the total number of an age/sex group in the study, and the number of people in a demographic group with a bad outcome. Their percentages also don’t work out properly, making it hard to see the real impact. 

Luckily almost everyone in the control group was still PCR positive on day 3, which is almost like having a participant count. The number of control participants still sick on day 3 is indeed about half of every demographic. This doesn’t rule out trickier stuff like putting people at the higher end of their age band in the control group, but it’s a good deal better than that one paper where the youngest person in the control group was a year younger than the oldest person in the treatment group. 

The short-term effect of different chlorhexidine forms versus povidone iodine mouth rinse in minimizing the oral SARS-CoV-2 viral load: An open label randomized controlled clinical trial study

I originally ignored this paper, because it only reported Ct values and not outcomes or viral load.* However the previous two papers are from the same author and have shockingly concordant results, and I wanted a second opinion. 

[*Ct value = how many PCR cycles you have to run on a sample before crossing a particular detection threshold. This corresponds to viral load, but the relationship is complicated and variable. A higher Ct value means a lower viral load]

The most important finding is that Ct went up by 3.3 (S genes) and 4.4 (E genes). 
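If you assume each PCR cycle roughly doubles the target (the idealized case; real assays fall a bit short), those shifts translate to roughly a 10x and 20x drop in detectable RNA:

```python
# Idealized conversion of the reported Ct shifts into fold reductions,
# assuming perfect doubling per cycle (real efficiency is lower).
for gene, delta_ct in (("S", 3.3), ("E", 4.4)):
    print(f"{gene} gene: Ct +{delta_ct} ~= {2 ** delta_ct:.0f}x lower detectable RNA")
# S: ~10x, E: ~21x -- directionally encouraging, with the usual caveat that
# Ct does not map cleanly onto viral load or outcomes.
```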

N=12 so I’m not thrilled with this study, but pickings are slim. 

Side Effects, Or: Should I just gargle iodine all the time then?

Barring very specific circumstances, I wouldn’t. There are several issues that give me pause about long term continuous use.

Hyperthyroidism

Povidone iodine skin washes can cause hyperthyroidism in infants. Among adults, many studies found increases in Thyroid Stimulating Hormone (an indicator of issues but not itself terrible), but not T3 or T4 (directly causal to outcomes). These studies tend to be small, and in some cases used the wrong statistical test and missed a long tail clearly visible in their plots, so I assume there exist people for whom this creates a clinically significant effect, especially after prolonged use.

I didn’t include this paper when calculating health benefits, because its control group was too different from its treatment group. But it’s still potentially useful for tracking side effects (although at n=12, it’s still pretty limited). It found a 50% increase in TSH after a week of treatment, but no change in T3 or T4. TSH returned to normal within 12 days of ceasing treatment. That’s not worrisome for healthy people on its own, but could easily reach worrisome with longer use or a vulnerable patient. 

Tissue damage could leave you worse off?

There is a long history of aggressive use of topical antimicrobial treatments leaving users worse off due to long term tissue irritation. This is why proper wound treatment changes every decade. That same study looked at this and found no increase in cellular irritation in the throat after six months of use. It’s possible they didn’t look hard enough, or they didn’t have sufficient sample size to catch the effect. It’s also possible the species that invented ghost peppers for fun has a throat surface built to handle irritation, and iodine is too weak to hurt us.

Oral microbiome damage could leave you worse off?

No one studied this at all, but it looks to me like an obvious failure point. I already use oral probiotics, but if I didn’t I would add them in while using iodine.

How to use

0.5% povidone iodine is sold under the brand name Betadine. You can also buy more concentrated povidone iodine and dilute it yourself. You might be tempted to use a higher concentration, but: 1. Remember the long tail of side-effects. 2. There’s some weird evidence that higher concentrations are less effective. I didn’t dig into this very weird claim but you probably should if you plan to try it. 

The Betadine bottle recommends gargling 10ml for 30s, 4x/day. The short term studies used 4-6x/day. Spacing that out is a nontrivial attention tax, so when I was sick I just put the bottle on my bathroom sink and used it every time I used the bathroom. This probably comes out to more than 6x/day (especially when I’m sick and chugging fluids), but I also didn’t use a full 10ml and rarely made it to a full 30s, so hopefully it balanced out. 

More Data Needed

The state of existing knowledge around iodine gargles is poor. This is especially frustrating because I don’t think it should be that challenging to gather more. I’m toying with a plan to fix this, but will publish separately since it’s not specific to iodine. 

For financial support I would like to thank my Patreon supporters and Lightspeed Grants.

Nitric oxide for covid and other viral infections

Epistemic status: I spent about 5 hours looking into this, and the next day developed covid myself.  I did a bit more research plus all of the writing while sick. So in addition to my normal warning that I have no medical credentials, you should keep in mind that this knowledge may be cursed. 

ETA 4-30-24: In this post I used “nitric oxide spray” and “enovid” as synonyms. I’ve since learned this is incorrect, NO is one of several mechanisms Enovid uses. The other mechanisms weren’t mentioned in the papers I cite so it’s possible these are accurate for NO alone.

Introduction

Nitric Oxide Nasal Spray, sold under the brand name Enovid, is a reactive compound that kills viruses (and I suspect taxes your nasal tissue). It has recently been tested and marketed for treatment of covid. The protocol I found in papers was 2 sprays per nostril every 2-3 hours, after you develop symptoms. Enovid’s instructional pamphlets say twice per day, also after you get sick. This seems a little late to me.

I suspect the real power of NONS lies in use before you develop symptoms, ideally as close to exposure as possible. This is difficult because you don’t know when you would have gotten sick, and I suspect there are costs to indefinite use. I initially thought (and told people, as a tentative guess) that one round of 4 total sprays after a high risk event was a good trade off. After doing the math for this post, that intervention seems much less helpful to me, and picking the right length of post-exposure prophylaxis depends on equations for which we lack good numbers. I pulled some numbers out of my ass for this post, but you should not trust them. 

My guess is NONS is minimally useful once covid has reached the throat, unless you combine it with a separate disinfectant of the throat. I hope to write up a report on one such disinfectant soon, although TBH it’s not looking good. 

NONS can lead to false negatives on any test based on a nasal swab, because it breaks the relationship between nasal viral load and overall load.

How does it work?

First, nitric oxide is highly reactive, which makes it destructive to anything organic. Virions are fragile to this kind of direct attack, and certain immune cells will produce nitric oxide to kill bacteria, viruses, and your own diseased cells.

First-and-a-half, nitric oxide may alter the pH of your nose, and this effect may last well past the death of the original NO molecules. This was an aside in one paper, and I haven’t followed up on it. 

Second, nitric oxide is a signaling molecule within your body, probably including but definitely not limited to the immune system. I assume the immune system uses it as a signal because doing so serves a functional purpose. For the rest of the body, the selling point appears to be that it crosses membranes easily but dies quickly, making it useful when the body wants the signal to fade fast. Viagra works by indirectly increasing your body’s synthesis of nitric oxide. 

How well does it work?

Good question, and it depends a lot on how you use it.

My best guess is that a single application (2 sprays in each nostril) of Enovid ~halves the viral load in your nose. Covid doubles in 36 hours, so that’s how much extra time you’ve bought your immune system to ramp up defenses. If you follow the more aggressive protocols in the literature and apply that treatment 6 times per day, you wipe out 95% of the covid in your nose. I will attempt to translate this into an efficacy estimate in that mythical future, but in the meantime siderea has a write-up on why reducing viral load is valuable even if you can’t destroy it entirely.

Sometimes you will see very impressive graphs for Enovid’s impact; these are inevitably looking at the results of nasal swabs. Since even in the best case scenario NONS doesn’t affect spread once an infection has reached the throat, this doesn’t feel very relevant to me. 

Sometimes you will see very unimpressive graphs, from the rare studies that looked at transmission or symptoms. These effects are so weak, in such small studies, that I consider them essentially a null result.

…Except that these studies all started treatment days after symptoms emerged. In one case it was a minimum of 4 days. Another said “0-3 days” after symptoms, but since it takes time to see a doctor and be recruited into a study, I expect the average to be on the high end of that. Additionally, both studies showed a downward slope in infection in both treatment and control groups. This is a big deal because I expect the largest effect to come if NONS is used before exponential growth really takes off. If they’re seeing a decline in viral load in their control arm, they either administered treatment too late or their placebo isn’t really a placebo. 

[I think this reasoning holds even if immune overreaction is part of the problems with long covid. Long covid is correlated with severity of initial infection.]

To figure out the impact of prophylactic use, I’m going to have to get, uh, speculative. Before I do that, let me dig into exactly what the data says. 

Effect size on nasal viral load

This has very solid data: even under the unfavorable circumstances of a strong infection, a day of usage drops viral load by 90-95%.

Paper 1 says 95% reduction in one day, 99% in two. They took samples from the nose and throat but don’t clarify which location that applies to. If I had the energy I’d be very angry about that right now. 

(Their placebo was a saline spray, which other people claim is an antimicrobial in its own right, so this may understate the effect)

Paper 2 finds an adjusted 93-98% decline after 1 day’s use of NONS. 

Effect on symptoms/transmission, as measured by poorly designed studies

Paper 1 did track time to cure, but with a 40% response rate on a sample size of 40 in the treatment arm I can’t bring myself to care.

Paper 2 reported a couple of metrics. One is “Time to cure (as defined by PCR results)” which is still worthless because it’s still using a nasal swab. Another is clinician-assessed improvement; this effect seemed real but not huge. 

They also checked for spread to close contacts, but not very well. Contacts had to take the initiative to get tested themselves, and AFAICT they didn’t establish if they were infected before or after treatment started.  You can try to factor that out by only looking at the last day of recorded data, but the difference appears to start on day 1 of treatment, when there absolutely shouldn’t be an effect. 

Other Diseases

NONS has been studied against other infections and I fully meant to look at that data. Now that I have actual covid I consider it kind of a race to get this post out before I’m too tired, so this will come later if at all.

My wild ass guess of impact

What does a single dose do? I did a very stupid model assuming six doses over 24 hours each having the same proportionate effect, and found that halving viral load with each application was a perfect match with the data. I expect the first dose of the day has a larger effect and each one is a little less effective until you sleep and the virus has some time to marshal forces, but barring better data I’m going to treat Enovid as rolling back one doubling. 

[I want to emphasize I didn’t massage this to make the math easier. I tried .9 in my naive spreadsheet knowing it wouldn’t work, and then tried 0.5 to find it perfectly matched the data]
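Here’s a sketch of that naive model, mostly to show how little the answer depends on dose timing; the every-4-hours schedule is my assumption, while the 36-hour doubling time and halving-per-dose come from this post.

```python
# Naive model from this section: viral load grows exponentially with a 36-hour
# doubling time, and each of six daily applications halves it. The dose times
# are an assumption; under pure exponential growth the timing doesn't change
# the 24-hour total.
DOUBLING_HOURS = 36
PER_DOSE_FACTOR = 0.5
DOSE_HOURS = {0, 4, 8, 12, 16, 20}   # six evenly spaced doses (assumed)

load = 1.0
for hour in range(24):
    load *= 2 ** (1 / DOUBLING_HOURS)    # one hour of viral growth
    if hour in DOSE_HOURS:
        load *= PER_DOSE_FACTOR          # one application ~halves the load

untreated = 2 ** (24 / DOUBLING_HOURS)
print(f"after 24h: {load:.3f} of the starting load ({1 - load:.0%} drop), "
      f"vs {untreated:.2f} with no treatment")
# ~0.025 of where you started, a ~97-98% drop -- in the same ballpark as the
# 93-98% one-day declines the papers report.
```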

If my covid infection starts in the nose and I take a full course of treatment immediately after exposure, <10% chance I get sick. But that’s unachievable without constant use, which I think is a bad idea (see below).

What if you’re infected, but only in your nose? It’s a 95% reduction per day. It’s anyone’s guess how much that reduces the chance of spread to your throat; I’d say 95% is the upper bound, and am very arbitrarily setting 50% as the lower bound for the first day (this time I am trying to make the math easier). But you’re also reducing the cumulative load; on day three (after two days of treatment), your viral load is 99% lower than it would otherwise be, before you take any new doses.

I suspect the real killer app here is combining Enovid with a throat disinfectant, and am prioritizing a review of at least one throat disinfectant in a future post. 

Can I get this effect for free, without the painful stinging or logistical hassle of a nasal spray?

Maybe. Your nose already naturally produces nitric oxide, and you can increase this by 15x by humming. I haven’t been able to find the dosage of a single spray of Enovid to compare, but humming doesn’t sting so I assume it’s a lot less. On the other hand, you can hum more often than six times per day. On the third hand, I can’t tell if humming causes you to produce more NO or just release it faster, in which case chronic humming might deplete your stores. 

A quick search found multiple published articles suggesting this, but none actually studying it. The cynic in me says this is because there’s no money in it, but this study would take pennies to run and be so high impact if it worked that I suspect this is less promising than it seems. 

Update 2024-10-01: No.

Thank you to Michael Tontchev on twitter for pointing me towards humming.

Should I just use this all the time?

I don’t regularly use Enovid, despite having a shit immune system. The history of treatments like this is that long term use causes more problems than it solves. They dry out mucous membranes, or kill your own immune cells. I think the rest of you should seriously consider developing a humming habit; alas, I have nerve damage in my jaw that makes vibration painful, so it’s not an option for me. 

I do think there’s a case for prophylactic use during high risk situations like conferences or taking care of a sick loved one. 

Where can I buy Enovid?

Amazon has it, but at $100/bottle it’s quite expensive. You can get it from other websites for half the price but longer shipping times; my friend used israelpharm.com and confirms he got his shipment.