Betadine oral rinses for covid and other viral infections

Before we get started, this is your quarterly reminder that I have no medical credentials and my highest academic credential is a BA in a different part of biology (with a double major in computer science). In a world with a functional medical system no one would listen to me. 

Tl;dr povidone iodine probably reduces viral load when used in the mouth or nose, with corresponding decreases in symptoms and infectivity. The effect size could be as high as 90% for prophylactic use (and as low as 0% when used in late illness), but is probably much smaller. There is a long tail of side-effects. No study I read reported side effects at clinically significant levels, but I don’t think they looked hard enough. There are other gargle formulas that may have similar benefits without the risk of side effects, which are in my queue to research.

Benefits

Math

One paper found a 90% decrease in salivary viral load after mouthwash use (which probably overestimates the effect). Another found a 90% reduction in bad outcomes, with treatment (in mouth, nose, and eyes) starting soon after diagnosis. I suspect both of these are overestimates, but (1) a 90% reduction is a fantastic upper bound to have, and (2) neither of these looked at prophylactic use. A third study found a significant reduction in viral RNA after usage, but did not translate that into viral load or outcomes.

I feel like if povidone iodine was actually that good we’d have heard about it before. OTOH mouthwash formulations are barely available in the US, and most of these studies were in Asia, so maybe it went to fixation there years ago and the west is just catching up. 

So I’m going to call this a 9-45% reduction in illness time × intensity when used after symptom onset. Use before onset ought to do better; my wild ass guess is up to 90%.

One reason I think earlier use is better is that, at least with covid, most of the real damage happens when the virus reaches the lungs. If iodine gargles can form a firewall that prevents an upper respiratory infection from becoming a lower respiratory infection, you’ve prevented most (although not all) of the worst outcomes.

Papers

I livetweeted every paper I read, collected here. I don’t want to brag, but those tweets were very popular among ladies with large boobs and 10 numbers in their twitter handles. So if that’s your type you should definitely check out those threads. Everyone else will probably find them tedious, so I’m going to summarize the most relevant papers here.

Estimating salivary carriage of severe acute respiratory syndrome coronavirus 2 in nonsymptomatic people and efficacy of mouthrinse in reducing viral load: A randomized controlled trial

This study had participants rinse their mouth with one of four mouthwashes, and compared the pre-mouthwash salivary viral load with the viral load 15 and 45 minutes later. The overall effect was very strong: 3 of the washes had a 90% total reduction, and the loser of the bunch still had a 70% reduction (error bars fairly large). 

Note that despite the title, they only gave mouthwashes to participants with symptoms.

My guess is this is an overestimate of impact, because I expect an oral rinse to have a larger effect on the virus in saliva than on the virus inside cells. I wish they’d tested 4-6 hours later, after the virus had had some time to regrow.

Effect of 1% Povidone Iodine Mouthwash/Gargle, Nasal and Eye Drop in COVID-19 patient 

On one hand, this paper features significant ESL issues, missing data, terrible presentation of the data it does have, and was published in a no-name journal. On the other hand, it had one of the best study designs and 30x as many participants as the other studies. I’d love to discard this paper but there aren’t better options.

We see an almost 90% reduction in testing positive on the third day. I suspect that overstates the benefit, because the rinse lowers viral load in saliva and nasal fluid more than it lowers cellular load, so let’s look at outcomes:

90% reduction in hospitalization, 85% reduction in oxygen use, and  88% reduction in death. 

I was skeptical of these numbers at first, especially because they only tell you the total number of people in each age/sex group in the study, and the number of people in each group with a bad outcome. Their percentages also don’t work out properly, making it hard to see the real impact.

Luckily almost everyone in the control group was still PCR positive on day 3, which is almost as good as having a participant count. The number of control participants still sick on day 3 is indeed about half of each demographic group, which is what you’d expect if assignment was even. This doesn’t rule out trickier stuff like putting people at the higher end of their age band in the control group, but it’s a good deal better than that one paper where the youngest person in the control group was a year younger than the oldest person in the treatment group.

The short-term effect of different chlorhexidine forms versus povidone iodine mouth rinse in minimizing the oral SARS-CoV-2 viral load: An open label randomized controlled clinical trial study

I originally ignored this paper, because it only reported Ct values and not outcomes or viral load.* However the previous two papers are from the same author and have shockingly concordant results, and I wanted a second opinion. 

[*Ct value = how many cycles the PCR machine has to run on a sample before the signal crosses a particular threshold. This corresponds to viral load, but the relationship is complicated and variable. A higher Ct value means a lower viral load.]

The most important finding is that Ct went up by 3.3 (S genes) and 4.4 (E genes). 
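To translate that into viral load terms: under the textbook assumption that each PCR cycle doubles the signal (real assays are somewhat less efficient, so treat this as order-of-magnitude), the fold reduction is roughly 2^ΔCt. Here’s a minimal sketch of that conversion; the Ct numbers are the paper’s, the efficiency assumption is mine.

    # Rough conversion from a Ct increase to a fold reduction in viral RNA,
    # assuming ideal PCR efficiency (signal doubles every cycle).
    for gene, delta_ct in [("S", 3.3), ("E", 4.4)]:
        fold_reduction = 2 ** delta_ct
        percent_reduction = (1 - 1 / fold_reduction) * 100
        print(f"{gene} gene: ~{fold_reduction:.0f}x less RNA (~{percent_reduction:.0f}% reduction)")
    # S gene: ~10x less RNA (~90% reduction)
    # E gene: ~21x less RNA (~95% reduction)

Which, if you squint, lands right on top of the ~90% reductions from the other two papers.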

N=12 so I’m not thrilled with this study, but pickings are slim. 

Side Effects, Or: Should I just gargle iodine all the time then?

Barring very specific circumstances, I wouldn’t. There are several issues that give me pause about long term continuous use.

Hypothyroidism

Povidone iodine skin washes can cause hypothyroidism in infants. Among adults, many studies found increases in Thyroid Stimulating Hormone (an indicator of thyroid problems, but not itself terrible), but not in T3 or T4 (which are directly causal to outcomes). These studies tend to be small, and in some cases used the wrong statistical test and missed a long tail clearly visible in their plots, so I assume there exist people for whom this creates a clinically significant effect, especially after prolonged use.

I didn’t include this paper when calculating health benefits, because its control group was too different from its treatment group. But it’s still potentially useful for tracking side effects (although at n=12, it’s still pretty limited). It found a 50% increase in TSH after a week of treatment, but no change in T3 or T4. TSH returned to normal within 12 days of ceasing treatment. That’s not worrisome for healthy people on its own, but could easily become worrisome with longer use or in a vulnerable patient.

Tissue damage could leave you worse off?

There is a long history of aggressive use of topical antimicrobial treatments leaving users worse off due to long term tissue irritation. This is why proper wound treatment changes every decade. That same study looked at this and found no increase in cellular irritation in the throat after six months of use. It’s possible they didn’t look hard enough, or didn’t have a sufficient sample size to catch the effect. It’s also possible the species that invented ghost peppers for fun has a throat surface built to handle irritation, and iodine is too weak to hurt us.

Oral microbiome damage could leave you worse off?

No one studied this at all, but it looks to me like an obvious failure point. I already use oral probiotics, but if I didn’t I would add them in while using iodine.

How to use

0.5% povidone iodine is sold under the brand name Betadine. You can also buy more concentrated povidone iodine and dilute it yourself. You might be tempted to use a higher concentration, but: 1. Remember the long tail of side-effects. 2. There’s some weird evidence that higher concentrations are less effective. I didn’t dig into this very weird claim but you probably should if you plan to try it. 
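If you do dilute your own, the arithmetic is just C1V1 = C2V2. Here’s a minimal sketch; the 10% stock concentration is only an example (check the label on whatever you actually buy, and note that not every povidone iodine product is formulated for use in the mouth).

    # Standard dilution arithmetic: stock_conc * stock_vol = target_conc * total_vol.
    # The 10% stock concentration is just an example, not a recommendation.
    def stock_needed(target_conc, total_vol_ml, stock_conc):
        """Return (mL of stock, mL of water) to make total_vol_ml at target_conc."""
        stock_vol = total_vol_ml * target_conc / stock_conc
        return stock_vol, total_vol_ml - stock_vol

    print(stock_needed(target_conc=0.5, total_vol_ml=100, stock_conc=10.0))
    # (5.0, 95.0): 5 ml of 10% stock plus 95 ml of water gives roughly 0.5%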

The Betadine bottle recommends gargling 10ml for 30s, 4x/day. The short term studies used 4-6x/day. Spacing that out is a nontrivial attention tax, so when I was sick I just put the bottle on my bathroom sink and used it every time I used the bathroom. This probably comes out to more than 6x/day (especially when I’m sick and chugging fluids), but I also didn’t use a full 10ml and rarely made it to a full 30s, so hopefully it balanced out.

More Data Needed

The state of existing knowledge around iodine gargles is poor. This is especially frustrating because I don’t think it should be that challenging to gather more. I’m toying with a plan to fix this, but will publish separately since it’s not specific to iodine. 

For financial support I would like to thank my Patreon supporters and Lightspeed Grants.

Nitric oxide for covid and other viral infections

Epistemic status: I spent about 5 hours looking into this, and the next day developed covid myself.  I did a bit more research plus all of the writing while sick. So in addition to my normal warning that I have no medical credentials, you should keep in mind that this knowledge may be cursed. 

Introduction

Nitric Oxide Nasal Spray (NONS), sold under the brand name Enovid, delivers nitric oxide, a reactive compound that kills viruses (and, I suspect, taxes your nasal tissue). It has recently been tested and marketed as a treatment for covid. The protocol I found in papers was 2 sprays per nostril every 2-3 hours, starting after you develop symptoms. Enovid’s instructional pamphlets say twice per day, also after you get sick. This seems a little late to me.

I suspect the real power of NONS lies in use before you develop symptoms, ideally as close to exposure as possible. This is difficult because you don’t know when you would have gotten sick, and I suspect there are costs to indefinite use (see “Should I just use this all the time?” below). I initially thought (and told people, as a tentative guess) that one round of 4 total sprays after a high risk event was a good trade-off. After doing the math for this post, that intervention seems much less helpful to me, and picking the right length of post-exposure prophylaxis depends on equations for which we lack good numbers. I pulled some numbers out of my ass for this post, but you should not trust them.

My guess is NONS is minimally useful once covid has reached the throat, unless you combine it with a separate disinfectant of the throat. I hope to write up a report on one such disinfectant soon, although TBH it’s not looking good. 

NONS can lead to false negatives on any test based on a nasal swab, because it breaks the relationship between nasal viral load and overall load.

How does it work?

First, nitric oxide is highly reactive, which makes it destructive to anything organic. Virions are fragile to this kind of direct attack, and certain immune cells will produce nitric oxide to kill bacteria, viruses, and your own diseased cells.

First-and-a-half, nitric oxide may alter the pH of your nose, and this effect may last well past the death of the original NO molecules. This was an aside in one paper, and I haven’t followed up on it. 

Second, nitric oxide is a signaling molecule within your body, probably including but definitely not limited to the immune system. I assume the immune system uses it as a signal because it serves a functional purpose. For the rest of the body, the selling point appears to be that it crosses membranes easily but dies quickly, making it useful when the body wants a signal to fade fast. Viagra works by indirectly increasing your body’s synthesis of nitric oxide.

How well does it work?

Good question, and it depends a lot on how you use it.

My best guess is that a single application (2 sprays in each nostril) of Enovid ~halves the viral load in your nose. Covid doubles in 36 hours, so that’s how much extra time you’ve bought your immune system to ramp up defenses. If you follow the more aggressive protocols in the literature and apply that treatment 6 times per day, you wipe out 95% of the covid in your nose. I will attempt to translate this into an efficacy estimate in that mythical future; in the meantime siderea has a write-up on why reducing viral load is valuable even if you can’t destroy it entirely.

Sometimes you will see very impressive graphs for Enovid’s impact; these are inevitably looking at the results of nasal swabs. Since even in the best case scenario NONS doesn’t affect spread once an infection has reached the throat, this doesn’t feel very relevant to me. 

Sometimes you will see very unimpressive graphs, from the rare studies that looked at transmission or symptoms. These effects are so weak, in such small studies, that I consider them essentially a null result.

…Except that these studies all started treatment days after symptoms emerged. In one case it was a minimum of 4 days. Another said “0-3 days” after symptoms, but since it takes time to see a doctor and be recruited into a study, I expect the average to be on the high end of that. Additionally, both studies showed a downward slope in viral load in both treatment and control groups. This is a big deal because I expect the largest effect to come if NONS is used before exponential growth really takes off. If they’re seeing a decline in their control arm, they either administered treatment too late or their placebo isn’t inert.

[I think this reasoning holds even if immune overreaction is part of the problems with long covid. Long covid is correlated with severity of initial infection.]

To figure out the impact of prophylactic use, I’m going to have to get, uh, speculative. Before I do that, let me dig into exactly what the data says.

Effect size on nasal viral load

This has very solid data: even under the unfavorable circumstances of a strong infection, a day of usage drops nasal viral load by 90-95%.

Paper 1 says 95% reduction in one day, 99% in two. They took samples from the nose and throat but don’t clarify which location that applies to. If I had the energy I’d be very angry about that right now. 

(Their placebo was a saline spray, which other people claim is an antimicrobial in its own right, so this may understate the effect)

Paper 2 finds an adjusted 93-98% decline after 1 day’s use of NONS. 

Effect on symptoms/transmission, as measured by poorly designed studies

Paper 1 did track time to cure, but with a 40% response rate on a sample size of 40 in the treatment arm I can’t bring myself to care.

Paper 2 reported a couple of metrics. One is “Time to cure (as defined by PCR results)” which is still worthless because it’s still using a nasal swab. Another is clinician-assessed improvement; this effect seemed real but not huge. 

They also checked for spread to close contacts, but not very well. Contacts had to take the initiative to get tested themselves, and AFAICT they didn’t establish if they were infected before or after treatment started.  You can try to factor that out by only looking at the last day of recorded data, but the difference appears to start on day 1 of treatment, when there absolutely shouldn’t be an effect. 

Other Diseases

NONS has been studied against other infections and I fully meant to look at that data. Now that I have actual covid I consider it kind of a race to get this post out before I’m too tired, so this will come later if at all.

My wild ass guess of impact

What does a single dose do? I did a very stupid model assuming six doses over 24 hours each having the same proportionate effect, and found that halving viral load with each application was a perfect match with the data. I expect the first dose of the day has a larger effect and each one is a little less effective until you sleep and the virus has some time to marshal forces, but barring better data I’m going to treat Enovid as rolling back one doubling. 

[I want to emphasize I didn’t massage this to make the math easier. I tried .9 in my naive spreadsheet knowing it wouldn’t work, and then tried 0.5 to find it perfectly matched the data]
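For the curious, here’s a minimal sketch of the kind of naive model I mean. The specific assumptions (doses evenly spaced every 4 hours, smooth exponential growth with a 36-hour doubling time) are mine, not from the papers, but a 0.5 per-dose multiplier lands in the 93-98% single-day reduction range the studies report, and 0.9 clearly doesn’t.

    # Naive model: viral load grows exponentially (36h doubling time) and each
    # spray multiplies the current load by a fixed factor. The 4-hour spacing
    # and smooth growth are simplifying assumptions of mine.
    def remaining_after_one_day(per_dose_multiplier, doses=6, doubling_time_h=36):
        load = 1.0
        hours_between_doses = 24 / doses
        for _ in range(doses):
            load *= 2 ** (hours_between_doses / doubling_time_h)  # regrowth between doses
            load *= per_dose_multiplier                           # effect of one spray
        return load

    for m in (0.9, 0.5):
        print(f"per-dose multiplier {m}: ~{(1 - remaining_after_one_day(m)) * 100:.0f}% one-day reduction")
    # per-dose multiplier 0.9: ~16% one-day reduction
    # per-dose multiplier 0.5: ~98% one-day reduction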

If my covid infection starts in the nose and I take a full course of treatment immediately after exposure, <10% chance I get sick. But that’s unachievable without constant use, which I think is a bad idea (see below).

What if you’re infected, but only in your nose? It’s a 95% reduction per day. It’s anyone’s guess how much that reduces the chance of spread to your throat; I’d say 95% is the upper bound, and am very arbitrarily setting 50% as the lower bound for the first day (this time I am trying to make the math easier). But you’re also reducing the cumulative load; on day three (after two days of treatment), your viral load is 99% lower than it would otherwise be, before you take any new doses.

I suspect the real killer app here is combining Enovid with a throat disinfectant, and am prioritizing a review of at least one throat disinfectant in a future post. 

Can I get this effect for free, without the painful stinging or logistical hassle of a nasal spray?

Maybe. Your nose already naturally produces nitric oxide, and you can increase this by 15x by humming. I haven’t been able to find the dosage of a single spray of Enovid to compare, but humming doesn’t sting so I assume it’s a lot less. On the other hand, you can hum more often than six times per day. On the third hand, I can’t tell if humming causes you to produce more NO or just release it faster, in which case chronic humming might deplete your stores. 

A quick search found multiple published articles suggesting this, but none actually studying it. The cynic in me says this is because there’s no money in it, but this study would take pennies to run and be so high impact if it worked that I suspect this is less promising than it seems. 

Thank you to Michael Tontchev on twitter for pointing me towards humming.

Should I just use this all the time?

I don’t regularly use Enovid, despite having a shit immune system. The history of treatments like this is that long term use causes more problems than it solves. They dry out mucous membranes, or kill your own immune cells. I think the rest of you should seriously consider developing a humming habit; alas, I have nerve damage in my jaw that makes vibration painful, so it’s not an option for me.

I do think there’s a case for prophylactic use during high risk situations like conferences or taking care of a sick loved one. 

Where can I buy Enovid?

Amazon has it, but at $100/bottle it’s quite expensive. You can get it from other websites for half the price but with longer shipping times; my friend used israelpharm.com and confirms he got his shipment.

Inositol Non-Results

Three months ago I suggested people consider inositol for treating combined vague mood issues and vague stomach issues. I knew a few people who’d really benefited from it, and when one talked about it on his popular Twitter account several more people popped up thanking him for the suggestion, because it fixed their lives too. But those reports didn’t come with a denominator, which made it hard to estimate the success rate; I was hoping mentioning it on my blog and doing a formal follow-up to capture the non-responders would give a more accurate number.

Unfortunately, I didn’t get enough people to do anything useful. I received 7 responses, of which 3 didn’t have digestive issues and thus weren’t really the target. The low response rate might be a consequence of giving the wrong link in the original follow-up post, or maybe it just wasn’t that interesting. I’m reporting the results anyway out of a sense of civic virtue. 

Of those 4 remaining responses:

  • 2 rated it exactly 5 out of 10 (neutral)
  • 1 rated it as 6, which was not strong enough for them to try it a third time.
  • 1 rated it as 3: not bad enough that they spontaneously noticed a problem, but they did detailed mood tracking and the linear regression clearly showed a negative effect.

That response rate is really too low to prove anything, except that anything with a real effect can hurt you, and that detailed data is valuable. So for now we just have David’s estimate that 5% of the people he inspired to take inositol benefited from it.

Follow-up survey: inositol

Two months ago I wrote about inositol as a treatment that occasionally works for anxiety and depression, especially when the user also has weird digestive issues (not medical advice, I am not a doctor). If that inspired you to try inositol, I would love it if you would fill out this 5-7 question survey about your experience. This follow-up data helps other people considering inositol, and is broadly helpful to me in figuring out what luck based medicine looks like.

And to the four people who already filled this out: gold star for epistemic virtue. 

The survey doesn’t allow for a lot of detail, which I know is painful for some people (it’s me. I’m people). If you would like to share more, feel free to write up as much as you’d like in a comment here, or share a link detailing your experience.

Elsewhere in luck based medicine: it was a dude in my survey, but I met a few more people who really love the Apollo Neuro. They are all the kind of people who already know what “somatically aware” or “embodiment” mean, so this is some support for my theory that that’s a prerequisite. It’s still an open question if you need that background for the Neuro to be beneficial, to notice it’s beneficial, or to stick with it long enough that it has time to be beneficial. 

The Apollo app has gotten even worse since last time I wrote. Every time I open it it bugs me to enable notifications, a permission it absolutely does not need. 

Grant Making and Grand Narratives

Another inside baseball EA post

The Lightspeed application asks:  “What impact will [your project] have on the world? What is your project’s goal, how will you know if you’ve achieved it, and what is the path to impact?”

LTFF uses an identical question, and SFF puts it even more strongly (“What is your organization’s plan for improving humanity’s long term prospects for survival and flourishing?”). 

I’ve applied to all three of these at various points, and I’ve never liked this question. It feels like it wants a grand narrative of an amazing, systemic project that will measurably move the needle on x-risk. But I’m typically applying for narrowly defined projects, like “Give nutrition tests to EA vegans and see if there’s a problem”. I think this was a good project. I think this project is substantially more likely to pay off than underspecified alignment strategy research, and arguably has as good a long tail. But when I look at “What impact will [my project] have on the world?” the project feels small and sad. I feel an urge to make things up, and to express far more certainty about far more impact than I believe. Then I want to quit, because lying is bad but listing my true beliefs feels untenable.

I’ve gotten better at this over time, but I know other people with similar feelings, and I suspect it’s a widespread issue (I encourage you to share your experience in the comments so we can start figuring that out).

I should note that the pressure for grand narratives has good points; funders are in fact looking for VC-style megahits. I think that narrow projects are underappreciated, but for purposes of this post that’s beside the point: I think many grantmakers are undercutting their own preferred outcomes by using questions that implicitly push for a grand narrative. I think they should probably change the form, but I also think we applicants can partially solve the problem by changing how we interact with the current forms.

My goal here is to outline the problem, gesture at some possible solutions, and create a space for other people to share data. I didn’t think about my solutions very long, I am undoubtedly missing a bunch and what I do have still needs workshopping, but it’s a place to start. 
 

More on the costs of the question

Pushes away the most motivated people

Even if you only care about subgoal G instrumentally, G may be best accomplished by people who care about it for its own sake. Community building (real building, not a euphemism for recruitment) benefits from knowing the organizer cares about participants and the community as people and not just as potential future grist for the x-risk mines.* People repeatedly recommended a community builder friend of mine apply for funding, but they struggled because they liked organizing for its own sake, and justifying it in x-risk terms felt bad. 

[*Although there are also downsides to organizers with sufficiently bad epistemics.]

Additionally, if G is done by someone who cares about it for its own sake, then it doesn’t need to be done by someone who’s motivated by x-risk. Highly competent, x-risk motivated people are rare and busy, and we should be delighted by opportunities to take things off their plate.
 

Vulnerable to grift

You know who’s really good at creating exactly the grand narrative a grantmaker wants to hear? People who feel no constraint to be truthful. You can try to compensate for this by looking for costly signals of loyalty or care, but those have their own problems. 

 

Punishes underconfidence

Sometimes people aren’t grifting, they really really believe in their project, but they’re wrong. Hopefully grantmakers are pretty good at filtering out those people. But it’s fairly hard to correct for people who are underconfident, and impossible to correct for people who never apply because they’re intimidated. 

Right now people try to solve the second problem by loudly encouraging everyone to apply to their grant. That creates a lot of work for evaluators, and I think it is bad for the people with genuinely mediocre projects who will never get funding. You’re asking them to burn their time so that you don’t miss someone else’s project. Having a form that allows for uncertainty and modest goals is a more elegant solution.
 

Corrupts epistemics

Not that much. But I think it’s pretty bad if people are forced to choose between “play the game of exaggerating impact” and “go unfunded”. Even if the game is in fact learnable, it’s a bad use of their time and weakens the barriers to lying in the future. 

Pushes projects to grow beyond their ideal scope

Recently I completed a Lightspeed application for a lit review on stimulants. I felt led by the form to create a grand narrative of how the project could expand, including developing a protocol for n of 1 tests so individuals could tailor their medication usage. I think that having that protocol would be great and I’d be delighted if someone else developed it, but I don’t want to develop it myself. I noticed the feature creep and walked it back before I submitted the form, but the fact that the form pushes this is a cost.  

This one isn’t caused by the impact question alone. The questions asking about potential expansion are a much bigger deal, but would also be costlier to change. There are many projects and organizations where “what would you do with more money?” is a straightforwardly important question.
 

Rewards cultural knowledge independent of merit

There’s nothing stopping you from submitting a grant with the theory of change “T will improve EA epistemics”, and not justifying past that. I did that recently, and it worked. But I only felt comfortable doing that because I had a pretty good model of the judges and because it was a Lightspeed grant, which explicitly says they’ll ask you if they have follow-up questions. Without either of those I think I would have struggled to figure out where to stop explaining. Probably there are equally good projects from people with less knowledge of the grantmakers, and it’s bad that we’re losing those proposals. 

Brainstorming fixes

I’m a grant-applier, not a grant-maker. These are some ideas I came up with over a few hours. I encourage other people to suggest more fixes, and grant-makers to tell us why they won’t work or what constraints we’re not aware of. 
 

  • Separate “why do you want to do this?” or “why do you think this is good?” from “how will this reduce x-risk?”. Just separating the questions will reduce the epistemic corruption. 
  • Give a list of common instrumental goals that people can treat as terminal for the purpose of this form. They still need to justify the chain between their action and that instrumental goal, but they don’t need to justify why achieving that goal would be good.
    • E.g. “improve epistemic health of effective altruism community”, or “improve productivity of x-risk researchers”.
    • This opens opportunities for goodharting, or for imprecise description leaving you open to implementing bad versions of good goals. I think there are ways to handle this that end up being strongly net beneficial.
    • I would advocate against “increase awareness” and “grow the movement” as goals. Growth is only generically useful when you know what you want the people to do. Awareness of specific things among specific people is a more appropriate scope. 
    • Note that the list isn’t exhaustive, and if people want to gamble on a different instrumental goal that’s allowed. 
  • Let applicants punt to others to explain the instrumental impact of what is to them a terminal goal.
    • My community organizer friend could have used this. Many people encouraged them to apply for funding because they believed the organizing was useful to x-risk efforts. Probably at least a few were respected by grantmakers and would have been happy to make the case. But my friend felt gross doing it themselves, so it created a lot of friction in getting very necessary financing.
  • Let people compare their projects to others. I struggle to say “yeah if you give me $N I will give you M microsurvivals”. How could I possibly know that? But it often feels easy to say “I believe this is twice as impactful as this other project you funded”, or “I believe this is in the nth percentile of grants you funded last year”.
    • This is tricky because grants don’t necessarily mean a funder believes a project is straightforwardly useful. But I think there’s a way to make this doable. 
    • E.g. funders could give examples with percentiles. I think Open Phil did something like this in the last year, although I can’t find it now. The lower percentiles could be hypothetical, to avoid implicit criticism. 
  • Lightspeed’s implication that they’ll ask follow-up questions is very helpful. With other forms there’s a drive to cover all possible bases very formally, because I won’t get another chance. With Lightspeed it felt available to say “I think X is good because it will lead to Y”, and let them ask me why Y was good if they don’t immediately agree.
  • When asking about impact, lose the phrase “on the world”. The primary questions are what the goal is, how they’ll know if it’s accomplished, and what the feedback loops are. You can have an optional question asking for the effects of meeting the goal.
    • I like the word “effects” more than “impact”, which is a pretty loaded term within EA and x-risk. 
  • A friend suggested asking “why do you want to do this?”, and having “look, I just like organizing social gatherings” be an acceptable answer. I worry that this will end up being a fake question where people feel the need to create a different grand narrative about how much they genuinely value their project for its own sake, but maybe there’s a solution to that. 
  • Maybe have separate forms for large ongoing organizations and narrow projects done by individuals. There may not be enough narrow projects to justify this, and it might be infeasible to create separate forms for all types of applicants, but I think it’s worth playing with. 
  • [Added 7/2]: Ask for 5th/50th/99th/99.9th percentile outcomes, to elicit both dreams and outcomes you can be judged for failing to meet.
  • [Your idea here]



 

I hope the forms change to explicitly encourage things like the above list, but  I don’t think applicants need to wait. Grantmakers are reasonable people who I can only imagine are tired of reading mediocre explanations of why community building is important. I think they’d be delighted to be told “I’m doing this because I like it, but $NAME_YOU_HIGHLY_RESPECT wants my results” (grantmakers: if I’m wrong please comment as soon as possible).   

Grantmakers: I would love it if you would comment with any thoughts, but especially what kinds of things you think people could do themselves to lower the implied grand-narrative pressure on applications. I’m also very interested in why you like the current forms, and what constraints shaped them.

Grant applicants: I think it will be helpful to the grantmakers if you share your own experiences, how the current questions make you feel and act, and what you think would be an improvement. I know I’m not the only person who is uncomfortable with the current forms, but I have no idea how representative I am. 

Truthseeking when your disagreements lie in moral philosophy

[Status: latest entry in a longrunning series]

My last post on truthseeking in EA vegan advocacy got a lot of comments, but there’s one in particular I want to highlight as highly epistemically cooperative. I have two motivations for this:

  • having just spotlighted some of the most epistemically uncooperative parts of a movement, it feels just to highlight good ones
  • I think some people will find it surprising that I call this comment highly truthseeking and epistemically cooperative, which makes it useful for clarifying how I use those words. 

In a tangential comment thread, I asked Tristan Williams why he thought veganism was more emotionally sustainable than reducetarianism. He said:


Yeah sure. I would need a full post to explain myself, but basically I think that what seems to be really important when going vegan is standing in a certain sort of loving relationship to animals, one that isn’t grounded in utility but instead a strong (but basic) appreciation and valuing of the other. But let me step back for a minute

I guess the first time I thought about this was with my university EA group. We had a couple of hardcore utilitarians, and one of them brought up an interesting idea one night. He was a vegan, but he’d been offered some mac and cheese, and in similar thinking to above (that dairy generally involves less suffering than eggs or chicken for ex) he wondered if it might actually be better to take the mac and donate the money he would have spent to an animal welfare org. And when he roughed up the math, sure enough, taking the mac and donating was somewhat significantly the better option.  

But he didn’t do it, nor do I think he changed how he acted in the future. Why? I think it’s really hard to draw a line in the sand that isn’t veganism that stays stable over time. For those who’ve reverted, I’ve seen time and again a slow path back, one where it starts with the less bad items, cheese is quite frequent, and then naturally over time one thing after another is added to the point that most wind up in some sort of reduceitarian state where they’re maybe 80% back to normal (I also want to note here, I’m so glad for any change, and I cast no stones at anyone trying their best to change). And I guess maybe at some point it stops being a moral thing, or becomes some really watered down moral thing like how much people consider the environment when booking a plane ticket. 

I don’t know if this helps make it clear, but it’s like how most people feel about harm to younger kids. When it comes to just about any serious harm to younger kids, people are generally against it, like super against it, a feeling of deep caring that to me seems to be one of the strongest sentiments shared by humans universally. People will give you some reasons for this i.e. “they are helpless and we are in a position of responsibility to help them” but really it seems to ground pretty quickly in a sentiment of “it’s just bad”. 

To have this sort of love, this commitment to preventing suffering, with animals to me means pretty much just drawing the line at sentient beings and trying to cultivate a basic sense that they matter and that “it’s just bad” to eat them. Sure, I’m not sure what to do about insects, and wild animal welfare is tricky, so it’s not nearly as easy as I’m making it seem. And it’s not that I don’t want to have any idea of some of the numbers and research behind it all, I know I need to stay up to date on debates on sentience, and I know that I reference relative measures of harm often when I’m trying to guide non-veg people away from the worst harms. But what I’d love to see one day is a posturing towards eating animals like our posturing towards child abuse, a very basic, loving expression that in some sense refuses the debate on what’s better or worse and just casts it all out as beyond the pale. 

And to try to return to earlier, I guess I see taking this sort of position as likely to extend people’s time spent doing veg-related diets, and I think it’s just a lot trickier to have this sort of relationship when you are doing some sort of utilitarian calculus of what is and isn’t above the bar for you (again, much love to these people, something is always so much better than nothing). This is largely just a theory, I don’t have much to back it up, and it would seem to explain some cases of reversion I’ve seen but certainly not all, and I also feel like this is a bit sloppy because I’d really need a post to get at this hard to describe feeling I have. But hopefully this helps explain the viewpoint a bit better, happy to answer any questions 🙂

It’s true that this comment doesn’t use citations or really many objective facts. But what it does have is: 

  • A clear description of what the author believes 
  • Clear identifiers of the author’s cruxes for those beliefs
  • It doesn’t spell out every possible argument but does leave hooks, so if I’m confused it’s easy to ask clarifying questions
  • Disclaimers against common potential misinterpretations. 
  • Forthright description of its own limits
  • Proper hedging and sourcing on the factual claims it does make

This is one form of peak epistemic cooperation. Obviously it’s not the only form, sometimes I want facts with citations and such, but usually only after philosophical issues like this one have been resolved. Sometimes peak truthseeking looks like sincerely sharing your beliefs in ways that invite other people to understand them, which is different than justifying them. And I’d like to see more of that, everywhere.

PS. I know I said the next post would be talking about epistemics in the broader effective altruism community. Even as I wrote that sentence I thought “Are you sure? That’s been your next post for three or four posts now, writing this feels risky”, and I thought “well I really want the next post out before EAG Boston and that doesn’t leave time for any more diversions, we’re already halfway done and caveating “next” would be such a distraction…”. Unsurprisingly I realized the post was less than halfway done and I can’t get the best version done in time for EAG Boston, at which point I might as well write it at a leisurely pace

PPS. Tristan saw a draft of this post before publishing and had some power to veto or edit it. Normally I’d worry that doing so would introduce some bias, but given the circumstances it felt like the best option. I don’t think anyone can accuse me of being unwilling to criticize EA vegan advocacy epistemics, and I was worried that hearing “hey I want to quote your pro-veganism comment in full in a post, don’t worry it will be complimentary, no I can’t show you the post you might bias it” would be stressful.

EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem

Introduction

Effective altruism prides itself on truthseeking. That pride is justified in the sense that EA is better at truthseeking than most members of its reference category, and unjustified in that it is far from meeting its own standards. We’ve already seen dire consequences of the inability to detect bad actors who deflect investigation into potential problems, but by its nature you can never be sure you’ve found all the damage done by epistemic obfuscation because the point is to be self-cloaking. 

My concern here is for the underlying dynamics of  EA’s weak epistemic immune system, not any one instance. But we can’t analyze the problem without real examples, so individual instances need to be talked about. Worse, the examples that are easiest to understand are almost by definition the smallest problems, which makes any scapegoating extra unfair. So don’t.

This post focuses on a single example: vegan advocacy, especially around nutrition. I believe vegan advocacy as a cause has both actively lied and raised the cost for truthseeking, because they were afraid of the consequences of honest investigations. Occasionally there’s a consciously bad actor I can just point to, but mostly this is an emergent phenomenon from people who mean well, and have done good work in other areas. That’s why scapegoating won’t solve the problem: we need something systemic. 

In the next post I’ll do a wider but shallower review of other instances of EA being hurt by a lack of epistemic immune system. I already have a long list, but it’s not too late for you to share your examples

Definitions

I picked the words “vegan advocacy” really specifically. “Vegan” sometimes refers to advocacy and sometimes to just a plant-exclusive diet, so I added “advocacy” to make it clear.

I chose “advocacy” over “advocates” for most statements because this is a problem with the system. Some vegan advocates are net truthseeking and I hate to impugn them. Others would like to be epistemically virtuous but end up doing harm due to being embedded in an epistemically uncooperative system. Very few people are sitting on a throne of plant-based imitation skulls twirling their mustache thinking about how they’ll fuck up the epistemic commons today. 

When I call for actions I say “advocates” and not “advocacy” because actions are taken by people, even if none of them bear much individual responsibility for the problem. 

I specify “EA vegan advocacy” and not just “vegan advocacy” not because I think mainstream vegan advocacy is better, but because 1. I don’t have time to go after every wrong advocacy group in the world. 2. Advocates within Effective Altruism opted into a higher standard. EA has a right and responsibility to maintain the standards of truth it advocates, even if the rest of the world is too far gone to worry about. 

Audience

If you’re entirely uninvolved in effective altruism you can skip this, it’s inside baseball and there’s a lot of context I don’t get into.

How EA vegan advocacy has hindered truthseeking

EA vegan advocacy has both pushed falsehoods and punished people for investigating questions it doesn’t like. It manages this even for positions that 90%+ of effective altruism and the rest of the world agree with, like “veganism is a constraint”. I don’t believe its arguments convince anyone directly, but end up having a big impact by making inconvenient beliefs too costly to discuss. This means new entrants to EA are denied half of the argument, and harm themselves due to ignorance.

This section outlines the techniques I’m best able to name and demonstrate. For each technique I’ve included examples. Comments on my own posts are heavily overrepresented because they’re the easiest to find; “go searching through posts on veganism to find the worst examples” didn’t feel like good practice. I did my best to quote and summarize accurately, although I made no attempt to use a representative sample. I think this is fair because a lot of the problem lies in the fact that good comments don’t cancel out bad, especially when the good comments are made in parallel rather than directly arguing with the bad. I’ve linked to the source of every quote and screen shot, so you can (and should) decide for yourself. I’ve also created a list of all of my own posts I’m drawing from, so you can get a holistic view. 

My posts:

I should note I quote some commenters and even a few individual comments in more than one section, because they exhibit more than one problem. But if I refer to the same comment multiple times in a row I usually only link to it once, to avoid implying more sources than I have. 

My posts were posted on my blog, LessWrong, and EAForum. In practice the comments I drew from came from LessWrong (white background) and EAForum (black background).  I tried to go through those posts and remove all my votes on comments (except the automatic vote for my own comments) so that you could get an honest view of how the community voted without my thumb on the scale, but I’ve probably missed some, especially on older posts. On the main posts, which received a lot of traffic, I stuck to well-upvoted comments, but I included some low (but still positive) karma comments from unpopular posts. 

The goal here is to make these anti-truthseeking techniques legible for discussion, not develop complicated ways to say “I don’t like this”, so when available I’ve included counter examples. These are comments that look similar to the ones I’m complaining about, but are fine or at least not suffering from the particular flaw in that section. In doing this I hope to keep the techniques’ definitions narrow.

Active suppression of inconvenient questions

A small but loud subset of vegan advocacy will say outright that you shouldn’t say true things, because it leads to outcomes they dislike. This accusation is even harsher than “not truthseeking”, and would normally be very hard to prove. If I say “you’re saying that because you care more about creating vegans than the health of those you create”, and they say “no I’m not”, I don’t really have a comeback. I can demonstrate that they’re wrong, but not what their motivation is. Luckily, a few people said the quiet part out loud. 

Commenter Martin Soto pushed back very hard on my first nutrition testing study. Finally I asked him outright if he thought it was okay to share true information about vegan nutrition. His response was quite thoughtful and long, so you should really go read the whole thing, but let me share two quotes

He goes on to say:

And in a later comment

EDIT 2023-10-03: Martin disputes my summary of his comments. I think it’s good practice to link to disputes like this, even though I stand by my summary. I also want to give a heads-up that I see his comments in the dispute thread as continuing the patterns I describe (which makes that thread a tax on the reader). If you want to dig into this, I strongly suggest you first read his original comments and come up with your own summary, so you can compare that to each of ours.

The charitable explanation here is that my post focuses on naive veganism, and Soto thinks that’s a made-up problem. He believes this because all of the vegans he knows (through vegan advocacy networks) are well-educated on nutrition. There are a few problems here, but the most fundamental is that enacting his desired policy of suppressing public discussion of nutrition issues with plant-exclusive diets will prevent us from getting the information to know if problems are widespread. My post and a commenter’s report on their college group are apparently the first time he’s heard of vegans who didn’t live and breathe B12. 

I have a lot of respect for Soto for doing the math and so clearly stating his position that “the damage to people who implement veganism badly is less important to me than the damage to animals caused by eating them”. Most people flinch away from explicit trade-offs like that, and I appreciate that he did them and own the conclusion. But I can’t trust his math because he’s cut himself off from half the information necessary to do the calculations. How can he estimate the number of vegans harmed or lost due to nutritional issues if he doesn’t let people talk about them in public?

In fact the best data I found on this was from Faunalytics, which found that ~20% of veg*ns drop out due to health reasons. This suggests to me a high chance his math is wrong and will lead him to do harm by his own standards.

EDIT 2023-10-04: Using Faunalytics numbers for self-reported health issues and improvements after quitting veg*nism, I calculated that 20% of veg*ns develop health issues. This number is sensitive to your assumptions; I consider 20% conservative but it could be an overestimate. I encourage you to read the whole post and play with my model, and of course read the original work.

Most people aren’t nearly this upfront. They will go through the motions of calling an idea incorrect before emphasizing how it will lead to outcomes they dislike. But the net effect is a suppression of the exploration of ideas they find inconvenient. 

This post on Facebook is a good example. Normally I would consider facebook posts out of bounds, especially ones this old (over five years). Facebook is a casual space and I want people to be able to explore ideas without worrying that they’re creating a permanent record that will be used against them. In this case I felt that because the post was permissioned to public and was a considered statement (rather than an off the cuff reply), the truth value outweighed the chilling effect. But because it’s so old and I don’t know the author’s current opinion, I’m leaving out their name and not linking to the post. 

The author is a midlist EA: I’d heard of them for other reasons, but they’re certainly not EA-famous. 

There are posts very similar to this one I would have been fine with, maybe even joyful about. You could present evidence against the claims that X is harmful, or push people to verify things before repeating them, or suggest we reserve the word poison for actual kill-you-dead molecules and not complicated compound constructions with many good parts and only weak evidence of mild long-term negative effects. But what they actually did was name-check the idea that X is fine before focusing on the harm to animals caused by repeating the claim, which is exactly what you’d expect if the health claims were true but inconvenient. I don’t know what this author actually believes, but I do know that focusing on the consequences when the facts are in question is not truthseeking.

A subtler version comes from the AHS-2 post. At the time of this comment the author, Rockwell, described herself as the leader of EA NYC and an advisor to philanthropists on animal suffering, so this isn’t some rando having a feeling. This person has some authority.

This comment more strongly emphasizes the claim that my beliefs are wrong, not just inconvenient. And if they’d written the counter-argument they promised, I’d be putting this in the counter-examples section. But it’s been three months and they have not written anything where I can find it, nor responded to my inquiries. So even if the literal claim were correct, she’s using a technique whose efficacy is independent of truth. 

Over on the Change My Mind post the top comment says that vegan advocacy is fine because it’s no worse than fast food or breakfast cereal ads

I’m surprised someone would make this comment. But what really shocks me is the complete lack of pushback from other vegan advocates. If I heard an ally describe our shared movement as no worse than McDonald’s, I would injure myself in my haste to repudiate them. 

Counter-Examples

This post on EAForum came out while I was finishing this post. The author asks if they should abstain from giving bad reviews to vegan restaurants, because it might lead to more animal consumption, which would be a central example of my complaint. But the comments are overwhelmingly “no, there’s not even a good consequentialist argument for that”, and the author appears to be taking that to heart. So from my perspective this is a success story.

Ignore the arguments people are actually making

I’ve experienced this pattern way too often.

Me: goes out of my way to say not-X in a post
Comment: how dare you say X! X is so wrong!
Me: here’s where I explicitly say not-X.
*crickets*

This is by no means unique to posts about veganism. “They’re yelling at me for an argument I didn’t make” is a common complaint of mine. But it happens so often, and so explicitly, in the vegan nutrition posts. Let me give some examples.

My post:

Commenter:

My post:

Commenters:

My post:

Commenter: 

My post: 

Commenter:

My post:

Commenter:

You might be thinking “well those posts were very long and honestly kind of boring, it would be unreasonable to expect people to read everything”. But the length and precision are themselves a response to people arguing with positions I don’t hold (and failing to update when I clarify). The only things I can do are spell out all of my beliefs or not spell out all of my beliefs, and either way ends with comments arguing against views I don’t have. 

Frame control/strong implications not defended/fuzziness

This is the hardest one to describe. Sometimes people say things, and I disagree, and we can hope to clarify that disagreement. But sometimes people say things and responding is like nailing jello to a wall. Their claims aren’t explicit, or they’re individually explicit but aren’t internally consistent, or play games with definitions. They “counter” statements in ways that might score a point in debate club but don’t address the actual concern in context. 

One example is the top-voted comment on LW on Change My Mind

Over a very long exchange I attempt to nail down his position: 

  • Does he think micronutrient deficiencies don’t exist? No, he agrees they do.
  • Does he think that they can’t cause health issues? No, he agrees they do.
  • Does he think this just doesn’t happen very often, or is always caught? No, if anything he thinks the Faunalytics study underestimates veg*n attrition due to medical issues.

So what exactly does he disagree with me on? 

He also had a very interesting exchange with another commenter. That thread got quite long, and fuzziness by its nature doesn’t lend itself to excerpts, so you should read the whole thing, but I will share highlights. 

Before the screenshot: Wilkox acknowledges that B12 and iron deficiencies can cause fatigue, and veganism can cause these deficiencies, but it’s fine because if people get tired they can go to a doctor.

That reply doesn’t contain any false statements, and would be perfectly reasonable if we were talking about ER triage protocols. But it’s irrelevant when the conversation is “can we count on veganism-induced fatigue being caught?”. (The answer is no, and only some of the reasons have been brought up here)

You can see how the rest of this conversation worked out in the Sound and Fury section.

A much, much milder example can be seen in What vegan food resources have you found useful?. This was my attempt to create something uncontroversially useful, and I’d call it a modest success. The post had 20-something karma on LW and EAForum, and there were several useful-looking resources shared on EAForum. But it also got the following comment on LW: 

I picked this example because it only takes a little bit of thought to see the jujitsu, so little it barely counts. He disagreed with my implicit claim that… well okay here’s the problem. I’m still not quite sure where he disagrees. Does he think everyone automatically eats well as a vegan? That no one will benefit from resources like veganhealth.org? That no one will benefit from a cheat sheet for vegan party spreads? That there is no one for whom veganism is challenging? He can’t mean that last one because he acknowledges exceptions in his later comment, but only because I pushed back. Maybe he thinks that the only vegans who don’t follow his steps are those with medical issues, and that no-processed-food diets are too unpopular to consider? 

I don’t think this was deliberately anti-truthseeking, because if it was he would have stopped at “nothing special” instead of immediately outlining the special things his partner does. That was fairly epistemically cooperative. But it is still an example of strong claims made only implicitly. 

Counter-Examples

I think this comment makes a claim (“vegans moving to naive omnivorism will hurt themselves”) clearly, and backs it up with a lot of details.

The tone is kind of obnoxious and he’s arguing with something I never claimed, but his beliefs are quite clear. I can immediately understand which beliefs of his I agree with (“vegans moving to naive omnivorism will hurt themselves” and “that would be bad”) and make good guesses at implicit claims I disagree with (“and therefore we should let people hurt themselves with naive veganism”? “I [Elizabeth] wouldn’t treat naive mass conversion to omnivorism seriously as a problem”?). That’s enough to count as epistemically cooperative.

Sound and fury, signifying no substantial disagreement 

Sometimes someone comments with an intense, strongly worded, perhaps actively hostile, disagreement. After a laborious back and forth, the problem dissolves: they acknowledge I never held the position they were arguing with, or they don’t actually disagree with my specific claims. 

Originally I felt happy about these, because “mostly agreeing” is an unusually positive outcome for that opening. But these discussions are grueling. It is hard to express kindness and curiosity towards someone yelling at you for a position you explicitly disclaimed. Any one of these stories would be a success but en masse they amount to a huge tax on saying anything about veganism, which is already quite labor intensive.

The discussions could still be worth it if they changed the arguer’s mind, or at least how they approached the next argument. But I don’t get the sense that’s what happens. Neither of us has changed our minds about anything, and I think they’re just as likely to start a similar fight the next week.

I do feel like vegan advocates are entitled to a certain amount of defensiveness. They encounter large amounts of concern trolling and outright hostility, and it makes sense that that colors their interactions. But that allowance covers one comment, maybe two, not three to eight (Wilkox, depending on which ones you count). 

For example, I’ve already quoted Wilkox’s very fuzzy comment (reminder: this was the top-voted comment on that post on LW). That was followed by a 13+ comment exchange in which we eventually found he had little disagreement with any of my claims about vegan nutrition, only with the importance of those facts. There really isn’t a way for me to screenshot this: the length and lack of specifics is the point.

You could say that the confusion stemmed from poor writing on my part, but:

I really appreciate the meta-honesty here, but since the exchange appears to have eaten hours of both of our time just to dig ourselves out of a hole, I can’t get that excited about it. 

Counter-Examples

I want to explicitly note that Sound and Fury isn’t the same as asking questions or not understanding a post. E.g. here Ben West identifies a confusion, asks me, and accepts both my answer and an explanation of why answering is difficult. 

Or in that same post, someone asked me to define nutritionally dense. It took a bit for me to answer and we still disagreed afterward, but it was a great question and the exchange felt highly truthseeking.  

Bad sources, badly handled 

Citations should be something of a bet: if the citation (the source itself or your summary of it) is high quality and supports your point, that should move people closer to your views. But if others identify serious, relevant flaws, that should move both you and your audience closer to their point of view. Of course our beliefs are based on a lot of sources and it’s not feasible or desirable to really dig into all of them for every disagreement, so the bet may be very small. But if you’re not willing to defend a citation, you shouldn’t make it.

What I see in EA vegan advocacy is deeply terrible citations, thrown out casually, and abandoned when inconvenient. I’ve made something of a name for myself checking citations and otherwise investigating factual claims from works of nonfiction. Of everything I’ve investigated, I think citations from EA vegan advocacy have the worst effort:truth ratio. Not the most outright falsehoods (I’ve read some pretty woo stuff, but that can be dismissed quickly); citations in vegan advocacy are often revealed to be terrible only after great effort.

And having put in that effort, my reward is usually either crickets or a new terrible citation. Sometimes we will eventually drill down to “I just believe it”, which is honestly fine. We don’t live our lives to the standard of academic papers. But if that’s your reason, you need to state it from the beginning.

For example, the top-voted comment on the Change My Mind post on EAF, from Rockwell (head of EA NYC), contains five links. Only links 1 and 4 are problems, but I’ll describe them all in order to avoid confusion.

Of the five links: 

  1. Wilkox’s comment on the LW version of the post, where he eventually agrees that veganism requires testing and supplementation for many people (although most of that exchange hadn’t happened at the time of linking).
  2. cites my past work, if anything too generously.
  3. an estimation of nutrient deficiency in the US. I don’t love that this uses dietary intake as opposed to testing values (people’s needs vary so wildly), but at least it used EAR and not RDA. I’d want more from a post but for a comment this is fine.
  4. an absolutely atrocious article, which the comment further misrepresents. We don’t have time to get into all the flaws in that article, so I’ve put my first hour of criticisms in the appendix. What really gets me here is that I would have agreed the standard American diet sucks without asking for a source. I thought I had conceded that point preemptively, albeit without naming the Standard American Diet explicitly.

    And if she did feel the need to go the extra mile on rigor for this comment, it’s really not that hard to find decent-looking research on the harms of the Standard Shitty American Diet. I found this paper on heart disease in 30 seconds, and most of that time was spent waiting for Elicit to load. I don’t know if it’s actually good, but it is not so obviously farcical as the cited paper.
  5. The fifth link goes to a description of the Standard American Diet. 

Rockwell did not respond to my initial reply (that fixing vegan issues is easier than fixing SSAD), or to my asking whether that paper on the risks of meat eating was her favorite.

A much more time-consuming version of this happened with the Adventist Health Study-2. Several people cited AHS-2 as a pseudo-RCT that supported veganism (EDIT 2023-10-03: as superior to low-meat omnivorism): one commenter on LessWrong and two on EAForum (one of whom had previously co-authored a blog post on the study and offered to answer questions). As I discussed here, that study is one of the best we have on nutrition and I’m very glad people brought it to my attention. But calling it a pseudo-RCT that supports veganism is deeply misleading. It is nowhere near randomized, and doesn’t cleanly support veganism even if you pretend it is.

(EDIT 2023-10-03: To be clear, the noise in the study overwhelms most differences in outcomes, even ignoring the self-sorting. My complaint is that the study was presented as strong evidence in one direction, when it’s both very weak and, if you treat it as strong, points in a different direction than reported. One commenter has said she only meant it as evidence that a vegan diet can work for some people, which I agree with, as stated in the post she was responding to. She disagrees with other parts of my summary as well; you can read more here.)

It’s been three months, and none of the recommenders have responded to my analysis of the main AHS-2 paper, despite repeated requests. 

But finding that a paper is of lower quality and supports an entirely different conclusion is still not the worst-case scenario. The worst outcome is citation whack-a-mole.

A good example of this is from the post “Getting Cats Vegan is Possible and Imperative”, by Karthik Sekar. Karthik is a vegan author and data scientist at a plant-based meat company. 

[Note that I didn’t zero out my votes on this post’s comments, because it seemed less important for posts I didn’t write]

Karthik cites a lot of sources in that post. I picked what looked like his strongest source and investigated. It was terrible. It was a review article, so checking it required reading multiple studies. Of the cited studies, only 4 (with a total of 39 combined subjects) used blood tests rather than owner reports, and more than half of those subjects were on vegetarian diets, not vegan (even though the table header says vegan). The only RCT didn’t include carnivorous diets.

Karthik agrees that the paper (which he cited) does not make its case “strong nor clear”, and cites another one (which AFAICT was not in the original post).

I dismiss the new citation on the basis of “motivated [study] population and minimal reporting”. 

He retreats to “[My] argument isn’t solely based on the survey data. It’s supported by fundamentals of biochemistry, metabolism, and digestion too […] Mammals such as cats will digest food matter into constituent molecules. Those molecules are chemically converted to other molecules–collectively, metabolism–, and energy and biomass (muscles, bones) are built from those precursors. For cats to truly be obligate carnivores, there would have to be something exceptional about meat: (A) There would have to be essential molecules–nutrients–that cannot be sourced anywhere else OR (B) the meat would have to be digestible in a way that’s not possible with plant matter. […So any plant-based food that passes AAFCO guidelines is nutritionally complete for cats. Ami does, for example.]

I point out that AAFCO doesn’t think meeting their guidelines is necessarily sufficient. I expected him to dismiss this as corporate ass-covering, and there’s a good chance he’d be right. But he didn’t.

Finally, he gets to his real position:

Which would have been a fine aspirational statement, but then why include so many papers he wasn’t willing to stand behind? 

On that same post someone else says that they think my concerns are a big deal, and Karthik probably can’t convince them without convincing me. Karthik responds:

So he’s conceded that his study didn’t show what he claimed. And he’s not really defending the AAFCO standards. But he’s really sure this will work anyway? And I’m the one who won’t update their beliefs. 

In a different comment the same someone else notes a weird incongruity in the paper. Karthik doesn’t respond.

This is the real risk of bad sources: hours of deep intellectual work to discover that his argument boils down to a theoretical claim the author could have stated at the beginning. “I believe vegan cat food meets these numbers, and meeting these numbers is sufficient” honestly isn’t a terrible argument, and I’d have respected it if it had been stated plainly, especially since he explicitly calls for RCTs. Or I would, if he didn’t view those RCTs primarily as a means to prove what he already knows.

Counter-Examples

This commenter starts out pretty similarly to the others, with a very limited paper implied to have very big implications. But when I push back on the serious limitations of the study, he owns the issues and says he only ever meant the paper to support a more modest claim (while still believing the big claim he did make?). 

Taxing Facebook

When I joined EA Facebook in 2014, it was absolutely hopping. Every week I met new people and had great discussions with them where we both walked away smarter. I’m unclear when this trailed off because I was drifting away from EA at the same time, but let’s say the golden age was definitively over by 2018. Facebook was where I first noticed the pattern with EA vegan advocacy. 

Back in 2014 or 2015, Seattle EA watched some horrifying factory farming documentaries, and we were each considering how we should change our diets in light of that new information. We tried to continue the discussion on Facebook, only to have Jacy Reese Anthis (who was not a member of the local group and AFAIK had never been to Seattle) repeatedly insist that the only acceptable compromise was vegetarianism, that humane meat didn’t exist, and that he hadn’t heard of health conditions benefiting from animal products so my doctor was wrong (or maybe I made it up?).

I wish I could share screenshots of this, but the comments are gone (I think because the account has been deleted). I’ve included shots of the post and some of my comments (one of which refers to Jacy obstructing an earlier conversation, which I’d forgotten about). A third commenter has been cropped out, but I promise it doesn’t change the context.

(his answer was no, and that either I or my doctor were wrong because Jacy had never heard of any medical issue requiring consumption of animal products)

That conversation went okay. Seattle EA discussed suffering math on different vertebrates, someone brought up eating bugs, Brian Tomasik argued against eating bugs. It was everything an EA conversation should be.

But it never happened again.

Because this kind of thing happened every time animal products, diet, and health came up anywhere on EA Facebook. The commenters weren’t always as aggressive as Jacy, but they added a tremendous amount of cumulative friction. An omnivore would ask if lacto-vegetarianism worked, and the discussion would get derailed by animal advocates insisting you didn’t need milk.  Conversations about feeling hungry at EAG inevitably got a bunch of commenters saying they were fine, as if that was a rebuttal. 

Jeff Kaufman mirrors his FB posts onto his actual blog, which makes me feel more okay linking to it. In this post he makes a pretty clear point: that veganism can be any one of cheaper, healthier, or tastier, but not all at once. He gets a lot of arguments. One person argues that no one thinks that; they just care about animals more.

One vegetarian says they’d like to go vegan but just can’t beat eggs for their mix of convenience, price, macronutrients, and micronutrients. She gets a lot of suggestions for substitutes, all of which flunk on at least one criterion. Jacy Reese Anthis has a deleted comment, which, judging from the reply, asserted the existence of a substitute without naming one.

After a year or two of this, people just stopped talking about anything except the vegan party line on public FB. We’d bitch to each other in private, but that was it. And that’s why, when a new generation of people joined EA and were exposed to the moral argument for veganism, there was no discussion of the practicalities visible to them. 

[TBF they probably wouldn’t have seen the conversations on FB anyway; I’m told that’s an old-person thing now. But the silence has extended itself]

Ignoring known falsehoods until they’re a PR problem

This is old news, but: for many years ACE said leafleting was great. Lots of people (including me and some friends, in 2015) criticized their numbers. This did not seem to have much effect; they’d agree their eval was imperfect and that they intended to put up a disclaimer, but it never happened.

In late 2016 a scathing anti-animal-EA piece was published on Medium, making many incendiary accusations, including that the leafleting numbers were made up. I wouldn’t call that post very epistemically virtuous; it was clearly hoping to inflame more than inform. But within a few weeks (months?), ACE put up a disavowal of the leafleting numbers.

I unfortunately can’t look up the original correction or when they put it up; archive.org behaves very weirdly around animalcharityevaluators.org. As I remember it made the page less obviously false, but the disavowal was tepid and not a real fix. Here’s the 2022 version:

There are two options here: ACE was right about leafleting, and caved to public pressure rather than defend their beliefs. Or ACE was wrong about leafleting (and knew they were wrong, because they conceded in private when challenged) but continued to publicly endorse it.

Why I Care

I’ve thought vegan advocates were advocating falsehoods and stifling truthseeking for years. I never bothered to write it up, and generally avoided public discussion, because that sounded like a lot of work for absolutely no benefit. Obviously I wasn’t going to convince the advocates of anything, because finding the truth wasn’t their goal, and everyone else knew it so what did it matter? I was annoyed at them on principle for being wrong and controlling public discussion with unfair means, but there are so many wrong people in the world and I had a lot on my plate. 

I should have cared more about the principle.

I’ve talked before about the young Effective Altruists who converted to veganism with no thought for nutrition, some of whom suffered for it. They trusted effective altruism to have properly screened arguments and tell them what they needed to know. After my posts went up I started getting emails from older EAs who weren’t getting the proper testing either; I didn’t know because I didn’t talk to them in private, and we couldn’t discuss it in public. 

Which is the default story of not fighting for truth. You think the consequences are minimal, but you can’t know because the entire problem is that information is being suppressed. 

What do EA vegan advocates need to do?

  1. Acknowledge that nutrition is a multidimensional problem, that veganism is a constraint, and that adding constraints usually makes problems harder, especially if you’re already under several.
  2. Take responsibility for the nutritional education of vegans you create. This is not just an obligation, it’s an opportunity to improve the lives of people who are on your side. If you genuinely believe veganism can be nutritionally whole, then every person doing it poorly is suffering for your shared cause for no reason.
    1. You don’t even have to single out veganism. For purposes of this point I’ll accept “All diet switches have transition costs and veganism is no different, and the long term benefits more than compensate”. I don’t think your certainty is merited, and I’ll respect you more if you express uncertainty, but I understand that some situations require short messaging and am willing to allow this compression.
  3. Be epistemically cooperative, at least within EA spaces. I realize this is a big ask because in the larger world people are often epistemically uncooperative towards you. But obfuscation is a symmetric weapon and anger is not a reason to believe someone. Let’s deescalate this arms race and have both sides be more truthseeking.

    What does epistemic cooperation mean?
    1. Epistemic legibility. Make your claims and cruxes clear. E.g. “I don’t believe iron deficiency is a problem because everyone knows to take supplements and they always work” instead of “Why are you bothering about iron supplements?”
    2. Respond to the arguments people actually make, or say why you’re not. Don’t project arguments from one context onto someone else. I realize this one is a big ask, and you have my blessing to go meta and ask for work from the other party to make this viable, as long as it’s done explicitly.
    3. Stop categorically dismissing omnivores’ self-reports. I’m sure many people do overestimate the difficulties of veganism, but that doesn’t mean it’s easy or even possible for everyone.
      1. A scientific study, no matter how good, does not override a specific person telling you they felt hungry at a specific time. 
    4. If someone makes a good argument or disproves your source, update accordingly. 
  4. Police your own. If someone makes a false claim or bad citation while advocating veganism, point it out. If someone dismisses a detailed self-report of a failed attempt at veganism, push back. 

All Effective Altruists need to stand up for our epistemic commons

Effective Altruism is supposed to mean using evidence and reason to do the most good. A necessary component of that is accurate evidence. All the spreadsheets and moral math in the world mean nothing if the input is corrupted. There can be no consequentialist argument for lying to yourself or allies¹ because without truth you can’t make accurate utility calculations². Garbage in, garbage out.

One of EA’s biggest assets is an environment that rewards truthseeking more than average. Without uniquely strong truthseeking, EA is just another movement of people who are sure they’re right. But high-truthseeking environments are fragile, exploiting them is rewarding, and the costs of violating them are distributed and hard to measure. The only way EA’s environment has a chance of persisting is if the community makes preserving it a priority. Even when it’s hard, even when it makes people unhappy, and even when the short-term rewards of defection are high.

How do we do that? I wish I had a good answer. The problem is complicated and hard to reason about, and I don’t think we understand it enough to fix it. Thus far I’ve focused on vegan advocacy as a case study in destruction of the epistemic commons because its operations are relatively unsophisticated and easy to understand. Next post I’ll be giving more examples from across EA, but those will still have a bias towards legibility and visibility. The real challenge is creating an epistemic immune system that can fight threats we can’t even detect yet. 


Acknowledgments

Thanks to the many people I’ve discussed this with over the past few months. 

Thanks to Patrick LaVictoire and Aric Floyd for beta reading this post.

Thanks to Lightspeed Grants for funding this work. Note: a previous post referred to my work on nutrition and epistemics as unpaid after a certain point. That was true at the time and I had no reason to believe it wouldn’t stay true, but Lightspeed launched a week after that post and was an unusually good fit so I applied. I haven’t received a check yet but they have committed to the grant so I think it’s fair to count this as paid. 

Appendix

Terrible anti-meat article

  • The body of the paper is an argument between two people, but the abstract only includes the anti-animal-product side.
  • The “saturated fat” and “cholesterol” sections take as a given that any amount of these is bad, without quantifying or saying why. 
  • The “heme iron” section does explain why excess iron is bad, but ignores the risks of too little. Maybe he also forgot women exist? 
  • The lactose section does cite two papers, one of which does not support his claim, and the other of which is focused on mice who received transplants. It probably has a bunch of problems but it was too much work to check, and even if it doesn’t, it’s about a niche group of mice. 
  • The next section claims milk contains estrogen and thus raises circulating estrogen, which increases cancer risk.
    • It cites one paper supporting a link with breast cancer. That paper found a correlation with high fat but not low fat dairy, and the correlation was not statistically significant. 
    • It cites another paper saying dairy impairs sperm quality. This study was done at a fertility clinic, so it will miss men with healthy sperm counts and is thus worthless. Ignoring that, it found a correlation of dairy fat with low sperm count, but low-fat dairy was associated with higher sperm count. Again, among men with impaired fertility.
  • The “feces” section says that raw meat contains harmful bacteria (true), but nothing about how that translates to the risks of cooked meat.

Those are the first five subsections. The next set maybe looks better sourced, but I can’t imagine them being good enough to redeem the paper. I am less convinced of the link between excess meat and health issues than I was before I read it, because surely if the claim was easy to prove the paper would have better supporting evidence, or the EA Forum commenter would have picked a better source.

[Note: I didn’t bother reading the pro-meat section. It may also be terrible, but this does not affect my position.] 

  1. “Are you saying I can’t lie to Nazis about the contents of my attic?” No more so than you’re banned from murdering them or slashing their tires. Like, you should probably think hard about how it fits into your strategy, but I assumed “yourself or allies” excluded Nazis for everyone reading this.

    “Doesn’t that make the definition of enemies extremely morally load bearing?” It reflects that fact, yes. 

    “So vegan advocates can morally lie as long as it’s to people they consider enemies?”  I think this is, at a minimum, defensible and morally consistent. In some cases I think it’s admirable, such as lying to get access to a slaughterhouse in order to take horrifying videos. It’s a declaration of war, but I assume vegan advocates are proud to declare the meat industry their enemy. ↩︎
  2. I’ll allow that it’s conceptually possible to make deontological or virtue ethics arguments for lying to yourself or allies, but it’s difficult, and the arguments are narrow and/or wrong. Accurate beliefs turn out to be critical to getting good outcomes in all kinds of situations.  ↩︎

Edits

You will notice a few edits in this post, which are marked with the edit date. The original text is struck through.

When I initially published this post on 2023-09-28, several images failed to copy over from the google doc to the shitty WordPress editor. These were fixed within a few hours.

I tried to link to sources for every screenshot (except the Facebook ones). On 2023-10-05 I realized that a lot of the links were missing (but not all, which is weird) and manually added them back in. In the process I found two screenshots that never had links, even in the google doc, and fixed those. Halfway through this process the already shitty editor flat out refused to add links to any more images. This post is apparently already too big for WordPress to handle, so every attempted action took at least 60 seconds, and I was constantly afraid I was going to make things worse, so for some images the link is in the surrounding text. 

If anyone knows of a blogging website that will gracefully accept cut and paste from google docs, please let me know. That is literally all it takes for an editor to be a success in my book, and last time I checked I could not find a single site that managed it.

Luck based medicine: inositol

Summary: Do you have weird digestive symptoms and anxiety or depression? Consider trying inositol (affiliate link), especially if the symptoms started after antibiotics.

Epistemic status: I did some research on this 10 years ago and didn’t write it down. In the last nine months I recommended it to a few people who (probably) really benefited from it. My track record on this kind of suggestion is mixed; the Apollo Neuro was mostly a dud but iron testing caught a lot of issues. 

Background

Inositol is a form of sugar. It’s used in messaging between cells in your body, which means it could in theory do basically anything. In practice, supplementation has been found maybe-useful in many metabolic and psychiatric issues, although far from conclusively. 

There are a few sources of inositol: it’s in some foods, especially fruit. Your body naturally manufactures some. And some gut bacteria produce it. If your gut bacteria are disrupted, you may experience a sudden drop in available inositol, which can lead to a variety of symptoms including anxiety and depression.

Anecdata

Inositol deficiency (probably) hit me hard 9 years ago, when I went on a multi-month course of some very hardcore antibiotics to clear out a suspected SIBO infection.

Some background: My resistance to Seasonal Affective Disorder has been thoroughly tested and found triumphant.  At the time I took those antibiotics I lived in Seattle, which gets 70 sunny days per year, concentrated in the summer. This was a step up from my hometown, which got 60 sunny days per year. I briefly experimented with sunshine in college, where I saw 155 sunny days per year, a full 75% of the US average. The overcast skies never bothered me, and I actively liked Seattle’s rain. So when I say I do not get Seasonal Affective Disorder or light-sensitive depression, I want you to understand my full meaning. Darkness has no power over me. 

That is, until I took those antibiotics. I was fine during the day, but as soon as the sun set (which was ~5PM; it was Seattle in January) I experienced crushing despair. I don’t know if it was the worst depression of my life, or just the most obvious because it went from 0 to doom in 15 minutes.

Then I started taking inositol and the despair went away, even though I was on the antibiotics for at least another month. After the course finished I took some probiotics, weaned off the inositol, and was fine. 

About six months ago, my friend David MacIver mentioned a combination of mood and digestive issues, and I suggested inositol. It worked wonders.

He’s no longer quite so deliriously happy as described in the tweet, but he still describes it as “everything feels easier”, and every time he lowers his dose things get worse. So it seems likely this is a real and important effect.

He’s also tried probiotics. It took several false starts, but after switching brands and taking them very consistently he was able to lower his dosage of inositol, and the effects of going off it are less dramatic (although still present).

He has a fairly large twitter following, so when he tweeted about inositol he inspired a fair number of people to try it. He estimates maybe 50 people tried it, and 2-5 reported big benefits. So ballpark 4-10% response rate (of people who read the tweet and thought it looked applicable). And most people respond noticeably to the first dose (not me, I think it took me a few days, but most people), so it’s very easy to test. 

A second friend also got very good results, although they have more problems and haven’t tested themselves as rigorously as David, so causality is more questionable. 

Fun fact: because inositol is a cheap, white, water-soluble powder, it’s used as a cutting agent for multiple street drugs. It’s also the go-to substitute for cocaine in movies. So if cocaine, heroin, or meth have weirdly positive effects on you, it might be worth checking out.

Practicalities

Anything with a real effect can hurt you. Even that totally safe magic bracelet I recommended maybe gave someone nightmares. But as these things go, inositol is on the safer end to experiment with. The fact that it’s both a natural food product and produced endogenously gives you good odds, especially compared to cocaine. OTOH the fact that it has a lot of sources makes it harder to dose – after a few months David found that his initial dose was too high and induced brain fog, and he needed to lower it. 

I have a vague impression that quality isn’t an issue with inositol the way it is with some vitamins, so I just linked to the cheapest ones. 

In terms of dose: the standard dosage is 0.5-2g/day. David started at the high end of that but is now down to 0.5-1g. I can’t remember what I did. If you try it, start small and experiment.

If you do try it, I’d love if you filled out this form letting me know how it went.

Thanks to David MacIver for sharing his data.

Luck based medicine: angry eldritch sugar gods edition

Introduction

Epistemic status: everything is stupid. I’m pretty sure I’m directionally right but this post is in large part correcting previous statements of mine, and there’s no reason to believe this is the last correction. Even if I am right, metabolism is highly individual and who knows how much of this applies to anyone else.

This is going to get really in the weeds, so let me give you some highlights:

  • 1-2 pounds of watermelon/day kills my desire for processed desserts, but it takes several weeks to kick in.
    • It is probably a microbiome thing. I have no idea if this works for other people. If you test it let me know.
    • I still eat a fair amount of sugar, including processed sugar in savory food. The effect is only obvious and total with desserts.
  • This leads to weight loss, although maybe that also requires potatoes? Or maybe the potatoes are a red herring and it just takes a while to kick in?
  • Boswellia is probably necessary for this to work in me, but that’s probably correcting an uncommon underlying defect so this is unlikely to apply widely. 
  • Stevia-sweetened soda creates desire for sugar in me, even though it doesn’t affect my blood sugar. This overrides the watermelon effect, even when I’m careful to only drink the soda with food.
  • My protein shakes + bars also have zero-calorie sweeteners and the watermelon effect survives them. Unclear if it’s about the kind of sweetener, amount, or something else.
  • Covid also makes me crave sugar and this definitely has a physiological basis.
  • Metabolism is a terrifying eldritch god we can barely hope to appease, much less understand. 

Why do I believe these things? *Deep breath* this is going to take a while. I’ve separated sections by conclusion for comprehensibility, but the discovery was messy and interconnected and I couldn’t abstract that out. 

Boswellia

Last October I told my story of luck based medicine, in which a single pill, taken almost at random, radically fixed lifelong digestion issues. Now’s as good a time as any to give an update on that. 

The two biggest effects last year were doubling my protein consumption and cratering sugar consumption. I’m still confident Boswellia is necessary for protein digestion, because if I go off it food slowly starts to feel gross and I become unable to digest protein. I’m confident this isn’t a placebo because I didn’t know Boswellia was the cause at the time, so going off it shouldn’t have triggered a change.

As I’ll discuss in a later section, Boswellia is not sufficient to cause a decrease in sugar consumption; that primarily comes from consuming heroic amounts of watermelon. The Boswellia might be necessary to enable that much watermelon consumption, by increasing my ability to digest fiber. I haven’t had to go off Boswellia since I figured out how it helps me, so I haven’t tested its interaction with watermelon.

How does Boswellia affect micronutrient digestion? I have always scored poorly on micronutrient tests. I had a baseline test from June 2022 (shortly after starting Boswellia + watermelon), and saw a huge improvement in October testing (my previous tests are alas too old to be accessible). Unfortunately this did not hold up – my March and June 2023 tests were almost as bad as June 2022. My leading hypotheses are “the tests suck” and “the November tests are the only ones taken after a really long no-processed-dessert period, and sugar is the sin chemical after all”. I hate both of these options. 

If we use fuzzier standards like energy level, illness, and injury healing, I’m obviously doing much better. Causality is always hard when tracking effects that accumulate over a year. In that time there’s been at least one other major intervention that contributed to energy levels and mood, and who knows what minor stuff happened without me noticing. But I’d be shocked if improved nutrition wasn’t a major contributor to this. 

Illness-wise: I caught covid for the second time in late November 2022, and it was a shorter illness with easier recovery than in April (before any of these interventions started). But that could be explained by higher antibody levels alone. I haven’t gotten sick since then (9 months), which would have been an amazing run for me pre-2022.

My protein consumption (previously 30-40g/day) spiked after I started Boswellia in May 2022 (~100g/day) and then slowly came down. Before November covid I was at ~70g/day. My explanation at the time was that my body had some repairs it had been putting off until the protein was available, and once those were done it didn’t need so much. It spiked again after November covid and only partially came down; I’m still averaging ~100g/day. I’m not sure if I still need that for some reason, or if I’m just craving more calories and satisfying that partially with protein.

Watermelon

The cure to all my dieting woes?

In spring 2022 I started eating 1-2 pounds of watermelon per day. This wasn’t a goal-oriented diet or anything; I just really like watermelon and finally realized the only limitations were in my mind. I started eating watermelon when it came into season, before I got covid in April 2022, but didn’t start the serious habit until the latter half of my very long covid case. As previously discussed, that May my doctor prescribed me Boswellia in part to aid covid recovery, and a bunch of good things followed (including a disinterest in processed sugar), all of which I attributed to the Boswellia.

The loss of interest in sugar was profound. It wasn’t just that I gained the ability to resist temptation; I mostly didn’t enjoy sugar when I had it. I went from a bad stress eating habit to just… not thinking of sugar as an option when stressed. 

My interpretation at the time was “sugar cravings were a pica for real nutrition, as soon as I could digest enough food they naturally went away, fuck you doctors”. It never occurred to me that the watermelon might be involved, because in my mind it was categorized as “indulgence” not “intervention”. Even if it had occurred to me to test it, I know now the effect takes at least six weeks to kick in, and I wouldn’t have waited that long. Lucky for science, reality was going to force my hand.

Around October I started wanting sugar again, although not as much as before. I put this down to stress, but that never really made sense: August saw me break my wrist and have a very stressful interpersonal issue without any return in cravings. Then in November I got covid again, and it again created intense sugar cravings, which improved some but never really went away. I thought maybe covid had permanently broken my metabolism. I played with the Boswellia dosing for a while but it didn’t seem to make a difference. Plus my protein digestion stayed the same the whole time, so it seemed unlikely Boswellia had just stopped working. 

In February 2023 I talked about this with David MacIver of Overthinking Everything, and noticed that the sugar cravings had returned a few weeks after watermelon had gone out of season. I created a graph from my food diary, and it became really obvious watermelon was the culprit.

The effect is even stronger than it looks, because watermelon has sucrose in it. Over summer my sucrose intake is 90% from fruit; over winter it’s dominated by junk. 

I figured this out in February 2023. Because I live in a port city in a miracle world, that was late enough in the season to get mediocre watermelon. It took a while to work, but that was true the first time as well (somewhere between 6 and 12 weeks, depending on how you count covid time). And you can indeed see where this started on the graph. But five months later it still wasn’t replicating the previous year’s success. Sugar cravings were weaker but still present, and certainly came back when I was stressed. The weight loss was slow and stuttering where it had previously been easy. That brings me to my next point.

Stevia-sweetened Soda

In January 2023 my doctor gave me a continuous glucose monitor to play around with. The thought process here was…exploratory.  Boswellia is known to lower blood sugar, and helped with (I thought) sugar cravings and many other issues. Covid causes sugar cravings in me, is known to hurt diabetics more, and causes lots of other problems, so maybe that points to blood sugar issues in me? Also I was kind of hoping the immediate feedback would nudge me to eat less sugar.

One of the foods I was most excited to test was 0-calorie soda. I’d always avoided diet soda on the belief that no-calorie sweeteners spiked your insulin and this led to sugar cravings that left you worse off. But when I tried stevia-sweetened Zevia with the CGM my blood glucose levels didn’t move at all, and I didn’t feel any additional drive to eat sugar (compared to my then-high baseline desire – remember this was while I was off watermelon and dessert was amazing).

I was extremely excited about this discovery. I’d given up cola about 18 months before and missed it dearly. Now I had chemical and mathematical proof that no-cal soda was fine. When I had the cola again it made me so happy I was amazed I’d ever managed to give it up before. I began a 2-3 cans/day caffeine-free Zevia cola habit. 

This would have been 4-6 weeks before I restarted on watermelon. When the watermelon failed to repeat its miracle I suspected Zevia fairly quickly, but I really, really didn’t want it to be true so I tested some other things first. Finally I had ruled out too many other things, and had a particularly clarifying experience of sudden, strong cravings with nothing else to blame.  I gave up Zevia, and immediately lost all desire for sugar. In retrospect the low-sugar-desire days of the previous months were probably not random, but days I happened to not drink Zevia. 

Unfortunately this doesn’t obviously show up on the fructose vs sucrose graph, and I don’t care enough to export the data and do real statistics. It also doesn’t show up cleanly on my weight graph, because weight is noisy and the effect operates at a substantial delay. But I’ve given it 7 weeks, and I’m definitely losing weight. The current streak is faster than anything I had last year, although it’s too soon to say if that will hold up.

Zevia is sweetened with stevia, which seems like it should make stevia an enemy. But my protein shake is also sweetened with stevia (plus something else), and I had at least a bottle/day during the weight loss last year (this is one excuse I gave for testing other potential culprits before Zevia). Maybe the issue is the amount in Zevia, maybe drinking stevia with a lot of protein is better than drinking it separately, even when it’s with a meal. Maybe Mercury was in retrograde. I give up.

Potatoes and Weight Loss

The no-sugar effect had kicked in in earnest by late May 2022 (I didn’t start a food diary until late June that year, but confirmed the May date by looking at my grocery orders). In July 2022 I went on the minimum viable potato diet, in which I ate a handful of baby potatoes every day and demanded nothing else of myself. Within a few days my caloric intake dropped dramatically, and I started to lose weight. This continued until late fall, when watermelon went out of season.

The post-potato weight loss was weird, and seemed much too large to be produced by a handful of potatoes. But the effect was so strong, and started so quickly after the potatoes, that it seemed impossible for it to be a coincidence.

I mentioned before that the watermelon works on a delay. So maybe I just happened to start eating potatoes the day the watermelon effect kicked in (I don’t have exact timing for this – the first run is complicated by covid and the second by stevia soda and work stress). Maybe I need watermelon and potatoes for some stupid reason. Maybe that 100g of produce was a tipping point, but 100g of anything would have worked. Or maybe that’s just when the summer heat kicked in, since I’ve always eaten less when it was hot out. Maybe it’s not a coincidence the current weight loss kicked in shortly after finishing a stressful, air conditioned gig… 

Questions

What is happening with sugar?

The stevia effect appears to be same-day, often kicking in within a few hours and wearing off by the next day, so I assume that’s a metabolic issue. But not one reflected in blood sugar, according to the CGM. Unless the effect kicks in slowly (while also stopping the day I stop drinking it). 

The no-sugar effect of watermelon takes 4-8 weeks to kick in, and another 4+ to cause weight loss. But between those milestones… I don’t want to brag, but it’s relevant so I think I have to. ~10 weeks after starting 1-2lbs of watermelon/day, my feces become amazing. So amazing they look fake, like they were crafted for an ad for a fiber pill. Enormous without being at all painful because of their perfect consistency. Other bowel movements resent mine for setting unrealistic beauty standards, but they can’t help it, that’s just how they naturally look.

Between the delay and the gold-ribbon poops, I’m pretty sure the watermelon effect works through microbiome changes. Maybe fiber, maybe feeding different bacteria, maybe changing the metabolism of the existing bacteria?

Are you one of those idiots who thinks processed sugar is different than natural sugar?

I regret that I have to answer this with “maybe”. Processed is not well defined here, but it sure seems like a Calorie from watermelon hits me differently than a Calorie from marshmallows, even if they fuel the same amount of metabolism. My guesses for what actually matters are fiber, fructose vs sucrose, water content, and “fuck if I know”. If anyone tries a sugar water + fiber pills diet, please do let me know.

I try to be careful to say I haven’t given up processed sugar, just processed desserts. Lots of savory dishes have a fair amount of sugar (not just carbs) in them. My guess is that if easy, sugar-free prepared food were available I wouldn’t miss the sugar, but in the real world it is too much work to cut it out and I seem to be doing okay as-is.

Does it have to be watermelon?

If I’m right about the fructose, fiber, and/or water, no, but I haven’t tested it. Watermelon does have a pretty favorable mix of those (grapes have way more sugar per gram), but its primary virtue is that it is obviously the best fruit and I’d struggle to eat that much of anything else on purpose, much less do so accidentally for months straight. In fact I did try to replace watermelon that winter, but couldn’t find anything I’d eat that much of.  

However, someone on twitter pointed out that watermelon is unusually high in citrulline (which is used to produce arginine, which is metabolically important). There’s no way I could detect this trend against my overall increase in protein uptake, so I can’t do more than pass this on. If California fails to deliver truly year-round watermelon my plan is carrots + citrulline pills, so maybe we’ll find out then.

Why does Boswellia help protein digestion?

I don’t know. Digestion isn’t even included on the list of common effects of Boswellia. All any doctor will tell me is “something something inflammation”, as if I haven’t taken dozens of things with equally strong claims to reducing inflammation.

Last year a reader connected me with a friend who had some very interesting ideas about mast cell issues, and I swear I’m going to look into those any day now… 

Conclusion

Everything is stupid, nothing makes sense. If I hadn’t lucked into a situation where I was using Boswellia, eating stupid amounts of watermelon, and consuming no Zevia, I might never have found this no-sugar-desire state and would be at least 30 pounds heavier.

PS

If you find yourself thinking “this is great, I’m so sad Elizabeth only publishes a few extremely long posts per year about metabolism that prove nothing”, I have good news! The Experimental Fat Loss substack features multiple posts per month in exactly that genre. It mostly follows the author’s fairly rigorous dietary experiments, but lately he’s been taking case studies from other people as well.