Grant Making and Grand Narratives

Another inside baseball EA post

The Lightspeed application asks: “What impact will [your project] have on the world? What is your project’s goal, how will you know if you’ve achieved it, and what is the path to impact?”

LTFF uses an identical question, and SFF puts it even more strongly (“What is your organization’s plan for improving humanity’s long term prospects for survival and flourishing?”). 

I’ve applied to all three of these grants at various points, and I’ve never liked this question. It feels like it wants a grand narrative of an amazing, systemic project that will measurably move the needle on x-risk. But I’m typically applying for narrowly defined projects, like “Give nutrition tests to EA vegans and see if there’s a problem”. I think this was a good project. I think this project is substantially more likely to pay off than underspecified alignment strategy research, and arguably has as good a long tail. But when I look at “What impact will [my project] have on the world?” the project feels small and sad. I feel an urge to make things up, and express far more certainty for far more impact than I believe. Then I want to quit, because lying is bad but listing my true beliefs feels untenable.

I’ve gotten better at this over time, but I know other people with similar feelings, and I suspect it’s a widespread issue (I encourage you to share your experience in the comments so we can start figuring that out).

I should note that the pressure for grand narratives has good points; funders are in fact looking for VC-style megahits. I think that narrow projects are underappreciated, but for purposes of this post that’s beside the point: I think many grantmakers are undercutting their own preferred outcomes by using questions that implicitly push for a grand narrative. I think they should probably change the form, but I also think we applicants can partially solve the problem by changing how we interact with the current forms.

My goal here is to outline the problem, gesture at some possible solutions, and create a space for other people to share data. I didn’t think about my solutions very long, I am undoubtedly missing a bunch and what I do have still needs workshopping, but it’s a place to start. 
 

More on the costs of the question

Pushes away the most motivated people

Even if you only care about subgoal G instrumentally, G may be best accomplished by people who care about it for its own sake. Community building (real building, not a euphemism for recruitment) benefits from knowing the organizer cares about participants and the community as people and not just as potential future grist for the x-risk mines.* People repeatedly recommended that a community-builder friend of mine apply for funding, but they struggled because they liked organizing for its own sake, and justifying it in x-risk terms felt bad.

[*Although there are also downsides to organizers with sufficiently bad epistemics.]

Additionally, if G is done by someone who cares about it for its own sake, then it doesn’t need to be done by someone who’s motivated by x-risk. Highly competent, x-risk motivated people are rare and busy, and we should be delighted by opportunities to take things off their plate.
 

Vulnerable to grift

You know who’s really good at creating exactly the grand narrative a grantmaker wants to hear? People who feel no constraint to be truthful. You can try to compensate for this by looking for costly signals of loyalty or care, but those have their own problems. 

 

Punishes underconfidence

Sometimes people aren’t grifting, they really really believe in their project, but they’re wrong. Hopefully grantmakers are pretty good at filtering out those people. But it’s fairly hard to correct for people who are underconfident, and impossible to correct for people who never apply because they’re intimidated. 

Right now people try to solve the second problem by loudly encouraging everyone to apply to their grant. That creates a lot of work for evaluators, and I think is bad for the people with genuinely mediocre projects who will never get funding. You’re asking them to burn their time so that you don’t miss someone else’s project. Having a form that allows for uncertainty and modest goals is a more elegant solution.
 

Corrupts epistemics

Not that much. But I think it’s pretty bad if people are forced to choose between “play the game of exaggerating impact” and “go unfunded”. Even if the game is in fact learnable, it’s a bad use of their time and weakens the barriers to lying in the future. 

Pushes projects to grow beyond their ideal scope

Recently I completed a Lightspeed application for a lit review on stimulants. I felt led by the form to create a grand narrative of how the project could expand, including developing a protocol for n of 1 tests so individuals could tailor their medication usage. I think that having that protocol would be great and I’d be delighted if someone else developed it, but I don’t want to develop it myself. I noticed the feature creep and walked it back before I submitted the form, but the fact that the form pushes this is a cost.  

This one isn’t caused by the impact question alone. The questions asking about potential expansion are a much bigger deal, but would also be costlier to change. There are many projects and organizations where “what would you do with more money?” is a straightforwardly important question.
 

Rewards cultural knowledge independent of merit

There’s nothing stopping you from submitting a grant application with the theory of change “T will improve EA epistemics”, and not justifying it past that. I did that recently, and it worked. But I only felt comfortable doing that because I had a pretty good model of the judges, and because it was a Lightspeed grant, which explicitly says they’ll ask you if they have follow-up questions. Without either of those I think I would have struggled to figure out where to stop explaining. Probably there are equally good projects from people with less knowledge of the grantmakers, and it’s bad that we’re losing those proposals.

Brainstorming fixes

I’m a grant-applier, not a grant-maker. These are some ideas I came up with over a few hours. I encourage other people to suggest more fixes, and grant-makers to tell us why they won’t work or what constraints we’re not aware of. 
 

  • Separate “why do you want to do this?” or “why do you think this is good?” from “how will this reduce x-risk?”. Just separating the questions will reduce the epistemic corruption.
  • Give a list of common instrumental goals that people can treat as terminal for the purpose of this form. They still need to justify the chain between their action and that instrumental goal, but they don’t need to justify why achieving that goal would be good.
    • E.g. “improve epistemic health of effective altruism community”, or “improve productivity of x-risk researchers”.
    • This opens opportunities for goodharting, or for imprecise description leaving you open to implementing bad versions of good goals. I think there are ways to handle this that end up being strongly net beneficial.
    • I would advocate against “increase awareness” and “grow the movement” as goals. Growth is only generically useful when you know what you want the people to do. Awareness of specific things among specific people is a more appropriate scope. 
    • Note that the list isn’t exhaustive, and if people want to gamble on a different instrumental goal that’s allowed. 
  • Let applicants punt to others to explain the instrumental impact of what is to them a terminal goal.
    • My community organizer friend could have used this. Many people encouraged them to apply for funding because they believed the organizing was useful to x-risk efforts. Probably at least a few were respected by grantmakers and would have been happy to make the case. But my friend felt gross doing it themselves, so it created a lot of friction in getting very necessary financing.
  • Let people compare their projects to others. I struggle to say “yeah if you give me $N I will give you M microsurvivals”. How could I possibly know that? But it often feels easy to say “I believe this is twice as impactful as this other project you funded”, or “I believe this is in the nth percentile of grants you funded last year”.
    • This is tricky because grants don’t necessarily mean a funder believes a project is straightforwardly useful. But I think there’s a way to make this doable. 
    • E.g. funders could give examples with percentiles. I think Open Phil did something like this in the last year, although I can’t find it now. The lower percentiles could be hypothetical, to avoid implicit criticism.
  • Lightspeed’s implication that they’ll ask follow-up questions is very helpful. With other forms there’s a drive to cover all possible bases very formally, because I won’t get another chance. With Lightspeed it felt available to say “I think X is good because it will lead to Y”, and let them ask me why Y was good if they don’t immediately agree.
  • When asking about impact, lose the phrase “on the world”. The primary questions are what the goal is, how they’ll know if it’s accomplished, and what the feedback loops are. You can have an optional question asking for the effects of meeting the goal.
    • I like the word “effects” more than “impact”, which is a pretty loaded term within EA and x-risk. 
  • A friend suggested asking “why do you want to do this?”, and having “look, I just like organizing social gatherings” be an acceptable answer. I worry that this will end up being a fake question where people feel the need to create a different grand narrative about how much they genuinely value their project for its own sake, but maybe there’s a solution to that.
  • Maybe have separate forms for large ongoing organizations, and narrow projects done by individuals. There may not be enough narrow projects to justify this, it might be infeasible to create separate forms for all types of applicants, but I think it’s worth playing with. 
  • [Added 7/2]: Ask for 5th/50th/99th/99.9th percentile outcomes, to elicit both dreams and outcomes you can be judged for failing to meet.
  • [Your idea here]



 

I hope the forms change to explicitly encourage things like the above list, but I don’t think applicants need to wait. Grantmakers are reasonable people who I can only imagine are tired of reading mediocre explanations of why community building is important. I think they’d be delighted to be told “I’m doing this because I like it, but $NAME_YOU_HIGHLY_RESPECT wants my results” (grantmakers: if I’m wrong please comment as soon as possible).

Grantmakers: I would love it if you would comment with any thoughts, but especially what kinds of things you think people could do themselves to lower the implied grand-narrative pressure on applications. I’m also very interested in why you like the current forms, and what constraints shaped them.

Grant applicants: I think it will be helpful to the grantmakers if you share your own experiences, how the current questions make you feel and act, and what you think would be an improvement. I know I’m not the only person who is uncomfortable with the current forms, but I have no idea how representative I am. 

Truthseeking when your disagreements lie in moral philosophy

[Status: latest entry in a longrunning series]

My last post on truthseeking in EA vegan advocacy got a lot of comments, but there’s one in particular I want to highlight as highly epistemically cooperative. I have two motivations for this:

  • having just spotlighted some of the most epistemically uncooperative parts of a movement, it feels just to highlight good ones
  • I think some people will find it surprising that I call this comment highly truthseeking and epistemically cooperative, which makes it useful for clarifying how I use those words. 

In a tangential comment thread, I asked Tristan Williams why he thought veganism was more emotionally sustainable than reducetarianism. He said:


Yeah sure. I would need a full post to explain myself, but basically I think that what seems to be really important when going vegan is standing in a certain sort of loving relationship to animals, one that isn’t grounded in utility but instead a strong (but basic) appreciation and valuing of the other. But let me step back for a minute

I guess the first time I thought about this was with my university EA group. We had a couple of hardcore utilitarians, and one of them brought up an interesting idea one night. He was a vegan, but he’d been offered some mac and cheese, and in similar thinking to above (that dairy generally involves less suffering than eggs or chicken for ex) he wondered if it might actually be better to take the mac and donate the money he would have spent to an animal welfare org. And when he roughed up the math, sure enough, taking the mac and donating was somewhat significantly the better option.  

But he didn’t do it, nor do I think he changed how he acted in the future. Why? I think it’s really hard to draw a line in the sand that isn’t veganism that stays stable over time. For those who’ve reverted, I’ve seen time and again a slow path back, one where it starts with the less bad items, cheese is quite frequent, and then naturally over time one thing after another is added to the point that most wind up in some sort of reduceitarian state where they’re maybe 80% back to normal (I also want to note here, I’m so glad for any change, and I cast no stones at anyone trying their best to change). And I guess maybe at some point it stops being a moral thing, or becomes some really watered down moral thing like how much people consider the environment when booking a plane ticket. 

I don’t know if this helps make it clear, but it’s like how most people feel about harm to younger kids. When it comes to just about any serious harm to younger kids, people are generally against it, like super against it, a feeling of deep caring that to me seems to be one of the strongest sentiments shared by humans universally. People will give you some reasons for this i.e. “they are helpless and we are in a position of responsibility to help them” but really it seems to ground pretty quickly in a sentiment of “it’s just bad”. 

To have this sort of love, this commitment to preventing suffering, with animals to me means pretty much just drawing the line at sentient beings and trying to cultivate a basic sense that they matter and that “it’s just bad” to eat them. Sure, I’m not sure what to do about insects, and wild animal welfare is tricky, so it’s not nearly as easy as I’m making it seem. And it’s not that I don’t want to have any idea of some of the numbers and research behind it all, I know I need to stay up to date on debates on sentience, and I know that I reference relative measures of harm often when I’m trying to guide non-veg people away from the worst harms. But what I’d love to see one day is a posturing towards eating animals like our posturing towards child abuse, a very basic, loving expression that in some sense refuses the debate on what’s better or worse and just casts it all out as beyond the pale. 

And to try to return to earlier, I guess I see taking this sort of position as likely to extend people’s time spent doing veg-related diets, and I think it’s just a lot trickier to have this sort of relationship when you are doing some sort of utilitarian calculus of what is and isn’t above the bar for you (again, much love to these people, something is always so much better than nothing). This is largely just a theory, I don’t have much to back it up, and it would seem to explain some cases of reversion I’ve seen but certainly not all, and I also feel like this is a bit sloppy because I’d really need a post to get at this hard to describe feeling I have. But hopefully this helps explain the viewpoint a bit better, happy to answer any questions 🙂

It’s true that this comment doesn’t use citations or really many objective facts. But what it does have is: 

  • A clear description of what the author believes 
  • Clear identifiers of the author’s cruxes for those beliefs
  • It doesn’t spell out every possible argument but does leave hooks, so if I’m confused it’s easy to ask clarifying questions
  • Disclaimers against common potential misinterpretations. 
  • Forthright description of its own limits
  • Proper hedging and sourcing on the factual claims it does make

This is one form of peak epistemic cooperation. Obviously it’s not the only form, sometimes I want facts with citations and such, but usually only after philosophical issues like this one have been resolved. Sometimes peak truthseeking looks like sincerely sharing your beliefs in ways that invite other people to understand them, which is different than justifying them. And I’d like to see more of that, everywhere.

PS. I know I said the next post would be talking about epistemics in the broader effective altruism community. Even as I wrote that sentence I thought “Are you sure? That’s been your next post for three or four posts now, writing this feels risky”, and I thought “well I really want the next post out before EAG Boston and that doesn’t leave time for any more diversions, we’re already halfway done and caveating ‘next’ would be such a distraction…”. Unsurprisingly I realized the post was less than halfway done and I can’t get the best version done in time for EAG Boston, at which point I might as well write it at a leisurely pace.

PPS. Tristan saw a draft of this post before publishing and had some power to veto or edit it. Normally I’d worry that doing so would introduce some bias, but given the circumstances it felt like the best option. I don’t think anyone can accuse me of being unwilling to criticize EA vegan advocacy epistemics, and I was worried that hearing “hey I want to quote your pro-veganism comment in full in a post, don’t worry it will be complimentary, no I can’t show you the post you might bias it” would be stressful.

EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem

Introduction

Effective altruism prides itself on truthseeking. That pride is justified in the sense that EA is better at truthseeking than most members of its reference category, and unjustified in that it is far from meeting its own standards. We’ve already seen dire consequences of the inability to detect bad actors who deflect investigation into potential problems, but by its nature you can never be sure you’ve found all the damage done by epistemic obfuscation because the point is to be self-cloaking. 

My concern here is for the underlying dynamics of EA’s weak epistemic immune system, not any one instance. But we can’t analyze the problem without real examples, so individual instances need to be talked about. Worse, the examples that are easiest to understand are almost by definition the smallest problems, which makes any scapegoating extra unfair. So don’t.

This post focuses on a single example: vegan advocacy, especially around nutrition. I believe vegan advocacy as a cause has both actively lied and raised the cost for truthseeking, because they were afraid of the consequences of honest investigations. Occasionally there’s a consciously bad actor I can just point to, but mostly this is an emergent phenomenon from people who mean well, and have done good work in other areas. That’s why scapegoating won’t solve the problem: we need something systemic. 

In the next post I’ll do a wider but shallower review of other instances of EA being hurt by a lack of epistemic immune system. I already have a long list, but it’s not too late for you to share your examples

Definitions

I picked the words “vegan advocacy” really specifically. “Vegan” sometimes refers to advocacy and sometimes to just a plant-exclusive diet, so I added “advocacy” to make it clear.

I chose “advocacy” over “advocates” for most statements because this is a problem with the system. Some vegan advocates are net truthseeking and I hate to impugn them. Others would like to be epistemically virtuous but end up doing harm due to being embedded in an epistemically uncooperative system. Very few people are sitting on a throne of plant-based imitation skulls twirling their mustache thinking about how they’ll fuck up the epistemic commons today. 

When I call for actions I say “advocates” and not “advocacy” because actions are taken by people, even if none of them bear much individual responsibility for the problem. 

I specify “EA vegan advocacy” and not just “vegan advocacy” not because I think mainstream vegan advocacy is better, but because 1. I don’t have time to go after every wrong advocacy group in the world. 2. Advocates within Effective Altruism opted into a higher standard. EA has a right and responsibility to maintain the standards of truth it advocates, even if the rest of the world is too far gone to worry about. 

Audience

If you’re entirely uninvolved in effective altruism you can skip this, it’s inside baseball and there’s a lot of context I don’t get into.

How EA vegan advocacy has hindered truthseeking

EA vegan advocacy has both pushed falsehoods and punished people for investigating questions it doesn’t like. It manages this even for positions that 90%+ of effective altruism and the rest of the world agree with, like “veganism is a constraint”. I don’t believe its arguments convince anyone directly, but they end up having a big impact by making inconvenient beliefs too costly to discuss. This means new entrants to EA are denied half of the argument, and harm themselves due to ignorance.

This section outlines the techniques I’m best able to name and demonstrate. For each technique I’ve included examples. Comments on my own posts are heavily overrepresented because they’re the easiest to find; “go searching through posts on veganism to find the worst examples” didn’t feel like good practice. I did my best to quote and summarize accurately, although I made no attempt to use a representative sample. I think this is fair because a lot of the problem lies in the fact that good comments don’t cancel out bad, especially when the good comments are made in parallel rather than directly arguing with the bad. I’ve linked to the source of every quote and screenshot, so you can (and should) decide for yourself. I’ve also created a list of all of my own posts I’m drawing from, so you can get a holistic view.

My posts:

I should note I quote some commenters and even a few individual comments in more than one section, because they exhibit more than one problem. But if I refer to the same comment multiple times in a row I usually only link to it once, to avoid implying more sources than I have. 

My posts were posted on my blog, LessWrong, and EAForum. In practice the comments I drew from came from LessWrong (white background) and EAForum (black background).  I tried to go through those posts and remove all my votes on comments (except the automatic vote for my own comments) so that you could get an honest view of how the community voted without my thumb on the scale, but I’ve probably missed some, especially on older posts. On the main posts, which received a lot of traffic, I stuck to well-upvoted comments, but I included some low (but still positive) karma comments from unpopular posts. 

The goal here is to make these anti-truthseeking techniques legible for discussion, not develop complicated ways to say “I don’t like this”, so when available I’ve included counter examples. These are comments that look similar to the ones I’m complaining about, but are fine or at least not suffering from the particular flaw in that section. In doing this I hope to keep the techniques’ definitions narrow.

Active suppression of inconvenient questions

A small but loud subset of vegan advocacy will say outright that you shouldn’t say true things, because it leads to outcomes they dislike. This accusation is even harsher than “not truthseeking”, and would normally be very hard to prove. If I say “you’re saying that because you care more about creating vegans than the health of those you create”, and they say “no I’m not”, I don’t really have a comeback. I can demonstrate that they’re wrong, but not their motivation. Luckily, a few people said the quiet part out loud.

Commenter Martin Soto pushed back very hard on my first nutrition testing study. Finally I asked him outright if he thought it was okay to share true information about vegan nutrition. His response was quite thoughtful and long, so you should really go read the whole thing, but let me share two quotes

He goes on to say:

And in a later comment

EDIT 2023-10-03: Martin disputes my summary of his comments. I think it’s good practice to link to disputes like this, even though I stand by my summary. I also want to give a heads-up that I see his comments in the dispute thread as continuing the patterns I describe (which makes that thread a tax on the reader). If you want to dig into this, I strongly suggest you first read his original comments and come up with your own summary, so you can compare that to each of ours.

The charitable explanation here is that my post focuses on naive veganism, and Soto thinks that’s a made-up problem. He believes this because all of the vegans he knows (through vegan advocacy networks) are well-educated on nutrition. There are a few problems here, but the most fundamental is that enacting his desired policy of suppressing public discussion of nutrition issues with plant-exclusive diets will prevent us from getting the information to know if problems are widespread. My post and a commenter’s report on their college group are apparently the first time he’s heard of vegans who didn’t live and breathe B12. 

I have a lot of respect for Soto for doing the math and so clearly stating his position that “the damage to people who implement veganism badly is less important to me than the damage to animals caused by eating them”. Most people flinch away from explicit trade-offs like that, and I appreciate that he did them and own the conclusion. But I can’t trust his math because he’s cut himself off from half the information necessary to do the calculations. How can he estimate the number of vegans harmed or lost due to nutritional issues if he doesn’t let people talk about them in public?

In fact the best data I found on this was from Faunalytics, which found that ~20% of veg*ns drop out due to health reasons. This suggests to me a high chance his math is wrong and will lead him to do harm by his own standards.

EDIT 2023-10-04: Using Faunalytics numbers for self-reported health issues and improvements after quitting veg*nism, I calculated that 20% of veg*ns develop health issues. This number is sensitive to your assumptions; I consider 20% conservative but it could be an overestimate. I encourage you to read the whole post and play with my model, and of course read the original work.

Most people aren’t nearly this upfront. They will go through the motions of calling an idea incorrect before emphasizing how it will lead to outcomes they dislike. But the net effect is a suppression of the exploration of ideas they find inconvenient. 

This post on Facebook is a good example. Normally I would consider facebook posts out of bounds, especially ones this old (over five years). Facebook is a casual space and I want people to be able to explore ideas without being worried that they’re creating a permanent record that will be used against them. In this case I felt that because the post was permissioned to public and a considered statement (rather than an off the cuff reply), the truth value outweighed the chilling effect. But because it’s so old and I don’t know the author’s current opinion, I’m leaving out their name and not linking to the post.

The author is a midlist EA: I’d heard of them for other reasons, but they’re certainly not EA-famous.

There are posts very similar to this one I would have been fine with, maybe even joyful about. You could present evidence against the claims that X is harmful, or push people to verify things before repeating them, or suggest we reserve the word poison for actual kill-you-dead molecules and not complicated compound constructions with many good parts and only weak evidence of mild long-term negative effects. But what they actually did was name-check the idea that X is fine before focusing on the harm to animals caused by repeating the claim- which is exactly what you’d expect if the health claims were true but inconvenient. I don’t know what this author actually believes, but I do know focusing on the consequences when the facts are in question is not truthseeking.

A subtler version comes from the AHS-2 post. At the time of this comment the author, Rockwell, described herself as the leader of EA NYC and an advisor to philanthropists on animal suffering, so this isn’t some rando having a feeling. This person has some authority.

This comment more strongly emphasizes the claim that my beliefs are wrong, not just inconvenient. And if they’d written the counter-argument they promised, I’d be putting this in the counter-examples section. But it’s been three months and they have not written anything where I can find it, nor responded to my inquiries. So even if the literal claim were correct, she’s using a technique whose efficacy is independent of truth.

Over on the Change My Mind post the top comment says that vegan advocacy is fine because it’s no worse than fast food or breakfast cereal ads.

I’m surprised someone would make this comment. But what really shocks me is the complete lack of pushback from other vegan advocates. If I heard an ally describe our shared movement as no worse than McDonald’s, I would injure myself in my haste to repudiate them.

Counter-Examples

This post on EAForum came out while I was finishing this post. The author asks if they should abstain from giving bad reviews to vegan restaurants, because it might lead to more animal consumption- which would be a central example of my complaint. But the comments are overwhelmingly “no, there’s not even a good consequentialist argument for that”, and the author appears to be taking that to heart. So from my perspective this is a success story.

Ignore the arguments people are actually making

I’ve experienced this pattern way too often.

Me: goes out of my way to say not-X in a post
Comment: how dare you say X! X is so wrong!
Me: here’s where I explicitly say not-X.
*crickets*

This is by no means unique to posts about veganism. “They’re yelling at me for an argument I didn’t make” is a common complaint of mine. But it happens so often, and so explicitly, in the vegan nutrition posts. Let me give some examples.

My post:

Commenter:

My post:

Commenters:

My post:

Commenter: 

My post: 

Commenter:

My post:

Commenter:

You might be thinking “well those posts were very long and honestly kind of boring, it would be unreasonable to expect people to read everything”. But the length and precision are themselves a response to people arguing with positions I don’t hold (and failing to update when I clarify). The only things I can do are spell out all of my beliefs or not spell out all of my beliefs, and either way ends with comments arguing against views I don’t have. 

Frame control/strong implications not defended/fuzziness

This is the hardest one to describe. Sometimes people say things, and I disagree, and we can hope to clarify that disagreement. But sometimes people say things and responding is like nailing jello to a wall. Their claims aren’t explicit, or they’re individually explicit but aren’t internally consistent, or play games with definitions. They “counter” statements in ways that might score a point in debate club but don’t address the actual concern in context. 

One example is the top-voted comment on LW on Change My Mind

Over a very long exchange I attempt to nail down his position: 

  • Does he think micronutrient deficiencies don’t exist? No, he agrees they do.
  • Does he think that they can’t cause health issues? No, he agrees they do.
  • Does he think this just doesn’t happen very often, or is always caught? No, if anything he thinks the Faunalytics study underestimates the veg*n attrition due to medical issues.

So what exactly does he disagree with me on? 

He also had a very interesting exchange with another commenter. That thread got quite long, and fuzziness by its nature doesn’t lend itself to excerpts, so you should read the whole thing, but I will share highlights. 

Before the screenshot: Wilkox acknowledges that B12 and iron deficiencies can cause fatigue, and veganism can cause these deficiencies, but it’s fine because if people get tired they can go to a doctor.

That reply doesn’t contain any false statements, and would be perfectly reasonable if we were talking about ER triage protocols. But it’s irrelevant when the conversation is “can we count on veganism-induced fatigue being caught?”. (The answer is no, and only some of the reasons have been brought up here)

You can see how the rest of this conversation worked out in the Sound and Fury section.

A much, much milder example can be seen in What vegan food resources have you found useful?. This was my attempt to create something uncontroversially useful, and I’d call it a modest success. The post had 20-something karma on LW and EAForum, and there were several useful-looking resources shared on EAForum. But it also got the following comment on LW: 

I picked this example because it only takes a little bit of thought to see the jujitsu, so little it barely counts. He disagreed with my implicit claim that… well okay here’s the problem. I’m still not quite sure where he disagrees. Does he think everyone automatically eats well as a vegan? That no one will benefit from resources like veganhealth.org? That no one will benefit from a cheat sheet for vegan party spreads? That there is no one for whom veganism is challenging? He can’t mean that last one because he acknowledges exceptions in his later comment, but only because I pushed back. Maybe he thinks that the only vegans who don’t follow his steps are those with medical issues, and that no-processed-food diets are too unpopular to consider? 

I don’t think this was deliberately anti-truthseeking, because if it was he would have stopped at “nothing special” instead of immediately outlining the special things his partner does. That was fairly epistemically cooperative. But it is still an example of strong claims made only implicitly. 

Counter-Examples

I think this comment makes a claim (“vegans moving to naive omnivorism will hurt themselves”) clearly, and backs it up with a lot of details.

The tone is kind of obnoxious and he’s arguing with something I never claimed, but his beliefs are quite clear. I can immediately understand which beliefs of his I agree with (“vegans moving to naive omnivorism will hurt themselves” and “that would be bad”) and make good guesses at implicit claims I disagree with (“and therefore we should let people hurt themselves with naive veganism”? “I [Elizabeth] wouldn’t treat naive mass conversion to omnivorism seriously as a problem”?). That’s enough to count as epistemically cooperative.

Sound and fury, signifying no substantial disagreement 

Sometimes someone comments with an intense, strongly worded, perhaps actively hostile, disagreement. After a laborious back and forth, the problem dissolves: they acknowledge I never held the position they were arguing with, or they don’t actually disagree with my specific claims. 

Originally I felt happy about these, because “mostly agreeing” is an unusually positive outcome for that opening. But these discussions are grueling. It is hard to express kindness and curiosity towards someone yelling at you for a position you explicitly disclaimed. Any one of these stories would be a success but en masse they amount to a huge tax on saying anything about veganism, which is already quite labor intensive.

The discussions could still be worth it if they changed the arguer’s mind, or at least how they approached the next argument. But I don’t get the sense that’s what happens. Neither of us has changed our minds about anything, and I think they’re just as likely to start a similar fight the next week.

I do feel like vegan advocates are entitled to a certain amount of defensiveness. They encounter large amounts of concern trolling and outright hostility, and it makes sense that that colors their interactions. But that allowance covers one comment, maybe two, not three to eight (Wilkox, depending on which ones you count). 

For example, I’ve already quoted Wilkox’s very fuzzy comment (reminder: this was the top voted comment on that post on LW). That was followed by a 13+ comment exchange in which we eventually found he had little disagreement with any of my claims about vegan nutrition, only the importance of these facts. There really isn’t a way for me to screenshot this: the length and lack of specifics is the point.

You could say that the confusion stemmed from poor writing on my part, but:

I really appreciate the meta-honesty here, but since the exchange appears to have eaten hours of both of our time just to dig ourselves out of a hole, I can’t get that excited about it. 

Counter-Examples

I want to explicitly note that Sound and Fury isn’t the same as asking questions or not understanding a post. E.g. here Ben West identifies a confusion, asks me, and accepts both my answer and an explanation of why answering is difficult. 

Or in that same post, someone asked me to define nutritionally dense. It took a bit for me to answer and we still disagreed afterward, but it was a great question and the exchange felt highly truthseeking.  

Bad sources, badly handled 

Citations should be something of a bet: if the citation (the source itself or your summary of it) is high quality and supports your point, that should move people closer to your views. But if they identify serious relevant flaws, that should move both you and your audience closer to their point of view. Of course our beliefs are based on a lot of sources and it’s not feasible or desirable to really dig into all of them for every disagreement, so the bet may be very small. But if you’re not willing to defend a citation, you shouldn’t make it.

What I see in EA vegan advocacy is deeply terrible citations, thrown out casually and abandoned when inconvenient. I’ve made something of a name for myself checking citations and otherwise investigating factual claims in works of nonfiction. Of everything I’ve investigated, I think citations from EA vegan advocacy have the worst effort:truth ratio. Not the most outright falsehoods (I’ve read some pretty woo stuff, but woo can be dismissed quickly); citations in vegan advocacy are often revealed to be terrible only after great effort.

And having put in that effort, my reward is usually either crickets, or a new terrible citation. Sometimes we will eventually drill into “I just believe it”, which is honestly fine. We don’t live our lives to the standard of academic papers. But if that’s your reason, you need to state it from the beginning. 

For example, in the top-voted comment on the Change My Mind post on EAF, Rockwell (head of EA NYC) includes five links. Only links 1 and 4 are problems, but I’ll describe them all in order to avoid confusion.

Of the five links: 

  1. Wilkox’s comment on the LW version of the post, where he eventually agrees that veganism requires testing and supplementation for many people (although most of that exchange hadn’t happened at the time of linking).
  2. cites my past work, if anything too generously.
  3. an estimation of nutrient deficiency in the US. I don’t love that this uses dietary intake as opposed to testing values (people’s needs vary so wildly), but at least it used EAR and not RDA. I’d want more from a post but for a comment this is fine.
  4. an absolutely atrocious article, which the comment further misrepresents. We don’t have time to get into all the flaws in that article, so I’ve put my first hour of criticisms in the appendix. What really gets me here is that I would have agreed the standard American diet sucks without asking for a source. I thought I had conceded that point preemptively, albeit not naming the Standard American Diet explicitly.

    And if she did feel a need to go the extra mile on rigor for this comment, it’s really not that hard to find decent-looking research about the harms of the Standard Shitty American Diet. I found this paper on heart disease in 30 seconds, and most of that time was spent waiting for Elicit to load. I don’t know if it’s actually good, but it is not so obviously farcical as the cited paper.
  5. The fifth link goes to a description of the Standard American Diet. 

Rockwell did not respond to my initial reply (that fixing vegan issues is easier than fixing SSAD), or my asking if that paper on the risks of meat eating was her favorite.

A much more time-consuming version of this happened with Adventist Health Study-2. Several people cited the AHS-2 as a pseudo-RCT that supported veganism (EDIT 2023-10-03: as superior to low meat omnivorism). There’s one commenter on LessWrong and two on EAForum (one of whom had previously co-authored a blog post on the study and offered to answer questions). As I discussed here, that study is one of the best we have on nutrition and I’m very glad people brought it to my attention. But calling it a pseudo-RCT that supports veganism is deeply misleading. It is nowhere near randomized, and doesn’t cleanly support veganism even if you pretend it is.

(EDIT 2023-10-03: To be clear, the noise in the study overwhelms most differences in outcomes, even ignoring the self-sorting. My complaint is that the study was presented as strong evidence in one direction, when it’s both very weak and, if you treat it as strong, points in a different direction than reported. One commenter has said she only meant it as evidence that a vegan diet can work for some people, which I agree with, as stated in the post she was responding to. She disagrees with other parts of my summary as well, you can read more here)

It’s been three months, and none of the recommenders have responded to my analysis of the main AHS-2 paper, despite repeated requests. 

But finding that a paper is lower quality than claimed and supports an entirely different conclusion is still not the worst-case scenario. The worst outcome is citation whack-a-mole.

A good example of this is from the post “Getting Cats Vegan is Possible and Imperative”, by Karthik Sekar. Karthik is a vegan author and data scientist at a plant-based meat company. 

[Note that I didn’t zero out my votes on this post’s comments, because it seemed less important for posts I didn’t write]

Karthik cites a lot of sources in that post. I picked what looked like his strongest source and investigated. It was terrible. It was a review article, so checking it required reading multiple studies. Of the cited studies, only 4 (with 39 subjects combined) use blood tests rather than owner reports, and more than half of those were given vegetarian diets, not vegan (even though the table header says vegan). The only RCT didn’t include carnivorous diets.

Karthik agrees that paper (that he cited) is not making its case “strong nor clear”, and cites another one (which AFAICT was not in the original post).

I dismiss the new citation on the basis of “motivated [study] population and minimal reporting”. 

He retreats to “[My] argument isn’t solely based on the survey data. It’s supported by fundamentals of biochemistry, metabolism, and digestion too […] Mammals such as cats will digest food matter into constituent molecules. Those molecules are chemically converted to other molecules–collectively, metabolism–, and energy and biomass (muscles, bones) are built from those precursors. For cats to truly be obligate carnivores, there would have to be something exceptional about meat: (A) There would have to be essential molecules–nutrients–that cannot be sourced anywhere else OR (B) the meat would have to be digestible in a way that’s not possible with plant matter. […So any plant-based food that passes AAFCO guidelines is nutritionally complete for cats. Ami does, for example.]

I point out that AAFCO doesn’t think meeting their guidelines is necessarily sufficient. I expected him to dismiss this as corporate ass-covering, and there’s a good chance he’d be right. But he didn’t.

Finally, he gets to his real position:

Which would have been a fine aspirational statement, but then why include so many papers he wasn’t willing to stand behind? 

On that same post someone else says that they think my concerns are a big deal, and Karthik probably can’t convince them without convincing me. Karthik responds:

So he’s conceded that his study didn’t show what he claimed. And he’s not really defending the AAFCO standards. But he’s really sure this will work anyway? And I’m the one who won’t update their beliefs. 

In a different comment the same someone else notes a weird incongruity in the paper. Karthik doesn’t respond.

This is the real risk of the bad sources: hours of deep intellectual work to discover that his argument boils down to a theoretical claim the author could have stated at the beginning. “I believe vegan cat food meets these numbers and meeting these numbers is sufficient” honestly isn’t a terrible argument, and I’d have respected it plainly stated, especially since he explicitly calls for RCTs. Or I would have, if he didn’t view those RCTs primarily as a means to prove what he already knows.

Counter-Examples

This commenter starts out pretty similarly to the others, with a very limited paper implied to have very big implications. But when I push back on the serious limitations of the study, he owns the issues and says he only ever meant the paper to support a more modest claim (while still believing the big claim he did make?). 

Taxing Facebook

When I joined EA Facebook in 2014, it was absolutely hopping. Every week I met new people and had great discussions with them where we both walked away smarter. I’m unclear when this trailed off because I was drifting away from EA at the same time, but let’s say the golden age was definitively over by 2018. Facebook was where I first noticed the pattern with EA vegan advocacy. 

Back in 2014 or 2015, Seattle EA watched some horrifying factory farming documentaries, and we were each considering how we should change our diets in light of that new information. We tried to continue the discussion on Facebook, only to have Jacy Reese Anthis (who was not a member of the local group and AFAIK had never been to Seattle) repeatedly insist that the only acceptable compromise was vegetarianism, humane meat doesn’t exist, and he hadn’t heard of health conditions benefiting from animal products so my doctor was wrong (or maybe I made it up?). 

I wish I could share screenshots on this, but the comments are gone (I think because the account has been deleted). I’ve included shots of the post and some of my comments (one of which refers to Jacy obstructing an earlier conversation, which I’d forgotten about). A third commenter has been cropped out, but I promise it doesn’t change the context.

(his answer was no, and that either I or my doctor were wrong because Jacy had never heard of any medical issue requiring consumption of animal products)

That conversation went okay. Seattle EA discussed suffering math on different vertebrates, someone brought up eating bugs, Brian Tomasik argued against eating bugs. It was everything an EA conversation should be.

But it never happened again.

Because this kind of thing happened every time animal products, diet, and health came up anywhere on EA Facebook. The commenters weren’t always as aggressive as Jacy, but they added a tremendous amount of cumulative friction. An omnivore would ask if lacto-vegetarianism worked, and the discussion would get derailed by animal advocates insisting you didn’t need milk.  Conversations about feeling hungry at EAG inevitably got a bunch of commenters saying they were fine, as if that was a rebuttal. 

Jeff Kaufman mirrors his FB posts onto his actual blog, which makes me feel more okay linking to it. In this post he makes a pretty clear point: veganism can be any of cheaper, or healthier, or tastier, but not all at once. He gets a lot of arguments. One person argues that no one thinks that; they just care about animals more.

One vegetarian says they’d like to go vegan but just can’t beat eggs for their mix of convenience, price, macronutrients, and micronutrients. She gets a lot of suggestions for substitutes, all of which flunk on at least one criterion.  Jacy Reese Anthis has a deleted comment, which from the reply looks like he asserted the existence of a substitute without listing one. 

After a year or two of this, people just stopped talking about anything except the vegan party line on public FB. We’d bitch to each other in private, but that was it. And that’s why, when a new generation of people joined EA and were exposed to the moral argument for veganism, there was no discussion of the practicalities visible to them. 

[TBF they probably wouldn’t have seen the conversations on FB anyway, I’m told that’s an old-person thing now. But the silence has extended itself]

Ignoring known falsehoods until they’re a PR problem

This is old news, but: for many years ACE said leafletting was great. Lots of people (including me and some friends, in 2015) criticized their numbers. This did not seem to have much effect; they’d agree their eval was imperfect and they intended to put up a disclaimer, but it never happened.

In late 2016 a scathing anti-animal-EA piece was published on Medium, making many incendiary accusations, including that the leafleting numbers are made up. I wouldn’t call that post very epistemically virtuous; it was clearly hoping to inflame more than inform. But within a few weeks (months?), ACE put up a disavowal of the leafletting numbers.

I unfortunately can’t look up the original correction or when they put it up; archive.org behaves very weirdly around animalcharityevaluators.org. As I remember it made the page less obviously false, but the disavowal was tepid and not a real fix. Here’s the 2022 version:

There are two options here: ACE was right about leafleting, and caved to public pressure rather than defend their beliefs. Or ACE was wrong about leafleting (and knew they were wrong, because they conceded in private when challenged) but continued to publicly endorse it.

Why I Care

I’ve thought vegan advocates were advocating falsehoods and stifling truthseeking for years. I never bothered to write it up, and generally avoided public discussion, because that sounded like a lot of work for absolutely no benefit. Obviously I wasn’t going to convince the advocates of anything, because finding the truth wasn’t their goal, and everyone else knew it so what did it matter? I was annoyed at them on principle for being wrong and controlling public discussion with unfair means, but there are so many wrong people in the world and I had a lot on my plate. 

I should have cared more about the principle.

I’ve talked before about the young Effective Altruists who converted to veganism with no thought for nutrition, some of whom suffered for it. They trusted effective altruism to have properly screened arguments and tell them what they needed to know. After my posts went up I started getting emails from older EAs who weren’t getting the proper testing either; I didn’t know because I didn’t talk to them in private, and we couldn’t discuss it in public. 

Which is the default story of not fighting for truth. You think the consequences are minimal, but you can’t know because the entire problem is that information is being suppressed. 

What do EA vegan advocates need to do?

  1. Acknowledge that nutrition is a multidimensional problem, that veganism is a constraint, and that adding constraints usually makes problems harder, especially if you’re already under several.
  2. Take responsibility for the nutritional education of vegans you create. This is not just an obligation, it’s an opportunity to improve the lives of people who are on your side. If you genuinely believe veganism can be nutritionally whole, then every person doing it poorly is suffering for your shared cause for no reason.
    1. You don’t even have to single out veganism. For purposes of this point I’ll accept “All diet switches have transition costs and veganism is no different, and the long term benefits more than compensate”. I don’t think your certainty is merited, and I’ll respect you more if you express uncertainty, but I understand that some situations require short messaging and am willing to allow this compression.
  3. Be epistemically cooperative, at least within EA spaces. I realize this is a big ask because in the larger world people are often epistemically uncooperative towards you. But obfuscation is a symmetric weapon and anger is not a reason to believe someone. Let’s deescalate this arms race and have both sides be more truthseeking.

    What does epistemic cooperation mean?
    1. Epistemic legibility. Make your claims and cruxes clear. E.g. “I don’t believe iron deficiency is a problem because everyone knows to take supplements and they always work” instead of “Why are you bothering about iron supplements?”
    2. Respond to the arguments people actually make, or say why you’re not. Don’t project arguments from one context onto someone else. I realize this one is a big ask, and you have my blessing to go meta and ask work from the other party to make this viable, as long as it’s done explicitly. 
    3. Stop categorically dismissing omnivores’ self-reports. I’m sure many people do overestimate the difficulties of veganism, but that doesn’t mean it’s easy or even possible for everyone.
      1. A scientific study, no matter how good, does not override a specific person telling you they felt hungry at a specific time. 
    4. If someone makes a good argument or disproves your source, update accordingly. 
  4. Police your own. If someone makes a false claim or bad citation while advocating veganism, point it out. If someone dismisses a detailed self-report of a failed attempt at veganism, push back. 

All Effective Altruists need to stand up for our epistemic commons

Effective Altruism is supposed to mean using evidence and reason to do the most good. A necessary component of that is accurate evidence. All the spreadsheets and moral math in the world mean nothing if the input is corrupted. There can be no consequentialist argument for lying to yourself or allies[1] because without truth you can’t make accurate utility calculations[2]. Garbage in, garbage out.

One of EA’s biggest assets is an environment that rewards truthseeking more than average. Without uniquely strong truthseeking, EA is just another movement of people who are sure they’re right. But high truthseeking environments are fragile, exploiting them is rewarding, and the costs of violating them are distributed and hard to measure. The only way EA’s environment has a chance of persisting is if the community makes preserving it a priority. Even when it’s hard, even when it makes people unhappy, and even when the short term rewards of defection are high.

How do we do that? I wish I had a good answer. The problem is complicated and hard to reason about, and I don’t think we understand it enough to fix it. Thus far I’ve focused on vegan advocacy as a case study in destruction of the epistemic commons because its operations are relatively unsophisticated and easy to understand. Next post I’ll be giving more examples from across EA, but those will still have a bias towards legibility and visibility. The real challenge is creating an epistemic immune system that can fight threats we can’t even detect yet. 


Acknowledgments

Thanks to the many people I’ve discussed this with over the past few months. 

Thanks to Patrick LaVictoire and Aric Floyd for beta reading this post.

Thanks to Lightspeed Grants for funding this work. Note: a previous post referred to my work on nutrition and epistemics as unpaid after a certain point. That was true at the time and I had no reason to believe it wouldn’t stay true, but Lightspeed launched a week after that post and was an unusually good fit so I applied. I haven’t received a check yet but they have committed to the grant so I think it’s fair to count this as paid. 

Appendix

Terrible anti-meat article

  • The body of the paper is an argument between two people, but the abstract only includes the anti-animal-product side.
  • The “saturated fat” and “cholesterol” sections take as a given that any amount of these is bad, without quantifying or saying why. 
  • The “heme iron” section does explain why excess iron is bad, but ignores the risks of too little. Maybe he also forgot women exist? 
  • The lactose section does cite two papers, one of which does not support his claim, and the other of which is focused on mice who received transplants. It probably has a bunch of problems but it was too much work to check, and even if it doesn’t, it’s about a niche group of mice. 
  • The next section claims milk contains estrogen and thus raises circulating estrogen, which increases cancer risk.
    • It cites one paper supporting a link with breast cancer. That paper found a correlation with high fat but not low fat dairy, and the correlation was not statistically significant. 
    • It cites another paper saying dairy impairs sperm quality. This study was done at a fertility clinic, so will miss men with healthy sperm counts and is thus worthless. Ignoring that, it found a correlation of dairy fat with low sperm count, but low-fat dairy was associated with higher sperm count. Again, among men with impaired fertility. 
  • The “feces” section says that raw meat contains harmful bacteria (true), but nothing about how that translates to the risks of cooked meat.

That’s the first five subsections. The next few maybe look better sourced, but I can’t imagine they’re good enough to redeem the paper. I am less convinced of the link between excess meat and health issues than I was before I read it, because surely if the claim was easy to prove the paper would have better supporting evidence, or the EA Forum commenter would have picked a better source.

[Note: I didn’t bother reading the pro-meat section. It may also be terrible, but this does not affect my position.] 

  1. ”Are you saying I can’t lie to Nazis about the contents of my attic?” No more so than you’re banned from murdering them or slashing their tires. Like, you should probably think hard about how it fits into your strategy, but I assumed “yourself or allies” excluded Nazis for everyone reading this. 

    “Doesn’t that make the definition of enemies extremely morally load bearing?” It reflects that fact, yes. 

    “So vegan advocates can morally lie as long as it’s to people they consider enemies?”  I think this is, at a minimum, defensible and morally consistent. In some cases I think it’s admirable, such as lying to get access to a slaughterhouse in order to take horrifying videos. It’s a declaration of war, but I assume vegan advocates are proud to declare the meat industry their enemy. ↩︎
  2. I’ll allow that it’s conceptually possible to make deontological or virtue ethics arguments for lying to yourself or allies, but it’s difficult, and the arguments are narrow and/or wrong. Accurate beliefs turn out to be critical to getting good outcomes in all kinds of situations.  ↩︎

Edits

You will notice a few edits in this post, which are marked with the edit date. The original text is struck through.

When I initially published this post on 2023-09-28, several images failed to copy over from the google doc to the shitty WordPress editor. These were fixed within a few hours.

I tried to link to sources for every screenshot (except the Facebook ones). On 2023-10-05 I realized that a lot of the links were missing (but not all, which is weird) and manually added them back in. In the process I found two screenshots that never had links, even in the google doc, and fixed those. Halfway through this process the already shitty editor flat out refused to add links to any more images. This post is apparently already too big for WordPress to handle, so every attempted action took at least 60 seconds, and I was constantly afraid I was going to make things worse, so for some images the link is in the surrounding text. 

If anyone knows of a blogging website that will gracefully accept cut and paste from google docs, please let me know. That is literally all an editor takes to be a success in my book and last time I checked I could not find a single site that managed it.

Adventist Health Study-2 supports pescetarianism more than veganism

Or: how the Adventist Health Study-2 had a pretty good study design but was oversold in popular description, and then misrepresented its own results.

When I laid out my existing beliefs on veganism and nutrition I asked people for evidence to the contrary. By far the most promising thing people shared was the 7th Day Adventist Health Studies. I got very excited because the project promised something of a miracle in nutrition science: an approximate RCT. I read the paper that included vegan results, and while it’s still very good as far as nutrition studies go it’s well below what I was promised, and the summaries I read were misleading. It’s not a pseudo-RCT, and even if you take the data at face value (which you shouldn’t) it doesn’t show a vegan diet is superior to all others (as measured by lifespan). Vegan is at best tied with pescetarianism, and in certain niche cases (e.g. being a woman) it’s the worst choice besides unconstrained meat consumption. 

I’m going to try not to be too sarcastic about this, the study really is very good data by nutrition science standards, but I have a sore spot for medical papers that say “people” when they mean “men”, so probably something will leak out. Also, please consider what the state of nutrition science must be to make me call something that made this mistake “very good”.

Background

The 7th Day Adventists are a fairly large Christian sect. For decades scientists have been recruiting huge cohorts to study their diet, and publishing a lot of papers.

The Adventists are a promising group to use to study nutrition for lots of reasons, but primarily because the Church discourages meat, smoking, and drinking. So you lose the worst confounders for health, and get a population of lifelong, culturally competent vegetarians, which is a pretty good deal. Total abstinence from meat isn’t technically required – you’re allowed to eat kosher meat – but it’s heavily discouraged. 7DA colleges only serve vegetarian meals, and church meals will typically be vegetarian.

Some popular descriptions say that rules vary between individual churches, which could give you an almost RCT effect. AFAICT this isn’t true, and the paper I read never claimed it was. Both the internet and my ex-Adventist friend say that individual churches within the US (where the study took place)  vary a little in recommendations, but most of the variety in diet is based on individual choice, not local church rules. 

I was really hoping this project could shed light on what happened to people who had medical difficulties with plant-exclusive diets in plant-exclusive food cultures, but they didn’t try, and from the abundance of meat eaters within 7DA I’d guess the answer is that they eat animal products. Nor is the study very informative about naively switching to a plant-exclusive diet, since most members grow up in the culture and will have been taught a reasonable plant-based diet without necessarily needing to consciously think about it.

The Adventist Health Studies program has produced a lot of papers over the years. I focused on Vegetarian Dietary Patterns and Mortality in Adventist Health Study 2, because someone commented on my last post and pointed to that paper as addressing veganism in particular (other papers look like they only consider vegetarianism, although they might be using it as an umbrella term). I also read the AHS-2 cohort profile, but not any of the other papers, due to time constraints. It’s possible I will raise questions those papers would have answered, but there are at least 15 papers and, as I’ll talk about later, I wasn’t feeling hopeful. 

So let’s talk about that one paper. It breaks people down into 2 main categories, one of which has 4 subtypes. The big category is “nonvegetarian”, which they defined as eating meat more than once per week (48% of the sample). If you eat meat less than or equal to once a week you qualify for some category of vegetarian. More specifically:

  • Vegan (8%): any animal product <1/month.
  • Lacto-ovo-vegetarian (29%): unlimited eggs and milk; meat <1/month.
  • Pesco-vegetarian (10%): unlimited fish, eggs, and milk; other meat <1/month.
  • Semi-vegetarian (6%): unlimited eggs and milk; meat and fish combined >1/month but no more than 1/week.

I’m already a little 🤨 at lumping all four of those into one category, much less calling it vegetarian, but it will get worse. 

I wish we knew more about the specific diets of the nonvegetarians (although not enough to trawl through 20 papers hoping to find that data). Do they eat pork or shellfish, which violates Church requirements and would thus be a great marker for low conscientiousness? Are they eating Standard Shitty American Diet, or similar to the lacto-ovo-vegetarians but with meat 5x/month? 

Statistics

In this sample, eating lots of meat is clearly correlated with multiple activities known to impair health, like smoking and drinking. The paper attempts to back out those effects, but that’s impossible to do fully. You can back out 100 things and still miss the effect of traits that make doing all those things more or less likely. That’s how you get results like “theater attendance reduces the risk of death even after controlling for a laundry list of confounders”.

The mortality results in this paper are controlled for: age, smoking, exercise, income, education level, marital status, alcohol consumption, and sleep.  In women they also adjusted for menopausal status and hormone therapy (vegans were ½ to ⅓ less likely to be on hormone therapy, compared to other groups).

What kinds of things does this leave out? Conscientiousness, religiosity (which has social implications), and overall concern for health (anything with enough of a cultural “health halo” will eventually show a correlation with long life, because it will be done more by people who care the most about health, and they’re likely also doing other things that help). It will also be confounded if poor health is the thing that causes people to consume more animal products. 

My statistician described their methods of backing out the confounding effects as “This isn’t what I would have done but it doesn’t seem unreasonable”, which easily puts this in the top 10% of medical papers I’ve asked him to evaluate. He didn’t scream in pain even a little bit. I can’t check their actual math without the raw data, but for purposes of this post I feel comfortable assuming they correctly adjusted for everything they listed.

Claims of vegetarianism’s benefits are greatly exaggerated

Note: most of these results don’t reach the level of statistical significance, but let’s ignore that for now. I’m also going to ignore concerns that they failed to fully back out health effects from non-dietary causes, because even if they did, the data doesn’t support their own conclusions, much less the idea that veganism is superior. 

The abstract says “Significant associations with vegetarian diets were detected for cardiovascular mortality, noncardiovascular noncancer mortality, renal mortality, and endocrine mortality. Associations in men were larger and more often significant than were those in women.”

Sure, if you count “unlimited fish” as vegetarian. 

But it’s much worse than that. If you look at men, you see veganism’s hazard ratio is tied with pescetarianism, followed by other forms of vegetarianism, and then nonvegetarian. Seems unfair to quote this as total vindication for veganism, although it is definitely a blow against unrestricted meat eating.

But then we get to that niche group, women. This data is a little noisy because women only made up ⅔ of the sample, but you can nonetheless get a faint hint that veganism is barely distinguishable from unrestricted omnivorism, and the diet correlated with the lowest death rate is pescetarianism, with lacto-ovo-vegetarianism and semi-vegetarianism somewhere in the middle.

And again, that’s treating the confidence intervals as minimal, when in reality they heavily overlap with each other and with the nonvegetarian death rate. 

The extra weird thing here is that in men veganism was most helpful against cardiac issues, whereas in women it appears to be actively harmful to cardiac health. Any benefit veganism has in women comes from the “other death” category, whereas in men the “other death” category is where it loses ground against pescetarianism. 

The paper describes this as “Effects were generally stronger and more significant in men than women”, which is a weird way to say “women and men had very different results”. 

Why the gender gap?

Could be any number of things. Maybe nonvegetarian women had healthier diets than men in the same category (they cite another paper that checked and said there were no “striking differences” between the sexes within a given category, but I’m not feeling very trusting right now).  Maybe nutritional intake has a bigger impact on women due to menstruation and pregnancy. Maybe women were more likely to use veganism as cover for an eating disorder. Maybe they did the math wrong.  

Fish seem pretty good though

If you’re going to conclude anything from these papers, it’s that fish are great. At least as good as veganism in men, and better in women. I’m more inclined to trust that result than I otherwise would be, because pescetarianism has less of a health halo around it, as witnessed by pesco-vegetarians having roughly the same prevalence of bad habits like smoking and drinking as lacto-ovo-vegetarians and semi-vegetarians. My ex-Adventist friend confirms that veganism is viewed more favorably than pescetarianism or semi-vegetarianism in the community, although not more favorably than vegetarianism, which surprised me. 

So the benefits of pescetarianism are less likely to be downstream of being the choice of people who care a lot about health, or who are highly conscientious. I briefly got motivated to eat more fish until I remembered I count as semi-vegetarian by their standards (and I’m female), so the gains are small even if you take the results at face value.

Nutrition is still really complicated and the study still has a bunch of flaws; I wouldn’t update too much on this even if it didn’t agree with my existing beliefs. But this clearly undercuts veganism as the healthiest choice for women, and doesn’t really support it for men either. 

Change my mind: Veganism entails trade-offs, and health is one of the axes

Introduction

To me, it is obvious that veganism introduces challenges to most people. Solving the challenges is possible for most but not all people, and often requires trade-offs that may or may not be worth it. I’ve seen effective altruist vegan advocates deny outright that trade-offs exist, or, more often, imply as much while making technically true statements. This got to the point that a generation of EAs went vegan without health research, some of whom are already paying health costs for it, and I tentatively believe it’s harming animals as well. 

Discussions about the challenges of veganism and ensuing trade-offs tend to go poorly, but I think it’s too important to ignore. I’ve created this post so I can lay out my views as legibly as possible, and invite people to present evidence I’m wrong. 

One reason discussions about this tend to go so poorly is that the topic is so deeply emotionally and morally charged. Actually, it’s worse than that: it’s deeply emotionally and morally charged for one side in a conversation, and often a vague irritant to the other. Having your deepest moral convictions treated as an annoyance to others is an awful feeling, maybe worse than them having opposing but strong feelings. So I want to be clear that I respect both the belief that animals can suffer and the work advocates put into reducing that suffering. I don’t prioritize it as highly as you do, but I am glad you are doing (parts of) it.

But it’s entirely possible for it to be simultaneously true that animal suffering is morally relevant, and veganism has trade-offs for most people. You can argue that the trade-offs don’t matter, that no cost would justify the consumption of animals, and I have a section discussing that, but even that wouldn’t mean the trade-offs don’t exist. 

This post covers a lot of ground, and is targeted at a fairly small audience. If you already agree with me I expect you can skip most of this, maybe check out the comments if you want the counter-evidence. I have a section addressing potential counter-arguments, and probably most people don’t need to read my response to arguments they didn’t plan on making. Because I expect modular reading, some pieces of information show up in more than one section. Anyone reading the piece end to end has my apologies for that. 

However, I expect the best arguments to come from people who have read the entire thing, and at a minimum the “my cruxes” and “evidence I’m looking for” sections. I also ask you to check the preemptive response section for your argument, and engage with my response if it relates to your point. I realize that’s a long read, but I’ve spent hundreds of hours on this, including providing nutritional services to veg*ns directly, so I feel like this is a reasonable request. 

My cruxes

Below are all of the cruxes I could identify for my conclusion that veganism has trade-offs, and they include health:

  • People are extremely variable. This includes variation in digestion, tastes, time, money, cooking ability… 
  • Most people’s optimal diet includes small amounts of animal products, but people eat sub-optimally for lots of reasons and that’s their right. Averting animal suffering is a better reason to eat suboptimally than most. 
  • Average vegans and omnivores vary in multiple ways, so it’s complicated to compare diets. I think the relevant comparison healthwise is “the same person, eating vegan or omnivore” or “veganism vs. omnivorism, holding all trade-offs but one constant”.
  • For most omnivores who grew up in an omnivorous culture, going vegan requires a sacrifice in at least one of: cost, taste (including variety), health, time/effort.
    • This is a mix of capital investments and ongoing costs – you may need to learn a bunch of new recipes, but if they work for you that’s a one time cost.
    • Arguments often get bogged down around the fact that people rarely need to sacrifice on all fronts at once. There are cheap ways for (most) people to eat vegan, but they either take effort and knowledge, or they’re bad for you (Oreos are vegan). There are vegan ways for most people to be close to nutritionally optimal, but they require a lot of planning or dietary monotony.
    • Some of the financial advantage for omnivores is due to meat subsidies that make meat artificially cheap, but not all of it, and I don’t know how that compares to grain subsidies.
  • There are vegan sources of every nutrient (including B12, if you include fortified products). There may even be dense sources of every or almost every nutrient. But there isn’t a satisfying plant product that is as rich in as many things as meat, dairy, and especially eggs. Every “what about X?” has an answer, but if you add up all the foods you would need to meet every need, for people who aren’t gifted at digestion, it’s far too many calories and still fairly restrictive.
    • “Satisfying” matters. There are vegan protein shakes and cereals containing ~everything, but in practice most people don’t seem to find these satisfying.
    • There isn’t a rich vegan source of every vitamin for every person. If there are three vegan sources and you’re allergic to all of them, you need animal products.
    • The gap between veganism and omnivorism is shrinking over time, as fortified fake meats and fake milks get better and cheaper. But these aren’t a cure-all.
      • Some people don’t process the fortified micronutrients as well as they process meat (and vice-versa, but that’s irrelevant on an individual level).
      • Avoiding processed foods or just not liking them is pretty common, especially among the kind of people who become vegan. 
      • Brands vary pretty widely, so you still need to know enough to pick the right fortified foods.
      • Fake meats are quite expensive, although less so every year.
        • I want to give the people behind fake meat a lot of credit. Making meat easier to give up was a good strategy for animal protection advocates.
  • Veganism isn’t weird for having these trade-offs. Every diet has trade-offs. I can name many diets I rank as having worse average trade-offs than veganism or a lower ceiling on health.
    • Carnivore diet, any monotrophic diet, ultralow calorie diets under most circumstances, “breatharian”, liquid diets under most circumstances, most things with “cleanse” or “detox” in the name, raw foodism…
    • And even then, several of these have someone for whom they’re the best option.
  • The trade-offs vary widely by person. Some people have the digestive ability and palate of a goat and will be basically fine no matter what. Some people are already eating monotonous, highly planned diets and removing animal products doesn’t make it any harder. Some people are already struggling to feed themselves on an omnivore diet, and have nothing to replace meat if you take it away.
    • Vegan athletes are often held up as proof veganism can be healthy, with the implication that feeding athletes is hard mode so if it works for them it must work for everyone. But being a serious athlete requires a lot of the same trade-offs as veganism: you’re already planning diets meticulously, optimized for health over taste, with little variety, and taking a lot of supplements. If there are plant foods that work for you, swapping them in may be barely a sacrifice. Also, athletes have a larger calorie budget to work with.
  • Lots of people switch to vegan diets and see immediate health improvements.
    • Some improve because veganism is genuinely their optimal diet.
    • Others improve because even though their hypothetical optimal diet includes meat, the omnivore diet they were actually eating was bad for them and removing meat entirely is easier than eating good forms in moderation.
    • Others improve because they are putting more effort into their vegan diet, and they would be doing even better if they put that much effort into their omnivore diet.
    • Others see short-term improvement because animal products have both good points and bad points, and for some people the bad parts decay faster than the good parts. If your cholesterol goes down in a month and your B12 takes years to become a problem, it is simultaneously true that going vegan produced an immediate improvement, and that it will take a health toll.
  • Vegetarianism is nutritionally much closer to omnivorism than it is to veganism.
  • There exist large clusters of vegans who do not talk about nutrition and are operating naively. As in, no research into nutrition, no supplements, no testing, no conscious thought applied to their diet.
    • One of these clusters is young effective altruists whose top priority is not animal welfare (but nonetheless feel compelled to go vegan). 

Those are my premises. Below are a few conclusions I draw from them.  I originally didn’t plan on including a conclusion, but an early reader suggested my conclusions were milder than they expected and it might be good to share them. So: 

  • People recruiting for veganism should take care to onboard people in a responsible way. This could be as simple as referring people to veganhealth.org frequently enough that they actually use it.
    • Recruiting means both organized efforts and informal encouragement of friends. 
  • Diet issues should be treated as a live hypothesis for vegans with health problems, especially vague, diagnosis-resistant ones.
    • This one isn’t vegan specific, although I do think it’s more relevant to them.
  • False claims about vegan nutrition should be proactively rejected by the vegan community, in both formal and informal settings, including implicit claims. This includes:
    • Explicit or implicit claims veganism is healthy for everyone, and that there is no one for whom it is not healthy.
    • Explicit or implicit claims veganism doesn’t involve trade-offs for many people. 
    • Motte and baileys of “there is nothing magic about animal products, we can use technology to perfectly replace them” and “animal products have already been perfectly replaced and rendered unnecessary”.

My evidence

One is first principles. Animal products are incredibly nutrient dense. You can get a bit of all known nutrients from plants and fortified products, and you can find a vegan food that’s at least pretty good for every nutrient, but getting enough of all of them is a serious logic puzzle unless you have good genes. Short of medical issues it can be done, but for most people it will take some combination of more money, more planning, more work, and less joy from food. 

“Short of medical issues” is burying the lede. Food allergies and digestion issues mean lots of people struggle to feed themselves even with animal products; giving up a valuable chunk of their remaining options comes at a huge cost.

[Of course some people have issues such that animal products are bad for them and giving them up is an improvement. Those raise veganism’s average health score but don’t cancel out the people who would suffer]

More empirically, there is this study from Faunalytics, which found that 29% of ex-vegans and ex-vegetarians in their sample had nutritional issues, 80% of whom got better within three months of quitting. Their recorded attrition rate was 84%, so if you assume no current veg*ns have issues, that implies 24% of all current and former veg*ns develop health issues from the diet (19% if you only include issues meat products cured quickly). I’m really sad to only be giving you this one study, but most of the literature is terrible (see below).

The Faunalytics study has a fair number of limitations, which I went into more detail on here. My guess is that their number is a moderate underestimate of the real rate, and a severe underestimate of the value for naive vegans in particular, but 24% is high enough that I don’t think the difference matters so I’ll use that for the rest of the post.
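The arithmetic behind those figures can be sketched directly. This is my own back-of-the-envelope reproduction of the numbers quoted above, not anything from Faunalytics itself:

```python
# Back-of-the-envelope reproduction of the figures above (Faunalytics:
# 29% of ex-veg*ns reported nutritional issues, 80% of those recovered
# within three months of quitting; overall attrition was 84%).
attrition = 0.84               # fraction of ever-veg*ns who quit
issues_among_quitters = 0.29   # fraction of quitters with nutritional issues
recovered_quickly = 0.80       # fraction whose issues resolved soon after quitting

# Optimistic simplification from the post: assume no current veg*ns have issues.
issue_rate_all = attrition * issues_among_quitters          # ~0.24
quick_cure_rate_all = issue_rate_all * recovered_quickly    # ~0.19

print(f"{issue_rate_all:.0%}")       # prints 24%
print(f"{quick_cure_rate_all:.0%}")  # prints 19%
```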

Evidence I’m looking for

The ideal study is a longitudinal RCT where diet is randomly assigned, cost (across all dimensions, not just money) is held constant, and participants are studied over multiple years to track cumulative effects. I assume that doesn’t exist, but the closer we can get the better. 

I’ve spent several hours looking for good studies on vegan nutrition, of which the only one that was even passable was the Faunalytics study. My search was by no means complete, but enough to spot some persistent flaws among multiple studies. I’ve also spent a fair amount of time checking citations made in support of vegan claims, only to find the study is either atrocious or doesn’t support the claim made (examples in the “This is a strawman…” section). There is also some history of goalpost moving, where an advocate cites a study, I criticize it, and they say it doesn’t matter and cite a new study. This is exhausting. 

I ask that you only cite evidence you, personally, find compelling and are willing to stand by, and note its flaws in your initial citation. That doesn’t mean the study has to be perfect (that’s impossible), but you should know the flaws and be ready to explain why you still believe the study. If your belief rests on many studies instead of just one (a perfectly reasonable, nay admirable, state), please cite all of them. I am going to be pretty hard on people who link to seriously flawed studies without disclosing the flaws, or who retract citations without updating their own beliefs.

A non-exhaustive list of common flaws:

  • Studies rarely control for supplements. I’m tentatively on board with supplements being enough to get people back to at least the health level they had as an omnivore, but you can’t know their effect without recording usage and examining the impact.
  • I’ve yet to see a study that controlled for effort and money put into diet. If vegans are equally healthy but are spending twice as much time and money on food, that’s important to know.
  • Diet is self-selected rather than assigned. People who try veganism and stick with it are disproportionately likely to find it easy.
    • I don’t expect to find a study randomly assigning a long term vegan diet, but I will apply a discount factor to account for that. 
  • Studies are snapshots rather than long-term, and so lose all of the information from people who tried veganism, found it too hard, and quit.
    • Finding a way around this is what earned Faunalytics my eternal gratitude.
  • Studies don’t mention including people with additional dietary challenges, which I think are a very big deal.
  • Vegan status is based on self-identification. Other studies show that self-identified vegans often eat enough meat to be nutritionally relevant.
  • Studies often combine veganism and vegetarianism, or only include vegetarians, but are cited as if they are about veganism alone. I think vegetarianism is nutritionally much closer to omnivorism than veganism, so this isn’t helpful.
  • All the usual problems: tiny samples, motivated researchers, bad statistics. 
  • Some studies monitor dietary intake levels rather than internal levels of nutrients (as measured by tests on blood or other fluids). There are two problems with this:
    • Since RDA levels run quite high relative to average need, this is unfairly hard on vegan diets. 
    • Nutrition labels aren’t always corrected for average bioavailability, and can’t be corrected for individual variation in digestion. Plant nutrients are on average less bioavailable (although I think there are broad exceptions, and certainly individuals vary on this), so that’s perhaps too easy on plant-based diets.
  • Most studies are done by motivated parties, and it’s too easy to manipulate those. I wouldn’t have trusted the Faunalytics study if it had come from a pro-meat source.

A non-exhaustive list of evidence I hope for:

  • Quantifying the costs (across all dimensions) of dietary changes, even if the study doesn’t control for them
  • AFAICT there is no large vegan culture; the closest candidates are lacto-vegetarian cultures with individuals choosing to aim higher, and cultures that often can’t afford meat. Evidence of cultures with true, lifelong veganism (excluding mother’s milk) would be very interesting.
  • Studies that in some way track people who quit veganism, such that they could detect health issues driving people to quit. 
  • What happens to health when a very poor community earns enough to have access to occasional meat?
  • What happens when people from a lacto-vegetarian or meat-sparse culture move to a meat-loving one?
  • Studies on the impact of vegan nutritional education: how much, if any, does it improve outcomes?
  • What happens to people who are forced to give up animal products suddenly, for non-ethical reasons? I’m thinking of things like Alpha-gal Syndrome creating an immune response to red meat, adult onset lactose intolerance, or moving to a country that deemphasizes meat.
  • Ditto for the reverse.
    • I’m especially interested in people with dietary difficulties.
  • Studies comparing veganism and vegetarianism, especially in the same person.

Preemptive responses to counter-arguments

There are a few counter-arguments I’ve already gotten or expect to get shortly, so let me address them ahead of time. 

“You’re singling out veganism”

Multiple people have suggested it’s wrong for me to focus on veganism. If I build enough trust and rapport with them they will often admit that veganism obviously involves some trade-offs, if only because any dietary change has trade-offs, but they think I’m unfairly singling veganism out.

First off, I’ve been writing about nutrition under this name since 2014. Earlier, if you count the pseudonymous livejournal. I talk about non-vegan nutrition all the time. I wrote a short unrelated nutrition post while this one was in editing. I understand the mistake if you’re unfamiliar with my work, but I assure you this is not a hobby I picked up to annoy you.

It’s true that I am paying more attention to veganism than I am to, say, the trad carnivore idiots, even though I think that diet is worse. But veganism is where the people are, both near me and in the US as a whole. Dietary change is so synonymous with animal protection within Effective Altruism that the EAForum tag is a wholly-owned subsidiary of the animal suffering tag. At a young-EA-organizer conference I mentored at last year, something like half of attendees were vegan, and only a handful had no animal-protecting diet considerations. If keto ever gets anywhere near these kinds of numbers, I assure you I will say something.

“The costs of misinformation are small relative to the benefits of animals”

One possible argument for downplaying or dismissing the costs of veganism is that factory farming is so bad anything is justified in stopping it. I’m open to that argument in the abstract, but empirically I think this isn’t working and animals would be better off if people were given proper information. 

First, it’s not clear to me the costs of acknowledging vegan nutrition issues are that high. I’ve gotten a few dozen comments/emails/etc on my vegan nutrition project of the form “This inspired me to get tested, here are my supplements, here are my results”. No one has told me they’ve restarted consuming meat or even milk. It is possible people are less likely to volunteer diet changes, although I do note I’m not vegan.

But even if education causes many people to bounce off, the alternative may be worse. 

That Faunalytics study says 24% of people leave veg*nism due to health reasons. If you use really naive math, that suggests that ignoring nutrition issues would need to increase recruitment by 33%, just to break even.  But people who quit veganism due to health issues tend to do so with a vitriol not seen in people leaving for other reasons. I don’t have numbers for this, but r/exvegans is mostly people who left for health reasons (with a smattering of people compelled by parents), as are the ex-vegans angry enough to start blogs. Even if they don’t make a lifestyle out of it, people who feel harmed are less likely to retry veganism, and more likely to discourage their friends.
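The naive break-even calculation can be written out explicitly. This is my own sketch of the arithmetic, using the 24% figure quoted above:

```python
# The "really naive math": if a fixed fraction of recruits eventually quits
# over health issues, recruitment must grow by 1/(1 - quit_rate) - 1 to keep
# the same number of long-run vegans.
health_quit_rate = 0.24  # fraction of veg*ns leaving for health reasons
required_boost = 1 / (1 - health_quit_rate) - 1
print(f"{required_boost:.0%}")  # prints 32%, roughly the post's 33% figure
```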

I made a toy model comparing the trade off of education (which may lead people to bounce off) vs. lack of education (which leads people to quit and discourage others). The result is very sensitive to assumptions, especially “how many counterfactual vegans do angry ex-vegans prevent?”. If you put the attrition rate as low as I do, education is clearly the best decision from an animal suffering perspective. If you put it higher it becomes very sensitive to other assumptions. It is fairly hard to make a slam-dunk case against nutritional awareness, but then, (points at years of nutrition blogging) I would say that.
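A minimal version of that kind of toy model might look like the following. All parameter values here are made up for illustration; this is not a reproduction of the post's actual model or assumptions:

```python
# A toy model of education vs. no education, with entirely made-up parameters.
def expected_vegans(recruits, health_quit_rate, deterrence_per_angry_quitter):
    """Net long-run vegans: recruits minus health-driven quitters, minus the
    counterfactual vegans those quitters deter (the 'angry ex-vegan' effect)."""
    quitters = recruits * health_quit_rate
    retained = recruits - quitters
    deterred = quitters * deterrence_per_angry_quitter
    return retained - deterred

# With education: suppose 10% fewer people sign up, but far fewer quit over health.
with_education = expected_vegans(recruits=90, health_quit_rate=0.05,
                                 deterrence_per_angry_quitter=0.5)
# Without education: full recruitment, but ~24% quit over health issues.
without_education = expected_vegans(recruits=100, health_quit_rate=0.24,
                                    deterrence_per_angry_quitter=0.5)

print(with_education, without_education)  # 83.25 vs 64.0 under these assumptions
```

As the post says, the result is very sensitive to the parameters: raise the assumed recruitment loss or lower the deterrence factor and the ordering can flip.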

“The human health gains are small relative to the harms to animals” 

I think this is a fair argument to make, and the answer comes down to complicated math. To their credit, vegan EAs have done an enormous amount of math on the exact numeric suffering of farmed animals. But honest accounting requires looking at the costs as well.

“The health costs don’t matter, no benefit justifies the horror of farming animals”

This is a fair argument for veganism. But it’s not grounds to declare the health costs to be zero.

It’s also not grounds to ignore nutrition within a plant-based diet. Even if veganism is healthy for everyone and no harder a switch than other diets, it is very normal for dietary changes to entail trade-offs and have some upfront costs.  The push to deny trade-offs and punish those who investigate them (see below) is hurting your own people. 

“This is a strawman, vegans already address nutrition” 

I fully acknowledge that there are a lot of resources on vegan nutrition, and that a lot of the outreach literature at least name-checks dietary planning. But I talk to a lot of people (primarily young EAs focused on non-animal projects) with stories like this one, of people going vegan as a group without remembering a single mention of B12 or iron. I would consider that a serious problem even if I couldn’t point to anything the movement was doing to cause it.

But I absolutely can point to things within the movement that create the problem. There are some outright lies, and a lot more well-crafted sentences that are technically correct but in aggregate leave people with deeply misleading impressions. 

While reading, please keep in mind that these are formal statements by respected vegans and animal protection organizations (to the best of my ability to determine). All movements have idiots saying horrible things on reddit, and it’s not fair to judge the whole movement by them. These, though, were not off-the-cuff statements or quick tweets, but things a movement leader thought about and wrote down. 

  • There are numerous sources talking about the health benefits of veganism. Very few of them explicitly say “and this will definitely happen with no additional work from you, without any costs or trade-offs”, but some do, and many imply it.
    • Magnus Vinding, who has published 9 books and co-founded the Center for Reducing Suffering, says: ”Beyond the environmental effects, there are also significant health risks associated with the direct consumption of animal products, including red meat, chicken meat, fish meat, eggs, and dairy. Conversely, significant health benefits are associated with alternative sources of protein, such as beans, nuts, and seeds. This is relevant both collectively, for the sake of not supporting industries that actively promote poor human nutrition in general, as well as individually, to maximize one’s own health so one can be more effectively altruistic.”
  • This Facebook post from Jacy Reese Anthis, saying vegan dogs and cats can be perfectly healthy. Jacy was a leader among animal EAs until he left for unrelated reasons in 2019. He cites two sources, one of which supports only a subset of his claims, and the other of which actively contradicts them.
    • His first source does say veganism can work, in dogs, but says nothing about cats.
    • His second source cites one person who says her cat is fine on a vegan diet but she doesn’t tell vets about it. The veterinarians quoted say dogs can be vegetarian and even vegan with some work. The statement on cats is ambiguous: it might be condemning only vegan diets, or both vegan and vegetarian. It rules out even vegetarian diets for young or breeding animals.

      The piece ends with “When people tell me they want to feed [their pet] a vegan diet, I say, ‘Get a goat, get a rabbit.’”
    • Normally I would consider a 7-year-old Facebook post off-limits, but Jacy has a blue check and spent years doing very aggressive vegan advocacy on other people’s walls, most of which he has since deleted, so I think this is fair game. 
  • There is a related problem of motte-and-baileying “one day we will be able to have no-trade-off vegan diets, thanks to emerging technologies” and “it’s currently possible with no trade offs right this second”, e.g.: “Repudiating what “obligate carnivore” means – Kindly, but stridently, we have to correct folks that obligate carnivore stems from observation, not a diet requirement. This outdated thinking ignores the fundamental understanding of biochemistry, nutrition, and metabolism, which has only developed since the initial carnivore classification.”
  • In Doing Good Better, EA leader Will MacAskill advocates for a vegan diet to alleviate animal suffering, without mentioning any trade-offs. In isolation I don’t think that would necessarily be the wrong choice; the book is clearly about moral philosophy and not a how-to guide. But it is pushing individuals to change their personal diet (as opposed to donating to vegan recruitment programs), so I think it should at least mention trade-offs.
  • Animal-ethics.org name-checks “a balanced diet” but the vibe is strongly “veganism is extra health with no effort”:
    • “According to the Academy of Nutrition and Dietetics, a well-planned vegan diet is nutritionally adequate and appropriate for individuals during all stages of the life cycle, including pregnancy, lactation, infancy, childhood, and adolescence, and for athletes.1 Everyone should have a balanced diet to be healthy, not only vegans. In fact non-vegans may well have unbalanced diets which are not good for their health. In order to be healthy we don’t need to consume certain products, but certain nutrients. Vegans can ingest those nutrients without having to eat animal products.”
    • “Being vegan is easier than you may think. Finding vegan food and other alternative products and services that do not involve animal exploitation is increasingly easier. It is true that some people may experience a lack of support from their family or friends or may find it extra challenging to stop eating certain animal products. However, other people can help you with that, especially today, given that internet and social networks have made it possible to get information and help from many other people. It is important to identify the factors that may be hindering your transition to veganism and look for assistance and encouragement from other people.”
    • “Do I need to consult a doctor or nutritionist before becoming vegan?
      While this can be useful, as in the case of a planned non-vegan diet, it is not necessary. A vegan diet is suitable for people of all ages and conditions. A vegan nutritionist may help plan custom menus to meet specific requirements – for instance, if you are an athlete or if you want to gain or lose a lot of weight as a vegan. It is always advised to consult a nutritionist regularly for a check-up. However, it is important to note that some nutritionists are biased and don’t know a lot about vegan nutrition. Note also that medical doctors are often not experts on nutrition.”
  • EA-Foundation says veganism requires “appropriate planning”, but that this is easy.
  • That Faunalytics vegan study, which I mostly loved, contains the following: “Former vegetarians/vegans were asked if they began to experience any of the following when they were eating a vegetarian/vegan diet: depression/anxiety, digestive problems, food allergies, low cholesterol, an eating disorder, thyroid problems, protein deficiency, B12 deficiency, calcium deficiency, iron deficiency, iodine deficiency, vitamin A deficiency, vitamin D deficiency, zinc deficiency. The findings show that: – 71% of former vegetarians/vegans experienced none of the above. It is quite noteworthy that such a small proportion of individuals experienced ill health.”
    • 29% isn’t small. You can argue that’s an overestimate, but they’re accepting the 29% number, and are saying it doesn’t matter. 

Why is this so hard to talk about?

This is probably the least important section. I’m including it mostly in the hope it lowers friction in the object-level conversation. 

The stakes are so high

Hardcore vegan advocates believe we are surrounded by mass torture and slaughter facilities killing thousands of beings every day. That’s the kind of crisis that makes it hard to engage in really nuanced math, which people may use to justify ignoring you. 

Vegans are sick of concern trolls

Vegans frequently have to deal with bad-faith interrogation of their choices (“wHaT aBouT pRoTeIn?!?!”). I imagine this is infuriating, and I’ve worked really hard to set myself apart by doing things like investing hundreds of hours of my time (much of it unpaid) in getting vegans the nutrition they needed to stay healthy and vegan.

Typical minding/failure of imagination

People who find veganism easier are disproportionately likely to become and stay vegan. That’s what the word “easy” means. Then some of them assume their experiences are more representative than they are, and that people who report more difficulty are lying. 

E.g. this comment on an earlier post (not even by a vegan; he was a vegan’s partner) said “there is nothing special one needs to do to stay healthy [while eating vegan]” because “most processed products like oat milk, soy milk, impossible meat, beyond meat, daiya cheese are enriched with whatever supplements are needed”. Which I would describe as “all you need to do to stay healthy while vegan is eat fortified products”. That’s indeed pretty easy, and some people will do it without thinking. But it’s not nothing, especially when “no processed foods” is such a common restriction. Sure enough, Faunalytics found that veg*ns who quit were less likely (relative to current veg*ns) to eat fortified foods. 

That same person later left another comment, conceding this point and also that there were people the fortified foods didn’t work for. Which is great, but it belonged in the first comment.

Or this commenter, who couldn’t imagine a naive vegan until an ex-vegan described the total ignorance they and their entire college EA group operated under. 

Lies we tell omnivores

Ozy Brennan has a post, “Lies to cis people”. They posit that trans advocates, faced with a hostile public, give a simplified story of gender (because most people won’t hear the nuance anyway) that prioritizes being treated well over conveying the most possible truth. The intention is that an actual trans person or deeply invested ally will go deeper into the culture and get a more nuanced view. This can lead to some conflict when a person tries to explore gender with only the official literature as their guide.

Similarly, “veganism requires no sacrifice on any front, for anyone” is a lie vegans tell current omnivores. I suspect the expectation, perhaps subconscious, is that once they convert to veganism they’ll hang around other vegans and pick up some recipes, know what tests to get, and hear recommendations for vegan vitamins without doing anything deliberately. The longer sentence would be “for most people veganism requires no sacrifice beyond occasional tests and vitamins, which is really not much work at all”. 

But this screws over new vegans who don’t end up in those subcultures. It’s especially bad if they’re surrounded by enough other vegans that it feels like they should get the knowledge, but the transmission was somehow cut off. I think this has happened with x-risk focused EA vegans, and two friends described a similar phenomenon in the straight-edge punk scene.

Failure to hear distinctions, on both sides

I imagine many people do overestimate the sacrifice involved in becoming vegan. The tradeoff is often less than they think, especially once they get over the initial hump. If omnivores are literally unable to hear “well yes, but for most people only a bit”, it’s very tempting to tell them “not at all”. But this can lead even the average person to do less work than they should, and leaves vegans unable to recognize people for whom plant based diets are genuinely very difficult, if not impossible.

Conclusion

I think veganism comes with trade-offs, health is one of the axes, and that the health issues are often but not always solvable. This is orthogonal to the moral issue of animal suffering. If I’m right, animal EAs need to change their messaging around vegan diets, and start self-policing misinformation. If I’m wrong, I need to write some retractions and/or shut the hell up.

Discussions like this are really hard, and have gone poorly in the past. But I’m still hopeful, because animal EAs have exemplified some of the best parts of effective altruism, like taking weird ideas seriously, moral math, and checking to see if a program actually worked. I want that same epistemic rigor applied to nutrition, and I’m hopeful about what will happen if it is. 

Thanks to Patrick La Victoire and Raymond Arnold for long discussions and beta-reading, and Sam Cotrell for research assistance.

Epistemic Legibility

Tl;dr: being easy to argue with is a virtue, separate from being correct.

Introduction

Regular readers of my blog know of my epistemic spot check series, where I take claims (evidential or logical) from a work of nonfiction and check to see if they’re well supported. It’s not a total check of correctness: the goal is to rule out things that are obviously wrong/badly formed before investing much time in a work, and to build up my familiarity with its subject. 

Before I did epistemic spot checks, I defined an easy-to-read book as, roughly, imparting an understanding of its claims with as little work from me as possible. After epistemic spot checks, I started defining easy to read as “easy to epistemic spot check”. It should be as easy as possible (but no easier) to identify what claims are load-bearing to a work’s conclusions, and figure out how to check them. This is separate from correctness: things can be extremely legibly wrong. The difference is that when something is legibly wrong someone can tell you why, often quite simply. Illegible things just sit there at an unknown level of correctness, giving the audience no way to engage.

There will be more detailed examples later, but real quick: “The English GDP in 1700 was $890324890. I base this on $TECHNIQUE interpretation of tax records, as recorded in $REFERENCE” is very legible (although probably wrong, since I generated the number by banging on my keyboard). “Historically, England was rich” is not. “Historically, England was richer than France” is somewhere in-between. 

“It was easy to apply this blog post format I made up to this book” is not a good name, so I’ve taken to calling the collection of traits that make things easy to check “epistemic legibility”, in the James C. Scott sense of the word legible. Legible works are (comparatively) easy to understand, they require less external context, their explanations scale instead of needing to be tailored for each person. They’re easier to productively disagree with, easier to partially agree with instead of forcing a yes or no, and overall easier to integrate into your own models.

[Like everything in life, epistemic legibility is a spectrum, but I’ll talk about it mostly as a binary for readability’s sake]

When people talk about “legible” in the Scott sense they often mean it as a criticism, because pushing processes to be more legible cuts out illegible sources of value. One of the reasons I chose the term here is that I want to be very clear about the costs of legibility and the harms of demanding it in excess. But I also think epistemic legibility leads people to learn more correct things faster and is typically underprovided in discussion.

If I hear an epistemically legible argument, I have a lot of options. I can point out places I think the author missed data that impacts their conclusion, or made an illogical leap. I can notice when I know of evidence supporting their conclusions that they didn’t mention. I can see implications of their conclusions that they didn’t spell out. I can synthesize with other things I know, that the author didn’t include.

If I hear an illegible argument, I have very few options. Perhaps the best case scenario is that it unlocks something I already knew subconsciously but was unable to articulate, or needed permission to admit. This is a huge service! But if I disagree with the argument, or even just find it suspicious, my options are kind of crap. I can write a response of equally low legibility, which is unlikely to improve understanding for anyone. Or I could write up a legible case for why I disagree, but that is much more work than responding to a legible original, and often more work than went into the argument I’m responding to, because it’s not obvious what I’m arguing against. I need to argue against many more things to be considered comprehensive. If you believe Y because of X, I can debate X. If you believe Y because …:shrug:… I have to imagine every possible reason you could do so, counter all of them, and then still leave myself open to something I didn’t think of. Which is exhausting.

I could also ask questions, but the more legible an argument is, the easier it is to know what questions matter and the most productive way to ask them. 

I could walk away, and I am in fact much more likely to do that with an illegible argument. But that ends up creating a tax on legibility, because legible arguments get engaged with and criticized while illegible ones are simply abandoned, which is the opposite of the incentive I want to create.

Not everything should be infinitely legible. But I do think more legibility would be good on most margins, that choices of the level of legibility should be made more deliberately, and that we should treat highly legible and illegible works more differently than we currently do. I’d also like a common understanding of legibility so that we can talk about its pluses and minuses, in general or for a particular piece.

This is pretty abstract and the details matter a lot, so I’d like to give some better examples of what I’m gesturing at. In order to reinforce the point that legibility and correctness are orthogonal, this will be a four-quadrant model. 

True and Legible

Picking examples for this category was hard. No work is perfectly true and perfectly legible, in the sense of being absolutely impossible to draw an inaccurate conclusion from and having no possible improvements to legibility, because reality is very complicated and communication has space constraints. Every example I considered, I could see a reason someone might object to it. And the things that are great at legibility are often boring. But it needs an example so…

Acoup

Bret Devereaux over at Acoup consistently writes very interesting history essays that I found both easy to check and mostly true (although with some room for interpretation, and not everyone agrees). Additionally, a friend of mine who is into textiles tells me his textile posts were extremely accurate. So Devereaux does quite well on truth and legibility, despite bringing a fair amount of emotion and strong opinions to his work. 

As an example, here is a paragraph from a post arguing against descriptions of Sparta as a highly equal society.

But the final word on if we should consider the helots fully non-free is in their sanctity of person: they had none, at all, whatsoever. Every year, in autumn by ritual, the five Spartan magistrates known as the ephors (next week) declared war between Sparta and the helots – Sparta essentially declares war on part of itself – so that any spartiate might kill any helot without legal or religious repercussions (Plut. Lyc. 28.4; note also Hdt. 4.146.2). Isocrates – admittedly a decidedly anti-Spartan voice – notes that it was a religious, if not legal, infraction to kill slaves everywhere in Greece except Sparta (Isoc. 12.181). As a matter of Athenian law, killing a slave was still murder (the same is true in Roman law). One assumes these rules were often ignored by slave-holders of course – we know that many such laws in the American South were routinely flouted. Slavery is, after all, a brutal and inhuman institution by its very nature. The absence of any taboo – legal or religious – against the killing of helots marks the institution as uncommonly brutal not merely by Greek standards, but by world-historical standards.

Here we have some facts on the ground (Spartiates could kill their slaves, killing slaves was murder in most contemporaneous societies), sources for some but not all of them (those parentheticals are highly readable if you’re a classicist, and workable if you’re not), the inference he drew from them (Spartans treated their slaves unusually badly), and the conclusions he drew from that (Sparta was not only inequitable, it was unusually inequitable even for its time and place).

Notably, the entire post relies heavily on the belief that slavery is bad, which Devereaux does not bother to justify. That’s a good choice because it would be a complete waste of time for modern audiences – but it also makes this post completely unsuitable for arguing with anyone who disagrees. If for some reason you needed to debate the ethics of slavery, you would need work that makes a legible case for that claim in particular, not work that takes it as an axiom.

Exercise for Mood and Anxiety

A few years ago I ESCed Exercise for Mood and Anxiety, a book that aims to educate people on how exercise can help their mental health and then give them the tools to do so. It did really well at the former: the logic was compelling and the foundational evidence was well cited and mostly true (although exercise science always has wide error bars). But out of 14 people who agreed to read the book and attempt to exercise more, only three reported back to me and none of them reported an increase in exercise. So EfMaA is true and epistemically legible, but nonetheless not very useful. 

True but Epistemically Illegible

You Have About Five Words is a poetic essay from Ray Arnold. The final ~paragraph is as follows:

If you want to coordinate thousands of people…

You have about five words.

This has ramifications on how complicated a coordinated effort you can attempt.

What if you need all that nuance and to coordinate thousands of people? What would it look like if the world was filled with complicated problems that required lots of people to solve?

I guess it’d look like this one.

I think the steelman of its core claim, that humans are bad at remembering long nuanced writing and the more people you are communicating with, the more you need to simplify your writing, is obviously true. This is good, because Ray isn’t doing crap to convince me of it. He cites no evidence and gives no explanation of his logic. If I thought nuance increased with the number of readers I would have nothing to say other than “no you’re wrong” or write my own post from scratch, because he gives no hooks to refute. If someone tried to argue that you get ten words rather than five, I would think they were missing the point. If I thought he had the direction right but got the magnitude of the effect wrong enough that it mattered (and he was a stranger rather than a friend), I would not know where to start the discussion.

[Ray gets a few cooperation points back by explicitly labeling this as poetry, which normally I would be extremely happy about, but it weakened its usefulness as an example for this post so right this second I’m annoyed about it.]

False but Epistemically Legible

Mindset

I think Carol Dweck’s Mindset and associated work is very wrong, and I can produce large volumes on specific points of disagreement. This is a sign of a work that is very epistemically legible: I know what her cruxes are, so I can say where I disagree. For all the shit I’ve talked about Carol Dweck over the years, I appreciate that she made it so extraordinarily easy to do so, because she was so clear on where her beliefs came from. 

For example, here’s a quote from Mindset:

All children were told that they had performed well on this problem set: “Wow, you did very well on these problems. You got [number of problems] right. That’s a really high score!” No matter what their actual score, all children were told that they had solved at least 80% of the problems that they answered.

Some children were praised for their ability after the initial positive feedback: “You must be smart at these problems.” Some children were praised for their effort after the initial positive feedback: “You must have worked hard at these problems.” The remaining children were in the control condition and received no additional feedback.

And here’s Scott Alexander’s criticism:

This is a nothing intervention, the tiniest ghost of an intervention. The experiment had previously involved all sorts of complicated directions and tasks, I get the impression they were in the lab for at least a half hour, and the experimental intervention is changing three short words in the middle of a sentence.

And what happened? The children in the intelligence praise condition were much more likely to say at the end of the experiment that they thought intelligence was more important than effort (p < 0.001) than the children in the effort condition. When given the choice, 67% of the effort-condition children chose to set challenging learning-oriented goals, compared to only 8% (!) of the intelligence-condition. After a further trial in which the children were rigged to fail, children in the effort condition were much more likely to attribute their failure to not trying hard enough, and those in the intelligence condition to not being smart enough (p < 0.001). Children in the intelligence condition were much less likely to persevere on a difficult task than children in the effort condition (3.2 vs. 4.5 minutes, p < 0.001), enjoyed the activity less (p < 0.001) and did worse on future non-impossible problem sets (p…you get the picture). This was repeated in a bunch of subsequent studies by the same team among white students, black students, Hispanic students…you probably still get the picture.

Scott could make those criticisms because Dweck described her experiment in detail. If she’d said “we encouraged some kids and discouraged others”, there would be a lot more ambiguity.

Meanwhile, I want to criticize her for lying to children. Messing up children’s feedback systems creates dependencies on adult authorities that lead to problems later in life. This is extremely bad even if it produces short-term improvements (which it doesn’t). But I can only do this with confidence because she specified the intervention.

The Fate of Rome

This one is more overconfident than false. The Fate of Rome laid out very clearly how its author used new tools for recovering meteorological data to determine the weather 2000 years ago, and used that to analyze the Roman empire. From this new data, the book concludes that the peak of Rome was at least partially caused by a prolonged period of unusually good farming weather in the Mediterranean, and that the collapse started or was worsened when the weather began to regress to the mean.

I looked into the archeometeorology techniques and determined that they, in my judgement, had wider confidence intervals than the book indicated, which undercut the causality claims. I wish the book had been more cautious with its evidence, but I really appreciate that it laid out its reasoning so clearly, which made it really easy to look up the points I disagreed with.

False and Epistemically Illegible

Public Health and Airborne Pathogen Transmission

I don’t know exactly what the CDC’s or WHO’s current stance is on breathing-based transmission of covid, and I don’t care, because they were so wrong for so long in such illegible ways. 

When covid started, the CDC and WHO’s story was that it couldn’t be “airborne”, because the viral particle was > 5 microns. That phrasing was already anti-legible for material aimed at the general public, because airborne has a noticeably different definition in virology (“can persist in the air indefinitely”) than it does in popular use (“I can catch this through breathing”). But worse than that, they never provided any justification for the claim. This was reasonable for posters, but not everything was so space constrained, and when I looked in February 2021 I could not figure out where the belief that airborne transmission was rare was coming from. Some researchers eventually spent dozens to hundreds of hours on this and determined the 5 micron number probably came from studies of tuberculosis, which for various reasons needs to get deeper into the lungs than most pathogens and thus has stronger size constraints. If the CDC had pointed to its sources from the start, we could have determined the 5 micron limit was bullshit much more easily (the fact that many relevant people accepted it without that proof is a separate issue).

When I wrote up the Carol Dweck example, it was easy. I’m really confident in what Carol Dweck believed at the time of writing Mindset, so it’s really easy to describe why I disagree. Writing this section on the CDC was harder, because I cannot remember exactly what the CDC said and when they said it; a lot of the message lived in implications; their statements from early 2020 are now memory holed and while I’m sure I could find them on archive.org, it’s not really going to quiet the nagging fear that someone in the comments is going to pull up a different thing they said somewhere else that doesn’t say exactly what I claimed they said, or that I view as of a piece with what I cited but both statements are fuzzy enough that it would be a lot of work to explain why I think the differences are immaterial….

That fear and difficulty in describing someone’s beliefs is the hallmark of epistemic illegibility. The wider the confidence interval on what someone is claiming, the more work I have to do to question it.

And More…

The above was an unusually legible case of illegibility. Most illegible and false arguments don’t feel like that. They just feel frustrating and bad, like the other person is wrong but it’s too much work to demonstrate how. This is inconveniently similar to the feeling when the other person is right but you don’t want to admit it. I’m going to gesture some more at illegibility here, but it’s inherently an illegible concept, so there will be genuinely legible (to someone) works that resemble these points, and illegible works that don’t.

Marks of probable illegibility:

  • The person counters every objection raised, but the counters aren’t logically consistent with each other. 
  • You can’t nail down exactly what the person actually believes. This doesn’t mean they’re uncertain – saying “I think this effect is somewhere between 0.1x and 10000x” is very legible, and sometimes the best you can do given the data. It’s more that they imply a narrow confidence band, but the value that band surrounds moves depending on the subargument. Or they agree they’re being vague but they move forward in the argument as if they were specific. 
  • You feel like you understand the argument and excitedly tell your friends. When they ask obvious questions you have no answer or explanation. 

A good example of illegibly bad arguments that specifically try to ape legibility is a certain subset of alt-medicine advertisements. They start out very specific, with things like “there are 9804538905 neurons in your brain carrying 38923098 neurotransmitters”, with rigorous citations demonstrating those numbers. Then they introduce their treatment in a way that very strongly implies it works with those 38923098 transmitters but not, like, what it does to them or why we would expect that to have a particular effect. Then they wrap it up with some vague claims about wellness, so you’re left with the feeling you’ll definitely feel better if you take their pill, but if you complain about any particular problem it did not fix they have plausible deniability.

[Unfortunately the FDA’s rules around labeling encourage this illegibility even for products that have good arguments and evidence for efficacy on specific problems, so the fact that a product does this isn’t conclusive evidence it’s useless.]

Bonus Example: Against The Grain

The concept of epistemic legibility was in large part inspired by my first attempt at James C. Scott’s Against the Grain (if that name seems familiar: Scott also coined “legibility” in the sense in which I am using it), whose thesis is that key properties of grains (as opposed to other domesticates) enabled early states. For complicated reasons I read more of AtG without epistemic checking than I usually would, and then checks were delayed indefinitely, and then covid hit, and then my freelancing business really took off… the point is, when I read Against the Grain in late 2019, it felt like it was going to be the easiest epistemic spot check I’d ever done. Scott was so cooperative in labeling his sources, claims, and logical conclusions. But when I finally sat down to check his work, I found serious illegibilities.

I did the spot check over Christmas this year (which required restarting the book). It was maybe 95% as good as I remembered, which is extremely high. At chapter 4 (which is halfway through the book, due to the preface and introduction), I felt kinda overloaded and started to spot check some claims (mostly factual – the logical ones all seemed to check out as I read them). A little resentfully, I checked this graph.

This should have been completely unnecessary: Scott is a decent writer and scientist who was not going to screw up basic dates. I even split the claims section of the draft into two sections, “Boring” and “Interesting”, because I obviously wasn’t going to come up with anything checking names and dates and I wanted that part to be easy to skip.

I worked from the bottom. At first, it was a little more useful than I expected – a major new interpretation of the data came out the same year the book was published, so Scott’s timing on anatomically modern humans was out of date, but not in a way that reflected poorly on him.

Finally I worked my way up to “first walled, territorial state”. Not thinking super hard, I googled “first walled city”, and got a date 3000 years before the one Scott cites. Not a big deal, he specified state, not walls. What can I google to find that out? “Earliest state”, obviously, and the first google hit does match Scott’s timing, but… what made something a state, and how can we assess those traits from archeological records? I checked, and nowhere in the preface, introduction, or first three chapters was “state” defined. No work can define every term it uses, but this is a pretty important one for a book whose full title is Against the Grain: A Deep History of the Earliest States.

You might wonder if “state” had a widespread definition such that it didn’t need to be defined. I think this is not the case for a few reasons. First, Against The Grain is aimed at a mainstream audience, and that requires defining terms even if they’re commonly known by experts. Second, even if a reader knew the common definition of what made a state, how you determine whether something was a state or merely a city from archeology records is crucial for understanding the inner gears of the book’s thesis. Third, when Scott finally gives a definition, it’s not the same as the one on Wikipedia.

[longer explanation] Among these characteristics, I propose to privilege those that point to territoriality and a specialized state apparatus: walls, tax collection, and officials.

Against the Grain

States are minimally defined by anthropologist David S. Sandeford as socially stratified and bureaucratically governed societies with at least four levels of settlement hierarchy (e.g., a large capital, cities, villages, and hamlets)

Wikipedia (as of 2021-12-26)

These aren’t incompatible, but they’re very far from isomorphic. I expect that even though there’s a fairly well accepted definition of state in the relevant field(s), there are disputed edges that matter very much for this exact discussion, in which Scott views himself as pushing back against the commonly accepted narrative. 

To be fair, the definition of state was not that relevant to chapters 1-3, which focus on pre-state farming. Unless, you know, your definition of “state” differs sufficiently from his. 

Against The Grain was indeed very legible in other ways, but it loses basically all of its accrued legibility points, and more, for not giving even a cursory definition of a crucial term in the introduction, and for doing an insufficient job halfway through the book.

This doesn’t mean the book is useless, but it does mean it was going to be more work to extract value from than I felt like putting in on this particular topic.

Why is this Important?

First of all, it’s costing me time.

I work really hard to believe true things and disbelieve false things, and people who argue illegibly make that harder, especially when people I respect treat arguments as more proven than their level of legibility allows them to be. I expect having a handle with which to say “no I don’t have a concise argument about why this work is wrong, and that’s a fact about the work” to be very useful.

More generally, I think there’s a range of acceptable legibility levels for a given goal, but we should react differently based on which legibility level the author chose, and that arguments will be more productive if everyone involved agrees on both the legibility level and on the proper response to a given legibility level. One rule I have is that it’s fine to declare something a butterfly idea and thus off limits to sharp criticism, but that inherently limits the calls to action you can make based on that idea. 

Eventually I hope people will develop some general consensus around the rights and responsibilities of a given level of legibility, and that this will make arguments easier and more productive. Establishing those rules is well beyond the scope of this post. 

Legibility vs Inferential Distance

You can’t explain everything to everyone all of the time. Some people are not going to have the background knowledge to understand a particular essay of yours. In cases like this, legibility is defined as “the reader walks away with the understanding that they didn’t understand your argument”. Illegibility in this case is when they erroneously think they understand your argument. In programming terms, it’s the difference between a failed function call returning a useful error message (legible), versus failing silently (illegible).  
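To make the programming analogy concrete, here is a minimal hypothetical sketch (the function and catalog names are my own invention, not from the post): a legible lookup raises a descriptive error when it can’t answer, while an illegible one hands back a plausible-looking default, so the caller walks away thinking they understood the answer.

```python
def lookup_price_legible(catalog, item):
    """Legible failure: the caller learns exactly what went wrong."""
    if item not in catalog:
        raise KeyError(f"'{item}' is not in the catalog; known items: {sorted(catalog)}")
    return catalog[item]


def lookup_price_illegible(catalog, item):
    """Illegible failure: a missing item silently becomes a price of 0,
    which is indistinguishable from the item being free."""
    return catalog.get(item, 0)


catalog = {"apple": 3, "bread": 2}

# The legible version fails loudly, with enough context to diagnose the problem...
try:
    lookup_price_legible(catalog, "milk")
except KeyError as err:
    print(err)

# ...while the illegible version returns a confident-looking wrong answer.
print(lookup_price_illegible(catalog, "milk"))  # prints 0
```

The reader of an illegible argument is in the second caller’s position: they got *an* answer, just not a flag telling them it wasn’t the real one.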

A particularly dangerous way this can occur is when you’re using terms of art (meaning: words or phrases that have very specific meanings within a field) that are also common English words. You don’t want someone thinking you’re dismissing a medical miracle because you called it statistically insignificant, or invalidating the concept of thought work because it doesn’t apply force to move an object.

Cruelly, misunderstanding becomes more likely the more similar the technical definition is to the English definition. I watched a friend use the term “common knowledge” to mean “everyone knows that everyone knows, and everyone knows that everyone knows… and that metaknowledge enables actions that wouldn’t be possible if it was merely true that everyone knew and thought they were the only one, and those additional possible actions are extremely relevant to our current conversation” to another friend who thought “common knowledge” meant “knowledge that is common”, and had I not intervened the ensuing conversation would have been useless at best.

Costs of Legibility

The obvious ones are time and mental effort, and those should not be discounted. Given a finite amount of time, legibility on one work trades off against another piece being produced at all, and that may be the wrong call.

A second is that legibility can make things really dry. Legibility often means precision, and precision is boring, especially relative to work optimized to be emotionally activating. 

Beyond that, legibility is not always desirable. For example, unilateral legibility in an adversarial environment makes you vulnerable, as you’re giving people the keys to the kingdom of “effective lies to tell you”. 

Lastly, premature epistemic legibility kills butterfly ideas, which are beautiful and precious and need to be defended until they can evolve combat skills.

How to be Legible

This could easily be multiple posts; I’m including a how-to section here more to help convey the concept of epistemic legibility than to write a comprehensive guide to achieving it. The list is not complete, and items on it can be faked. I think a lot of legibility is downstream of something harder to describe. Nonetheless, here are a few ways to make yourself more legible, when that is your goal.

  • Make it clear what you actually believe.
    • Watch out for implicit quantitative estimates (“probably”, “a lot”, “not very much”) and make them explicit, even if you have a very wide confidence interval. The goals here are twofold: the first is to make your thought process explicit to you. The second is to avoid confusion – people can mean different things by “many”, and I’ve seen some very long arguments suddenly resolve when both sides gave actual numbers.
  • Make clear the evidence you are basing your beliefs on.
    • This need not mean “scientific fact” or “RCT”. It could be “I experienced this a bunch in my life” or “gut feeling” or “someone I really trust told me so”. Those are all valid reasons to believe things. You just need to label them.
  • Make that evidence easy to verify.
    • More accessible sources are better.
      • Try to avoid paywalls and $900 books with no digital versions.
If it’s a large work, use page numbers or timestamps for the specific claim, removing the burden of reading an entire book to check your work (but if your claim rests on a large part of the work, better to say that than to artificially constrict your evidence).
    • One difficulty is when the evidence is in a pattern, and no one has rigorously collated the data that would let you demonstrate it. You can gather the data yourself, but if it takes a lot of time it may not be worth it. 
    • In times past, when I wanted to refer to a belief I had in a blog post but didn’t have a citation for it, I would google the belief and link to the first article that came up. I regret this. Just because an article agrees with me doesn’t mean it’s good, or that its reasoning is my reasoning. So one, I might be passing on a bad argument. Two, I know that, so if someone discredits the linked article it doesn’t necessarily change my mind, or even create in me a feeling of obligation to investigate. I now view it as more honest to say “I believe this but only vaguely remember the reasons why”, and if it ends up being a point of contention I can hash it out later.
  • Make clear the logical steps between the evidence and your final conclusion.
  • Use examples. Like, so many more examples than you think. Almost everything could benefit from more examples, especially if you make it clear when they’re skippable so people who have grokked the concept can move on.
    • It’s helpful to make clear when an example is evidence vs when it’s a clarification of your beliefs. The difference is if you’d change your mind if the point was proven false: if yes, it’s evidence. If you’d say “okay fine, but there are a million other cases where the principle holds”, it’s an example.  One of the mistakes I made with early epistemic spot checks was putting too much emphasis on disproving examples that weren’t actually evidence.
  • Decide on an audience and tailor your vocabulary to them. 
    • All fields have words that mean something different in the field than in general conversation, like “work”, “airborne”, and “significant”. If you’re writing within the field, using those terms helps with legibility by conveying a specific idea very quickly. If you’re communicating outside the field, using such terms without definition hinders legibility, as laypeople misapply their general knowledge of the English language to your term of art and predictably get it wrong. You can help on the margins by defining the term in your text, but I consider some uses of this iffy.
      • The closer the technical definition of a term is to its common usage, the more likely this is to be a problem because it makes it much easier for the reader to think they understand your meaning when they don’t.
    • At first I wanted to yell at people who use terms of art in work aimed at the general population, but sometimes it’s unintentional, and sometimes it’s a domain expert who’s bad at public speaking and has been unexpectedly thrust onto a larger stage, and we could use more of the latter, so I don’t want to punish people too much here. But if you’re, say, a journalist who writes a general populace book but uses an academic term of art in a way that will predictably be misinterpreted, you have no such excuse and will go to legibility jail. 
    • A skill really good interviewers bring to the table is recognizing terms of art that are liable to confuse people and prompting domain experts to explain them.
  • Write things down, or at least write down your sources. I realize this is partially generational and Gen Z is more likely to find audio/video more accessible than written work, and accessibility is part of legibility. But if you’re relying on a large evidence base it’s very disruptive to include it in audio and very illegible to leave it out entirely, so write it down.
  • Follow all the rules of normal readability – grammar, paragraph breaks, no run-on sentences, etc.

A related but distinct skill is making your own thought process legible. John Wentworth describes that here.

Synthesis

“This isn’t very epistemically legible to me” is a valid description (when true), and a valid reason not to engage. It is not automatically a criticism.

“This idea is in its butterfly stage”, “I’m prioritizing other virtues” or “this wasn’t aimed at you” are all valid defenses against accusations of illegibility as a criticism (when true), but do not render the idea more legible.

“This call to action isn’t sufficiently epistemically legible to the people it’s aimed at” is an extremely valid criticism (when true), and we should be making it more often.

I apologize to Carol Dweck for 70% of the vigor of my criticism of her work; she deserves more credit than I gave her for making it so easy to do that. I still think she’s wrong, though.

Epilogue: Developing a Standard for Legibility

As mentioned above, I think the major value add from the concept of legibility is that it lets us talk about whether a given work is sufficiently legible for its goal. To do this, we need to have some common standards for how much legibility a given goal demands. My thoughts on this are much less developed and by definition common standards need to be developed by the community that holds them, not imposed by a random blogger, so I’ll save my ideas for a different post. 

Epilogue 2: Epistemic Cooperation

Epistemic legibility is part of a broader set of skills/traits I want to call epistemic cooperation. Unfortunately, legibility is the only one I have a really firm handle on right now (to the point I originally conflated the concepts, until a few conversations highlighted the distinction- thanks friends!). I think epistemic cooperation, in the sense of “makes it easy for us to work together to figure out the truth” is a useful concept in its own right, and hope to write more about it as I get additional handles. In the meantime, there are a few things I want to highlight as increasing or signalling cooperation in general but not legibility in particular:

  • Highlight ways your evidence is weak, related things you don’t believe, etc.
  • Volunteer biases you might have.
  • Provide reasons people might disagree with you.
  • Don’t emotionally charge an argument beyond what’s inherent in the topic, but don’t suppress emotion below what’s inherent in the topic either.
  • Don’t tie up brain space with data that doesn’t matter.

Thanks to Ray Arnold, John Salvatier, John Wentworth, and Matthew Graves for discussion on this post. 

Butterfly Ideas

Or “How I got my hyperanalytical friends to chill out and vibe on ideas for 5 minutes before testing them to destruction”

Sometimes talking with my friends is like intellectual combat, which is great. I am glad I have such strong cognitive warriors on my side. But not all ideas are ready for intellectual combat. If I don’t get my friends on board with this, some of them will crush an idea before it gets a chance to develop, which feels awful and can kill off promising avenues of investigation. It’s like showing a beautiful, fragile butterfly to your friend to demonstrate the power of flight, only to have them grab it and crush it in their hands, then point to the mangled corpse as proof that butterflies not only don’t fly, but can’t fly, look how busted their wings are.

You know who you are

When I’m stuck in a conversation like that, it has been really helpful to explicitly label things as butterfly ideas. This has two purposes. First, it’s a shorthand for labeling what I want (nurturance and encouragement). Second, it explicitly labels the idea as not ready for prime time in ways that make it less threatening to my friends. They can support the exploration of my idea without worrying that support of exploration conveys agreement, or agreement conveys a commitment to act.

This is important because very few ideas start out ready for the rigors of combat. If they’re not given a sheltered period, they will die before they become useful. This cuts us off from a lot of goodness in the world. Examples:

  • A start-up I used to work for had a keyword that meant “I have a vague worried feeling I want to discuss without justifying”. This let people bring up concerns before they had an ironclad case for them and made statements that could otherwise have felt like intense criticism feel more like information sharing (they’re not asserting this will definitely fail, they’re asserting they have a feeling that might lead to some questions). This in turn meant that problems got brought up and addressed earlier, including problems in the classes “this is definitely gonna fail and we need to make major changes” and “this is an excellent idea but Bob is missing the information that would help him understand why”.
    • This keyword was “FUD (fear, uncertainty, doubt)”. It is used in exactly the opposite way in cryptocurrency circles, where it means “you are trying to increase our anxiety with unfounded concerns, and that’s bad”. Words are tricky.
  • Power Buys You Distance From The Crime started out as a much less defensible seed of an idea with a much worse explanation. I know that had I talked about it in public then, it would have caused a bunch of unproductive yelling that made it harder to think, because I did and it did (but later, when the idea was ready, intellectual combat with John Wentworth improved it further).
  • The entire genre of “Here’s a cool new emotional tool I’m exploring”
  • The entire genre of “I’m having a feeling about a thing and I don’t know why yet”

I’ve been on the butterfly-crushing end of this myself. I’m thinking of a particular case last year where a friend brought up an idea that, if true, would require costly action on my part. I started arguing with the idea, and they snapped at me to stop ruining their dreams. I chilled out, and we had a long discussion about their goals, how they interpreted some evidence, why they thought a particular action might further said goals, etc.

A week later all of my objections to the specific idea were substantiated and we agreed not to do the thing- but thanks to the conversation we had in the meantime, I have a better understanding of them and what kinds of things would be appealing to them in the future. That was really valuable to me and I wouldn’t have learned all that if I’d crushed the butterfly in the beginning.

Notably, checking out that idea was fairly expensive, and only worth it because this was an extremely close friend (which both made the knowledge of them more valuable, and increased the payoff to helping them if they’d been right). If they had been any less close, I would have said “good luck with that” and gone about my day, and that would have been a perfectly virtuous reaction. 

I almost never discuss butterfly ideas on the public internet, or even in 1:many channels. Even when people don’t actively antagonize them, the environment of Facebook or even large group chats means that people often read with half their brain and respond to a simplified version of what I said. For a class of ideas that live and die by context and nuance and pre-verbal intuitions, this is crushing. So what I write in public ends up being on the very defensible end of the things I think. This is a little bit of a shame, because the returns to finding new friends to study your particular butterflies with are so high, but c’est la vie.

This can play out a few ways in practice. Sometimes someone will say “this is a butterfly idea” before they start talking. Sometimes when someone is being inappropriately aggressive towards an idea the other person will snap “will you please stop crushing my butterflies!” and the other will get it. Sometimes someone will overstep, read the other’s facial expression, and say “oh, that was a butterfly, wasn’t it?”. All of these are marked improvements over what came before, and have led to more productive discussions with less emotional pain on both sides.