Grant Making and Grand Narratives

Another inside baseball EA post

The Lightspeed application asks:  “What impact will [your project] have on the world? What is your project’s goal, how will you know if you’ve achieved it, and what is the path to impact?”

LTFF uses an identical question, and SFF puts it even more strongly (“What is your organization’s plan for improving humanity’s long term prospects for survival and flourishing?”). 

I’ve applied to all three of these at various points, and I’ve never liked this question. It feels like it wants a grand narrative of an amazing, systemic project that will measurably move the needle on x-risk. But I’m typically applying for narrowly defined projects, like “Give nutrition tests to EA vegans and see if there’s a problem”. I think this was a good project. I think this project is substantially more likely to pay off than underspecified alignment strategy research, and arguably has as good a long tail. But when I look at “What impact will [my project] have on the world?” the project feels small and sad. I feel an urge to make things up, and express far more certainty about far more impact than I believe. Then I want to quit, because lying is bad but listing my true beliefs feels untenable.

I’ve gotten better at this over time, but I know other people with similar feelings, and I suspect it’s a widespread issue (I encourage you to share your experience in the comments so we can start figuring that out).

I should note that the pressure for grand narratives has good points; funders are in fact looking for VC-style megahits. I think that narrow projects are underappreciated, but for purposes of this post that’s beside the point: I think many grantmakers are undercutting their own preferred outcomes by using questions that implicitly push for a grand narrative. I think they should probably change the form, but I also think we applicants can partially solve the problem by changing how we interact with the current forms.

My goal here is to outline the problem, gesture at some possible solutions, and create a space for other people to share data. I didn’t think about my solutions for very long; I am undoubtedly missing a bunch, and what I do have still needs workshopping, but it’s a place to start.
 

More on the costs of the question

Pushes away the most motivated people

Even if you only care about subgoal G instrumentally, G may be best accomplished by people who care about it for its own sake. Community building (real building, not a euphemism for recruitment) benefits from knowing the organizer cares about participants and the community as people and not just as potential future grist for the x-risk mines.* People repeatedly recommended a community builder friend of mine apply for funding, but they struggled because they liked organizing for its own sake, and justifying it in x-risk terms felt bad. 

[*Although there are also downsides to organizers with sufficiently bad epistemics.]

Additionally, if G is done by someone who cares about it for its own sake, then it doesn’t need to be done by someone who’s motivated by x-risk. Highly competent, x-risk-motivated people are rare and busy, and we should be delighted by opportunities to take things off their plates.
 

Vulnerable to grift

You know who’s really good at creating exactly the grand narrative a grantmaker wants to hear? People who feel no constraint to be truthful. You can try to compensate for this by looking for costly signals of loyalty or care, but those have their own problems. 

 

Punishes underconfidence

Sometimes people aren’t grifting, they really really believe in their project, but they’re wrong. Hopefully grantmakers are pretty good at filtering out those people. But it’s fairly hard to correct for people who are underconfident, and impossible to correct for people who never apply because they’re intimidated. 

Right now people try to solve the second problem by loudly encouraging everyone to apply to their grant. That creates a lot of work for evaluators, and I think it is bad for the people with genuinely mediocre projects who will never get funding. You’re asking them to burn their time so that you don’t miss someone else’s project. Having a form that allows for uncertainty and modest goals is a more elegant solution.
 

Corrupts epistemics

Not that much. But I think it’s pretty bad if people are forced to choose between “play the game of exaggerating impact” and “go unfunded”. Even if the game is in fact learnable, it’s a bad use of their time and weakens the barriers to lying in the future. 

Pushes projects to grow beyond their ideal scope

Recently I completed a Lightspeed application for a lit review on stimulants. I felt led by the form to create a grand narrative of how the project could expand, including developing a protocol for n of 1 tests so individuals could tailor their medication usage. I think that having that protocol would be great and I’d be delighted if someone else developed it, but I don’t want to develop it myself. I noticed the feature creep and walked it back before I submitted the form, but the fact that the form pushes this is a cost.  

This one isn’t caused by the impact question alone. The questions asking about potential expansion are a much bigger deal, but would also be costlier to change. There are many projects and organizations where “what would you do with more money?” is a straightforwardly important question.
 

Rewards cultural knowledge independent of merit

There’s nothing stopping you from submitting a grant with the theory of change “T will improve EA epistemics”, without justifying it further. I did that recently, and it worked. But I only felt comfortable doing that because I had a pretty good model of the judges and because it was a Lightspeed grant, which explicitly says they’ll ask you if they have follow-up questions. Without either of those I think I would have struggled to figure out where to stop explaining. Probably there are equally good projects from people with less knowledge of the grantmakers, and it’s bad that we’re losing those proposals.

Brainstorming fixes

I’m a grant-applier, not a grant-maker. These are some ideas I came up with over a few hours. I encourage other people to suggest more fixes, and grant-makers to tell us why they won’t work or what constraints we’re not aware of. 
 

  • Separate “why do you want to do this?” or “why do you think this is good?” from “how will this reduce x-risk?”. Just separating the questions will reduce the epistemic corruption.
  • Give a list of common instrumental goals that people can treat as terminal for the purpose of this form. They still need to justify the chain between their action and that instrumental goal, but they don’t need to justify why achieving that goal would be good.
    • E.g. “improve epistemic health of effective altruism community”, or “improve productivity of x-risk researchers”.
    • This opens opportunities for goodharting, or for imprecise description leaving you open to implementing bad versions of good goals. I think there are ways to handle this that end up being strongly net beneficial.
    • I would advocate against “increase awareness” and “grow the movement” as goals. Growth is only generically useful when you know what you want the people to do. Awareness of specific things among specific people is a more appropriate scope. 
    • Note that the list isn’t exhaustive, and if people want to gamble on a different instrumental goal that’s allowed. 
  • Let applicants punt to others to explain the instrumental impact of what is to them a terminal goal.
    • My community organizer friend could have used this. Many people encouraged them to apply for funding because they believed the organizing was useful to x-risk efforts. Probably at least a few were respected by grantmakers and would have been happy to make the case. But my friend felt gross doing it themselves, so it created a lot of friction in getting very necessary financing.
  • Let people compare their projects to others. I struggle to say “yeah if you give me $N I will give you M microsurvivals”. How could I possibly know that? But it often feels easy to say “I believe this is twice as impactful as this other project you funded”, or “I believe this is in the nth percentile of grants you funded last year”.
    • This is tricky because grants don’t necessarily mean a funder believes a project is straightforwardly useful. But I think there’s a way to make this doable. 
    • E.g. funders could give examples with percentiles. I think Open Phil did something like this in the last year, although I can’t find it now. The lower percentiles could be hypothetical, to avoid implicit criticism.
  • Lightspeed’s implication that they’ll ask follow-up questions is very helpful. With other forms there’s a drive to cover all possible bases very formally, because I won’t get another chance. With Lightspeed it felt available to say “I think X is good because it will lead to Y”, and let them ask me why Y was good if they don’t immediately agree.
  • When asking about impact, lose the phrase “on the world”. The primary questions are what the goal is, how they’ll know if it’s accomplished, and what the feedback loops are. You can have an optional question asking for the effects of meeting the goal.
    • I like the word “effects” more than “impact”, which is a pretty loaded term within EA and x-risk. 
  • A friend suggested asking “why do you want to do this?”, and having “look, I just like organizing social gatherings” be an acceptable answer. I worry that this will end up being a fake question where people feel the need to create a different grand narrative about how much they genuinely value their project for its own sake, but maybe there’s a solution to that.
  • Maybe have separate forms for large ongoing organizations, and narrow projects done by individuals. There may not be enough narrow projects to justify this, it might be infeasible to create separate forms for all types of applicants, but I think it’s worth playing with. 
  • [Added 7/2]: Ask for 5th/50th/99th/99.9th percentile outcomes, to elicit both dreams and outcomes you can be judged for failing to meet.
  • [Your idea here]



 

I hope the forms change to explicitly encourage things like the above list, but I don’t think applicants need to wait. Grantmakers are reasonable people who I can only imagine are tired of reading mediocre explanations of why community building is important. I think they’d be delighted to be told “I’m doing this because I like it, but $NAME_YOU_HIGHLY_RESPECT wants my results” (grantmakers: if I’m wrong please comment as soon as possible).

Grantmakers: I would love it if you would comment with any thoughts, but especially what kinds of things you think people could do themselves to lower the implied grand-narrative pressure on applications. I’m also very interested in why you like the current forms, and what constraints shaped them.

Grant applicants: I think it will be helpful to the grantmakers if you share your own experiences, how the current questions make you feel and act, and what you think would be an improvement. I know I’m not the only person who is uncomfortable with the current forms, but I have no idea how representative I am. 

EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem

Introduction

Effective altruism prides itself on truthseeking. That pride is justified in the sense that EA is better at truthseeking than most members of its reference category, and unjustified in that it is far from meeting its own standards. We’ve already seen dire consequences of the inability to detect bad actors who deflect investigation into potential problems, but by its nature you can never be sure you’ve found all the damage done by epistemic obfuscation because the point is to be self-cloaking. 

My concern here is for the underlying dynamics of EA’s weak epistemic immune system, not any one instance. But we can’t analyze the problem without real examples, so individual instances need to be talked about. Worse, the examples that are easiest to understand are almost by definition the smallest problems, which makes any scapegoating extra unfair. So don’t.

This post focuses on a single example: vegan advocacy, especially around nutrition. I believe vegan advocacy as a cause has both actively lied and raised the cost for truthseeking, because they were afraid of the consequences of honest investigations. Occasionally there’s a consciously bad actor I can just point to, but mostly this is an emergent phenomenon from people who mean well, and have done good work in other areas. That’s why scapegoating won’t solve the problem: we need something systemic. 

In the next post I’ll do a wider but shallower review of other instances of EA being hurt by a lack of epistemic immune system. I already have a long list, but it’s not too late for you to share your examples.

Definitions

I picked the words “vegan advocacy” really specifically. “Vegan” sometimes refers to advocacy and sometimes to just a plant-exclusive diet, so I added “advocacy” to make it clear.

I chose “advocacy” over “advocates” for most statements because this is a problem with the system. Some vegan advocates are net truthseeking and I hate to impugn them. Others would like to be epistemically virtuous but end up doing harm due to being embedded in an epistemically uncooperative system. Very few people are sitting on a throne of plant-based imitation skulls twirling their mustache thinking about how they’ll fuck up the epistemic commons today. 

When I call for actions I say “advocates” and not “advocacy” because actions are taken by people, even if none of them bear much individual responsibility for the problem. 

I specify “EA vegan advocacy” and not just “vegan advocacy” not because I think mainstream vegan advocacy is better, but because 1. I don’t have time to go after every wrong advocacy group in the world. 2. Advocates within Effective Altruism opted into a higher standard. EA has a right and responsibility to maintain the standards of truth it advocates, even if the rest of the world is too far gone to worry about. 

Audience

If you’re entirely uninvolved in effective altruism you can skip this, it’s inside baseball and there’s a lot of context I don’t get into.

How EA vegan advocacy has hindered truthseeking

EA vegan advocacy has both pushed falsehoods and punished people for investigating questions it doesn’t like. It manages this even for positions that 90%+ of effective altruism and the rest of the world agree with, like “veganism is a constraint”. I don’t believe its arguments convince anyone directly, but end up having a big impact by making inconvenient beliefs too costly to discuss. This means new entrants to EA are denied half of the argument, and harm themselves due to ignorance.

This section outlines the techniques I’m best able to name and demonstrate. For each technique I’ve included examples. Comments on my own posts are heavily overrepresented because they’re the easiest to find; “go searching through posts on veganism to find the worst examples” didn’t feel like good practice. I did my best to quote and summarize accurately, although I made no attempt to use a representative sample. I think this is fair because a lot of the problem lies in the fact that good comments don’t cancel out bad, especially when the good comments are made in parallel rather than directly arguing with the bad. I’ve linked to the source of every quote and screenshot, so you can (and should) decide for yourself. I’ve also created a list of all of my own posts I’m drawing from, so you can get a holistic view.

My posts:

I should note I quote some commenters and even a few individual comments in more than one section, because they exhibit more than one problem. But if I refer to the same comment multiple times in a row I usually only link to it once, to avoid implying more sources than I have. 

My posts were posted on my blog, LessWrong, and EAForum. In practice the comments I drew from came from LessWrong (white background) and EAForum (black background).  I tried to go through those posts and remove all my votes on comments (except the automatic vote for my own comments) so that you could get an honest view of how the community voted without my thumb on the scale, but I’ve probably missed some, especially on older posts. On the main posts, which received a lot of traffic, I stuck to well-upvoted comments, but I included some low (but still positive) karma comments from unpopular posts. 

The goal here is to make these anti-truthseeking techniques legible for discussion, not develop complicated ways to say “I don’t like this”, so when available I’ve included counter examples. These are comments that look similar to the ones I’m complaining about, but are fine or at least not suffering from the particular flaw in that section. In doing this I hope to keep the techniques’ definitions narrow.

Active suppression of inconvenient questions

A small but loud subset of vegan advocacy will say outright that you shouldn’t say true things, because it leads to outcomes they dislike. This accusation is even harsher than “not truthseeking”, and would normally be very hard to prove. If I say “you’re saying that because you care more about creating vegans than the health of those you create”, and they say “no I’m not”, I don’t really have a comeback. I can demonstrate that they’re wrong, but not their motivation. Luckily, a few people said the quiet part out loud.

Commenter Martin Soto pushed back very hard on my first nutrition testing study. Finally I asked him outright if he thought it was okay to share true information about vegan nutrition. His response was quite thoughtful and long, so you should really go read the whole thing, but let me share two quotes:

He goes on to say:

And in a later comment

EDIT 2023-10-03: Martin disputes my summary of his comments. I think it’s good practice to link to disputes like this, even though I stand by my summary. I also want to give a heads-up that I see his comments in the dispute thread as continuing the patterns I describe (which makes that thread a tax on the reader). If you want to dig into this, I strongly suggest you first read his original comments and come up with your own summary, so you can compare that to each of ours.

The charitable explanation here is that my post focuses on naive veganism, and Soto thinks that’s a made-up problem. He believes this because all of the vegans he knows (through vegan advocacy networks) are well-educated on nutrition. There are a few problems here, but the most fundamental is that enacting his desired policy of suppressing public discussion of nutrition issues with plant-exclusive diets will prevent us from getting the information to know if problems are widespread. My post and a commenter’s report on their college group are apparently the first time he’s heard of vegans who didn’t live and breathe B12. 

I have a lot of respect for Soto for doing the math and so clearly stating his position that “the damage to people who implement veganism badly is less important to me than the damage to animals caused by eating them”. Most people flinch away from explicit trade-offs like that, and I appreciate that he did them and own the conclusion. But I can’t trust his math because he’s cut himself off from half the information necessary to do the calculations. How can he estimate the number of vegans harmed or lost due to nutritional issues if he doesn’t let people talk about them in public?

In fact the best data I found on this was from Faunalytics, which found that ~20% of veg*ns drop out due to health reasons. This suggests to me a high chance his math is wrong and will lead him to do harm by his own standards.

EDIT 2023-10-04: Using Faunalytics numbers for self-reported health issues and improvements after quitting veg*nism, I calculated that 20% of veg*ns develop health issues. This number is sensitive to your assumptions; I consider 20% conservative, but it could be an overestimate. I encourage you to read the whole post and play with my model, and of course read the original work.

Most people aren’t nearly this upfront. They will go through the motions of calling an idea incorrect before emphasizing how it will lead to outcomes they dislike. But the net effect is a suppression of the exploration of ideas they find inconvenient. 

This post on Facebook is a good example. Normally I would consider Facebook posts out of bounds, especially ones this old (over five years). Facebook is a casual space and I want people to be able to explore ideas without worrying that they’re creating a permanent record that will be used against them. In this case I felt that because the post was set to public and was a considered statement (rather than an off-the-cuff reply), the truth value outweighed the chilling effect. But because it’s so old and I don’t know the author’s current opinion, I’m leaving out their name and not linking to the post.

The author is a midlist EA: I’d heard of them for other reasons, but they’re certainly not EA-famous.

There are posts very similar to this one I would have been fine with, maybe even joyful about. You could present evidence against the claims that X is harmful, or push people to verify things before repeating them, or suggest we reserve the word poison for actual kill-you-dead molecules and not complicated compound constructions with many good parts and only weak evidence of mild long-term negative effects. But what they actually did was name-check the idea that X is fine before focusing on the harm to animals caused by repeating the claim, which is exactly what you’d expect if the health claims were true but inconvenient. I don’t know what this author actually believes, but I do know that focusing on the consequences when the facts are in question is not truthseeking.

A subtler version comes from the AHS-2 post. At the time of this comment the author, Rockwell, described herself as the leader of EA NYC and an advisor to philanthropists on animal suffering, so this isn’t some rando having a feeling. This person has some authority.

This comment more strongly emphasizes the claim that my beliefs are wrong, not just inconvenient. And if they’d written the counter-argument they promised, I’d be putting this in the counter-examples section. But it’s been three months and they have not written anything where I can find it, nor responded to my inquiries. So even if the literal claim were correct, she’s using a technique whose efficacy is independent of truth.

Over on the Change My Mind post, the top comment says that vegan advocacy is fine because it’s no worse than fast food or breakfast cereal ads.

I’m surprised someone would make this comment. But what really shocks me is the complete lack of pushback from other vegan advocates. If I heard an ally describe our shared movement as no worse than McDonald’s, I would injure myself in my haste to repudiate them.

Counter-Examples

This post on EAForum came out while I was finishing this post. The author asks if they should abstain from giving bad reviews to vegan restaurants, because it might lead to more animal consumption, which would be a central example of my complaint. But the comments are overwhelmingly “no, there’s not even a good consequentialist argument for that”, and the author appears to be taking that to heart. So from my perspective this is a success story.

Ignore the arguments people are actually making

I’ve experienced this pattern way too often.

Me: goes out of my way to say not-X in a post
Comment: how dare you say X! X is so wrong!
Me: here’s where I explicitly say not-X.
*crickets*

This is by no means unique to posts about veganism. “They’re yelling at me for an argument I didn’t make” is a common complaint of mine. But it happens so often, and so explicitly, in the vegan nutrition posts. Let me give some examples.

My post:

Commenter:

My post:

Commenters:

My post:

Commenter: 

My post: 

Commenter:

My post:

Commenter:

You might be thinking “well those posts were very long and honestly kind of boring, it would be unreasonable to expect people to read everything”. But the length and precision are themselves a response to people arguing with positions I don’t hold (and failing to update when I clarify). The only things I can do are spell out all of my beliefs or not spell out all of my beliefs, and either way ends with comments arguing against views I don’t have. 

Frame control/strong implications not defended/fuzziness

This is the hardest one to describe. Sometimes people say things, and I disagree, and we can hope to clarify that disagreement. But sometimes people say things and responding is like nailing jello to a wall. Their claims aren’t explicit, or they’re individually explicit but aren’t internally consistent, or play games with definitions. They “counter” statements in ways that might score a point in debate club but don’t address the actual concern in context. 

One example is the top-voted comment on LW on Change My Mind

Over a very long exchange I attempt to nail down his position: 

  • Does he think micronutrient deficiencies don’t exist? No, he agrees they do.
  • Does he think that they can’t cause health issues? No, he agrees they do.
  • Does he think this just doesn’t happen very often, or is always caught? No, if anything he thinks the Faunalytics study underestimates the veg*n attrition due to medical issues.

So what exactly does he disagree with me on? 

He also had a very interesting exchange with another commenter. That thread got quite long, and fuzziness by its nature doesn’t lend itself to excerpts, so you should read the whole thing, but I will share highlights. 

Before the screenshot: Wilkox acknowledges that B12 and iron deficiencies can cause fatigue, and veganism can cause these deficiencies, but it’s fine because if people get tired they can go to a doctor.

That reply doesn’t contain any false statements, and would be perfectly reasonable if we were talking about ER triage protocols. But it’s irrelevant when the conversation is “can we count on veganism-induced fatigue being caught?”. (The answer is no, and only some of the reasons have been brought up here.)

You can see how the rest of this conversation worked out in the Sound and Fury section.

A much, much milder example can be seen in What vegan food resources have you found useful?. This was my attempt to create something uncontroversially useful, and I’d call it a modest success. The post had 20-something karma on LW and EAForum, and there were several useful-looking resources shared on EAForum. But it also got the following comment on LW: 

I picked this example because it only takes a little bit of thought to see the jujitsu, so little it barely counts. He disagreed with my implicit claim that… well okay here’s the problem. I’m still not quite sure where he disagrees. Does he think everyone automatically eats well as a vegan? That no one will benefit from resources like veganhealth.org? That no one will benefit from a cheat sheet for vegan party spreads? That there is no one for whom veganism is challenging? He can’t mean that last one because he acknowledges exceptions in his later comment, but only because I pushed back. Maybe he thinks that the only vegans who don’t follow his steps are those with medical issues, and that no-processed-food diets are too unpopular to consider? 

I don’t think this was deliberately anti-truthseeking, because if it was he would have stopped at “nothing special” instead of immediately outlining the special things his partner does. That was fairly epistemically cooperative. But it is still an example of strong claims made only implicitly. 

Counter-Examples

I think this comment makes a claim (“vegans moving to naive omnivorism will hurt themselves”) clearly, and backs it up with a lot of details.

The tone is kind of obnoxious and he’s arguing with something I never claimed, but his beliefs are quite clear. I can immediately understand which beliefs of his I agree with (“vegans moving to naive omnivorism will hurt themselves” and “that would be bad”) and make good guesses at implicit claims I disagree with (“and therefore we should let people hurt themselves with naive veganism”? “I [Elizabeth] wouldn’t treat naive mass conversion to omnivorism seriously as a problem”?). That’s enough to count as epistemically cooperative.

Sound and fury, signifying no substantial disagreement 

Sometimes someone comments with an intense, strongly worded, perhaps actively hostile, disagreement. After a laborious back and forth, the problem dissolves: they acknowledge I never held the position they were arguing with, or they don’t actually disagree with my specific claims. 

Originally I felt happy about these, because “mostly agreeing” is an unusually positive outcome for that opening. But these discussions are grueling. It is hard to express kindness and curiosity towards someone yelling at you for a position you explicitly disclaimed. Any one of these stories would be a success but en masse they amount to a huge tax on saying anything about veganism, which is already quite labor intensive.

The discussions could still be worth it if it changed the arguer’s mind, or at least how they approached the next argument. But I don’t get the sense that’s what happens. Neither of us have changed our minds about anything, and I think they’re just as likely to start a similar fight the next week.

I do feel like vegan advocates are entitled to a certain amount of defensiveness. They encounter large amounts of concern trolling and outright hostility, and it makes sense that that colors their interactions. But that allowance covers one comment, maybe two, not three to eight (Wilkox, depending on which ones you count). 

For example, I’ve already quoted Wilkox’s very fuzzy comment (reminder: this was the top voted comment on that post on LW). That was followed by a 13+ comment exchange in which we eventually found he had little disagreement with any of my claims about vegan nutrition, only the importance of these facts. There really isn’t a way for me to screenshot this: the length and lack of specifics is the point.

You could say that the confusion stemmed from poor writing on my part, but:

I really appreciate the meta-honesty here, but since the exchange appears to have eaten hours of both of our time just to dig ourselves out of a hole, I can’t get that excited about it. 

Counter-Examples

I want to explicitly note that Sound and Fury isn’t the same as asking questions or not understanding a post. E.g. here Ben West identifies a confusion, asks me, and accepts both my answer and an explanation of why answering is difficult. 

Or in that same post, someone asked me to define nutritionally dense. It took a bit for me to answer and we still disagreed afterward, but it was a great question and the exchange felt highly truthseeking.  

Bad sources, badly handled 

Citations should be something of a bet: if the citation (the source itself or your summary of it) is high quality and supports your point, that should move people closer to your views. But if they identify serious relevant flaws, that should move both you and your audience closer to their point of view. Of course our beliefs are based on a lot of sources and it’s not feasible or desirable to really dig into all of them for every disagreement, so the bet may be very small. But if you’re not willing to defend a citation, you shouldn’t make it.

What I see in EA vegan advocacy is deeply terrible citations, thrown out casually, and abandoned when inconvenient. I’ve made something of a name for myself checking citations and otherwise investigating factual claims from works of nonfiction. Of everything I’ve investigated, I think citations from EA vegan advocacy have the worst effort:truth ratio. Not outright more falsehoods, I read some pretty woo stuff, but those can be dismissed quickly. Citations in vegan advocacy are often revealed to be terrible only after great effort.

And having put in that effort, my reward is usually either crickets or a new terrible citation. Sometimes we eventually drill down to “I just believe it”, which is honestly fine. We don’t live our lives to the standard of academic papers. But if that’s your reason, you need to state it from the beginning.

For example, take the top-voted comment on my Change My Mind post on EAF, from Rockwell (head of EA NYC), which includes five links. Only links 1 and 4 are problems, but I’ll describe them all in order to avoid confusion.

Of the five links: 

  1. Wilkox’s comment on the LW version of the post, where he eventually agrees that veganism requires testing and supplementation for many people (although most of that exchange hadn’t happened at the time of linking).
  2. cites my past work, if anything too generously.
  3. an estimation of nutrient deficiency in the US. I don’t love that this uses dietary intake as opposed to testing values (people’s needs vary so wildly), but at least it used EAR and not RDA. I’d want more from a post but for a comment this is fine.
  4. an absolutely atrocious article, which the comment further misrepresents. We don’t have time to get into all the flaws in that article, so I’ve put my first hour of criticisms in the appendix. What really gets me here is that I would have agreed the standard American diet sucks without asking for a source. I thought I had conceded that point preemptively, albeit without naming the Standard American Diet explicitly.

    And if she did feel a need to go the extra mile on rigor for this comment, it’s really not that hard to find decent-looking research about the harms of the Standard Shitty American Diet.  I found this paper on heart disease in 30 seconds, and most of that time was spent waiting for Elicit to load. I don’t know if it’s actually good, but it is not so obviously farcical as the cited paper.
  5. a description of the Standard American Diet.

Rockwell did not respond to my initial reply (that fixing vegan issues is easier than fixing SSAD), or my asking if that paper on the risks of meat eating was her favorite.

A much more time-consuming version of this happened with Adventist Health Study-2. Several people cited the AHS-2 as a pseudo-RCT that supported veganism (EDIT 2023-10-03: as superior to low meat omnivorism). These were one commenter on LessWrong and two on the EA Forum (one of whom had previously co-authored a blog post on the study and offered to answer questions). As I discussed here, that study is one of the best we have on nutrition and I’m very glad people brought it to my attention. But calling it a pseudo-RCT that supports veganism is deeply misleading. It is nowhere near randomized, and doesn’t cleanly support veganism even if you pretend it is.

(EDIT 2023-10-03: To be clear, the noise in the study overwhelms most differences in outcomes, even ignoring the self-sorting. My complaint is that the study was presented as strong evidence in one direction, when it’s both very weak and, if you treat it as strong, points in a different direction than reported. One commenter has said she only meant it as evidence that a vegan diet can work for some people, which I agree with, as stated in the post she was responding to. She disagrees with other parts of my summary as well, you can read more here)

It’s been three months, and none of the recommenders have responded to my analysis of the main AHS-2 paper, despite repeated requests. 

But finding that a paper is of lower quality and supports an entirely different conclusion is still not the worst-case scenario. The worst outcome is citation whack-a-mole.

A good example of this is from the post “Getting Cats Vegan is Possible and Imperative”, by Karthik Sekar. Karthik is a vegan author and data scientist at a plant-based meat company. 

[Note that I didn’t zero out my votes on this post’s comments, because it seemed less important for posts I didn’t write]

Karthik cites a lot of sources in that post. I picked what looked like his strongest source and investigated. It was terrible. It was a review article, so checking it required reading multiple studies. Of the cited studies, only 4 (with 39 subjects combined) used blood tests rather than owner reports, and more than half of those subjects were given vegetarian diets, not vegan (even though the table header says vegan). The only RCT didn’t include carnivorous diets.

Karthik agrees that the paper (which he cited) does not make its case “strong nor clear”, and cites another one (which AFAICT was not in the original post).

I dismiss the new citation on the basis of “motivated [study] population and minimal reporting”. 

He retreats to “[My] argument isn’t solely based on the survey data. It’s supported by fundamentals of biochemistry, metabolism, and digestion too […] Mammals such as cats will digest food matter into constituent molecules. Those molecules are chemically converted to other molecules–collectively, metabolism–, and energy and biomass (muscles, bones) are built from those precursors. For cats to truly be obligate carnivores, there would have to be something exceptional about meat: (A) There would have to be essential molecules–nutrients–that cannot be sourced anywhere else OR (B) the meat would have to be digestible in a way that’s not possible with plant matter. […So any plant-based food that passes AAFCO guidelines is nutritionally complete for cats. Ami does, for example.]

I point out that AAFCO doesn’t think meeting their guidelines is necessarily sufficient. I expected him to dismiss this as corporate ass-covering, and there’s a good chance he’d be right. But he didn’t.

Finally, he gets to his real position:

Which would have been a fine aspirational statement, but then why include so many papers he wasn’t willing to stand behind? 

On that same post someone else says that they think my concerns are a big deal, and Karthik probably can’t convince them without convincing me. Karthik responds:

So he’s conceded that his study didn’t show what he claimed. And he’s not really defending the AAFCO standards. But he’s really sure this will work anyway? And I’m the one who won’t update their beliefs. 

In a different comment the same someone else notes a weird incongruity in the paper. Karthik doesn’t respond.

This is the real risk of the bad sources: hours of deep intellectual work to discover that his argument boils down to a theoretical claim the author could have stated at the beginning. “I believe vegan cat food meets these numbers and meeting these numbers is sufficient” honestly isn’t a terrible argument, and I’d have respected it if stated plainly, especially since he explicitly calls for RCTs. Or I would, if he didn’t view those RCTs primarily as a means to prove what he already knows.

Counter-Examples

This commenter starts out pretty similarly to the others, with a very limited paper implied to have very big implications. But when I push back on the serious limitations of the study, he owns the issues and says he only ever meant the paper to support a more modest claim (while still believing the big claim he did make?). 

Taxing Facebook

When I joined EA Facebook in 2014, it was absolutely hopping. Every week I met new people and had great discussions with them where we both walked away smarter. I’m unclear when this trailed off because I was drifting away from EA at the same time, but let’s say the golden age was definitively over by 2018. Facebook was where I first noticed the pattern with EA vegan advocacy. 

Back in 2014 or 2015, Seattle EA watched some horrifying factory farming documentaries, and we were each considering how we should change our diets in light of that new information. We tried to continue the discussion on Facebook, only to have Jacy Reese Anthis (who was not a member of the local group and AFAIK had never been to Seattle) repeatedly insist that the only acceptable compromise was vegetarianism, humane meat doesn’t exist, and he hadn’t heard of health conditions benefiting from animal products so my doctor was wrong (or maybe I made it up?). 

I wish I could share screenshots on this, but the comments are gone (I think because the account has been deleted). I’ve included shots of the post and some of my comments (one of which refers to Jacy obstructing an earlier conversation, which I’d forgotten about). A third commenter has been cropped out, but I promise it doesn’t change the context.

(his answer was no, and that either I or my doctor were wrong because Jacy had never heard of any medical issue requiring consumption of animal products)

That conversation went okay. Seattle EA discussed suffering math on different vertebrates, someone brought up eating bugs, Brian Tomasik argued against eating bugs. It was everything an EA conversation should be.

But it never happened again.

Because this kind of thing happened every time animal products, diet, and health came up anywhere on EA Facebook. The commenters weren’t always as aggressive as Jacy, but they added a tremendous amount of cumulative friction. An omnivore would ask if lacto-vegetarianism worked, and the discussion would get derailed by animal advocates insisting you didn’t need milk.  Conversations about feeling hungry at EAG inevitably got a bunch of commenters saying they were fine, as if that was a rebuttal. 

Jeff Kaufman mirrors his FB posts onto his actual blog, which makes me feel more okay linking to it. In this post he makes a pretty clear point: that veganism can be any of cheaper, healthier, or tastier, but not all at once. He gets a lot of arguments. One person argues that no one thinks that, they just care about animals more.

One vegetarian says they’d like to go vegan but just can’t beat eggs for their mix of convenience, price, macronutrients, and micronutrients. She gets a lot of suggestions for substitutes, all of which flunk on at least one criterion.  Jacy Reese Anthis has a deleted comment, which from the reply looks like he asserted the existence of a substitute without listing one. 

After a year or two of this, people just stopped talking about anything except the vegan party line on public FB. We’d bitch to each other in private, but that was it. And that’s why, when a new generation of people joined EA and were exposed to the moral argument for veganism, there was no discussion of the practicalities visible to them. 

[TBF they probably wouldn’t have seen the conversations on FB anyway, I’m told that’s an old-person thing now. But the silence has extended itself]

Ignoring known falsehoods until they’re a PR problem

This is old news, but: for many years ACE said leafleting was great. Lots of people (including me and some friends, in 2015) criticized their numbers. This did not seem to have much effect; they’d agree their eval was imperfect and that they intended to put up a disclaimer, but it never happened.

In late 2016 a scathing anti-animal-EA piece was published on Medium, making many incendiary accusations, including that the leafleting numbers are made up. I wouldn’t call that post very epistemically virtuous; it was clearly hoping to inflame more than inform. But within a few weeks (months?), ACE put up a disavowal of the leafletting numbers.

I unfortunately can’t look up the original correction or when they put it up; archive.org behaves very weirdly around animalcharityevaluators.org. As I remember, the correction made the page less obviously false, but the disavowal was tepid and not a real fix. Here’s the 2022 version:

There are two options here: ACE was right about leafleting, and caved to public pressure rather than defend their beliefs. Or ACE was wrong about leafleting (and knew they were wrong, because they conceded in private when challenged) but continued to publicly endorse it.

Why I Care

I’ve thought vegan advocates were advocating falsehoods and stifling truthseeking for years. I never bothered to write it up, and generally avoided public discussion, because that sounded like a lot of work for absolutely no benefit. Obviously I wasn’t going to convince the advocates of anything, because finding the truth wasn’t their goal, and everyone else knew it so what did it matter? I was annoyed at them on principle for being wrong and controlling public discussion with unfair means, but there are so many wrong people in the world and I had a lot on my plate. 

I should have cared more about the principle.

I’ve talked before about the young Effective Altruists who converted to veganism with no thought for nutrition, some of whom suffered for it. They trusted effective altruism to have properly screened arguments and tell them what they needed to know. After my posts went up I started getting emails from older EAs who weren’t getting the proper testing either; I didn’t know because I didn’t talk to them in private, and we couldn’t discuss it in public. 

Which is the default story of not fighting for truth. You think the consequences are minimal, but you can’t know because the entire problem is that information is being suppressed. 

What do EA vegan advocates need to do?

  1. Acknowledge that nutrition is a multidimensional problem, that veganism is a constraint, and that adding constraints usually makes problems harder, especially if you’re already under several.
  2. Take responsibility for the nutritional education of vegans you create. This is not just an obligation, it’s an opportunity to improve the lives of people who are on your side. If you genuinely believe veganism can be nutritionally whole, then every person doing it poorly is suffering for your shared cause for no reason.
    1. You don’t even have to single out veganism. For purposes of this point I’ll accept “All diet switches have transition costs and veganism is no different, and the long term benefits more than compensate”. I don’t think your certainty is merited, and I’ll respect you more if you express uncertainty, but I understand that some situations require short messaging and am willing to allow this compression.
  3. Be epistemically cooperative, at least within EA spaces. I realize this is a big ask because in the larger world people are often epistemically uncooperative towards you. But obfuscation is a symmetric weapon and anger is not a reason to believe someone. Let’s deescalate this arms race and have both sides be more truthseeking.

    What does epistemic cooperation mean?
    1. Epistemic legibility. Make your claims and cruxes clear. E.g. “I don’t believe iron deficiency is a problem because everyone knows to take supplements and they always work” instead of “Why are you bothering about iron supplements?”
    2. Respond to the arguments people actually make, or say why you’re not. Don’t project arguments from one context onto someone else. I realize this one is a big ask, and you have my blessing to go meta and ask for work from the other party to make this viable, as long as it’s done explicitly.
    3. Stop categorically dismissing omnivores’ self-reports. I’m sure many people do overestimate the difficulties of veganism, but that doesn’t mean it’s easy or even possible for everyone.
      1. A scientific study, no matter how good, does not override a specific person telling you they felt hungry at a specific time. 
    4. If someone makes a good argument or disproves your source, update accordingly. 
  4. Police your own. If someone makes a false claim or bad citation while advocating veganism, point it out. If someone dismisses a detailed self-report of a failed attempt at veganism, push back. 

All Effective Altruists need to stand up for our epistemic commons

Effective Altruism is supposed to mean using evidence and reason to do the most good. A necessary component of that is accurate evidence. All the spreadsheets and moral math in the world mean nothing if the input is corrupted. There can be no consequentialist argument for lying to yourself or allies[1] because without truth you can’t make accurate utility calculations[2]. Garbage in, garbage out.

One of EA’s biggest assets is an environment that rewards truthseeking more than average. Without uniquely strong truthseeking, EA is just another movement of people who are sure they’re right. But high truthseeking environments are fragile, exploiting them is rewarding, and the costs of violating them are distributed and hard to measure. The only way EA’s truthseeking environment has a chance of persisting is if the community makes preserving it a priority. Even when it’s hard, even when it makes people unhappy, and even when the short term rewards of defection are high.

How do we do that? I wish I had a good answer. The problem is complicated and hard to reason about, and I don’t think we understand it enough to fix it. Thus far I’ve focused on vegan advocacy as a case study in destruction of the epistemic commons because its operations are relatively unsophisticated and easy to understand. Next post I’ll be giving more examples from across EA, but those will still have a bias towards legibility and visibility. The real challenge is creating an epistemic immune system that can fight threats we can’t even detect yet. 


Acknowledgments

Thanks to the many people I’ve discussed this with over the past few months. 

Thanks to Patrick LaVictoire and Aric Floyd for beta reading this post.

Thanks to Lightspeed Grants for funding this work. Note: a previous post referred to my work on nutrition and epistemics as unpaid after a certain point. That was true at the time and I had no reason to believe it wouldn’t stay true, but Lightspeed launched a week after that post and was an unusually good fit so I applied. I haven’t received a check yet but they have committed to the grant so I think it’s fair to count this as paid. 

Appendix

Terrible anti-meat article

  • The body of the paper is an argument between two people, but the abstract only includes the anti-animal-product side.
  • The “saturated fat” and “cholesterol” sections take as a given that any amount of these is bad, without quantifying or saying why. 
  • The “heme iron” section does explain why excess iron is bad, but ignores the risks of too little. Maybe he also forgot women exist? 
  • The lactose section does cite two papers, one of which does not support his claim, and the other of which is focused on mice who received transplants. It probably has a bunch of problems but it was too much work to check, and even if it doesn’t, it’s about a niche group of mice. 
  • The next section claims milk contains estrogen and thus raises circulating estrogen, which increases cancer risk.
    • It cites one paper supporting a link with breast cancer. That paper found a correlation with high fat but not low fat dairy, and the correlation was not statistically significant. 
    • It cites another paper saying dairy impairs sperm quality. This study was done at a fertility clinic, so will miss men with healthy sperm counts and is thus worthless. Ignoring that, it found a correlation of dairy fat with low sperm count, but low-fat dairy was associated with higher sperm count. Again, among men with impaired fertility. 
  • The “feces” section says that raw meat contains harmful bacteria (true), but nothing about how that translates to the risks of cooked meat.

That’s the first five subsections. The next set maybe looks better sourced, but I can’t imagine them being good enough to redeem the paper. I am less convinced of the link between excess meat and health issues than I was before I read it: surely, if the claim were easy to prove, the paper would have better supporting evidence, or the EA Forum commenter would have picked a better source.

[Note: I didn’t bother reading the pro-meat section. It may also be terrible, but this does not affect my position.] 

  1. “Are you saying I can’t lie to Nazis about the contents of my attic?” No more so than you’re banned from murdering them or slashing their tires. Like, you should probably think hard about how it fits into your strategy, but I assumed “yourself or allies” excluded Nazis for everyone reading this. 

    “Doesn’t that make the definition of enemies extremely morally load bearing?” It reflects that fact, yes. 

    “So vegan advocates can morally lie as long as it’s to people they consider enemies?”  I think this is, at a minimum, defensible and morally consistent. In some cases I think it’s admirable, such as lying to get access to a slaughterhouse in order to take horrifying videos. It’s a declaration of war, but I assume vegan advocates are proud to declare the meat industry their enemy. ↩︎
  2. I’ll allow that it’s conceptually possible to make deontological or virtue ethics arguments for lying to yourself or allies, but it’s difficult, and the arguments are narrow and/or wrong. Accurate beliefs turn out to be critical to getting good outcomes in all kinds of situations.  ↩︎

Edits

You will notice a few edits in this post, which are marked with the edit date. The original text is struck through.

When I initially published this post on 2023-09-28, several images failed to copy over from the google doc to the shitty WordPress editor. These were fixed within a few hours.

I tried to link to sources for every screenshot (except the Facebook ones). On 2023-10-05 I realized that a lot of the links were missing (but not all, which is weird) and manually added them back in. In the process I found two screenshots that never had links, even in the google doc, and fixed those. Halfway through this process the already shitty editor flat out refused to add links to any more images. This post is apparently already too big for WordPress to handle, so every attempted action took at least 60 seconds, and I was constantly afraid I was going to make things worse, so for some images the link is in the surrounding text. 

If anyone knows of a blogging website that will gracefully accept cut and paste from google docs, please let me know. That is literally all it takes for an editor to be a success in my book, and last time I checked I could not find a single site that managed it.

Seeing Like A State, Flashlights, and Giving This Year

Note (7/15/19): I’m no longer sure about Tostan as an organization. I would like to give more details on my current thinking, but they are hard to articulate and it seemed better to put up this disclaimer now than wait for my thinking to solidify.

Overview: The central premise of Seeing Like A State (James C. Scott, 1999) is that the larger an organization is, the less it can tolerate variation between parts of itself.  The subparts must become legible.  This has an extraordinary number of implications for modern life, but I would like to discuss the application to charity in particular.  I believe Tostan is pushing forward the art and science of helping people with problems that are not amenable to traditional RCTs, and recommend donating to them.  But before you do that, I recommend picking a day and a time to consider all of your options.

Legibility is easier to explain with examples, so let’s start with a few: 

  • 100 small farmers can learn their land intimately and optimize their planting and harvest down to the day, using crop varieties that do especially well for their soil or replenish nutrients it’s particularly poor in.  Large agribusinesses plant the same thing over thousands of acres on a tight schedule, making up the difference in chemical fertilizer and lowered expectations.
  • The endless mess of our judicial system, where mandatory sentencing ignores the facts of the case and ruins people’s lives, but judicial discretion seems to disproportionately ruin poor + minority lives.  
  • Nation-states want people to have last names and fixed addresses for ease of taxation, and will sometimes force the issue.
  • Money.  This is the whole point of money.

Legibility means it’s not enough to be good; you must be reliably, predictably good.*

I want to be clear that legibility and interchangeability aren’t bad.  For example, standardized industrial production of medications allows the FDA to evaluate studies more cleanly, and to guarantee the dosage and purity of your medication.  On the other hand, my pain medication comes in two doses, “still hurts some” and “side effects unacceptable”, and splitting the pills is dangerous.  

Let’s look at how this applies to altruism.  GiveWell’s claim to fame is demanding extremely rigorous evidence to make highly quantitative estimates of effectiveness. I believe they have done good work on this, if only because it is so easy to do harm that simply checking you’re having a positive effect is an improvement.  But rigor will tend to push you towards legibility.   

  • The more legible something is, the easier it is to prove its effectiveness.  Antibiotics are easy.  Long term dietary interventions are hard.
  • Legible things scale better/scaling imposes legibility.  There’s a long history of interventions with stunning pilots that fail to replicate.  This has a lot of possible explanations:
    • Survivorship bias
    • People who do pilots are a different set than people who do follow up implementations, and have a verve that isn’t captured by any procedure you can write down.
    • A brand new thing is more likely to be meeting an underserved need than a follow-up.  Especially when most evidence is in the form of randomized control trials, where we implicitly treat the control group as the “do nothing” group.  There are moral and practical limits to our ability to enforce that, and the end result is that members of the “control group” for one study may be receiving many different interventions from other NGOs.  This is extremely useful if you are answering questions like “Would this particular Indian village benefit from another microfinance institution?”, but of uncertain value for “Would this Tanzanian village that has no microfinance benefit from a microfinance institution?”
    • For more on this see Chris Blattman’s post on evaluating ideas, not programs, and James Heckman on Econtalk describing the limits of RCTs.

GiveWell is not necessarily doing the wrong thing here.  When you have $8b+ to distribute and staff time is your most limited resource, focusing on the things that do the most good per unit staff time is correct.

Meanwhile, I have a friend who volunteers at a charity that helps homeless families reestablish themselves in the rental market. This organization is not going to scale, at all. Families are identified individually, and while there are guidelines for choosing whom to assist, there’s a lot that’s not captured, and a worse social worker would produce worse results.  Their fundraising is really not going to scale either; it’s incredibly labor intensive and done mostly within their synagogue, meaning it is drawing on a pool of communal good will with very limited room for expansion.

Theoretically, my friend might make a bigger difference stuffing envelopes for AMF than they do at this homelessness charity.  But they’re not going to stuff envelopes for AMF because that would be miserable.  They could work more at their job and donate the money, but even assuming a way to translate marginal time into more money, work is not necessarily overflowing with opportunities to express their special talents either.

Charities do not exist to give volunteers opportunities to sparkle.  But the human desire to autonomously do something one is good at is a resource that should not be wasted. It can turn uncompetitive uses of money into competitive ones.  It’s also a breeding ground for innovation.  GiveDirectly has done fantastically with very deliberate and efficient RCTs, but there are other kinds of interventions that are not as amenable to them.

One example is Médecins Sans Frontières.  Leaving half of all Ebola outbreaks untreated in order to gather better data is not going to happen.  Even if it were, MSF is not practicing a single intervention; they’re making hundreds of choices every day.  85% of American clinical trials fail to retain “enough” patients to produce a meaningful result, and those are single interventions on a group that isn’t experiencing a simultaneous cholera epidemic and civil war.  MSF is simply not going to get as clean data as GiveDirectly.

This is more speculative, but I feel like the most legible interventions are using something up.  Charity Science: Health is producing very promising results with  SMS vaccine reminders in India, but that’s because the system already had some built in capacity to use that intervention (a ~working telephone infrastructure, a populace with phones, government health infrastructure, medical research that identified a vaccine, vaccine manufacture infrastructure… are you noticing a theme here?).  This is good.  This is extremely good.  Having that capacity and not using it was killing people.  But I don’t think that CS’s intervention style will create much new capacity.  For that you need inefficient, messy, special snowflake organizations.  This is weird because I also believe in iterative improvement much more than I believe in transformation and it seems like those should be opposed strategies, but on a gut level they feel aligned to me.

Coming at this from another angle: The printing press took centuries to show a macroeconomic impact of any kind (not just print or information related).  The mechanical loom had a strong and immediate impact on the economy, because the economy was already set up to take advantage of it.  And yet the printing press was the more important invention, because it eventually enabled so much more.  

I know of one charity that I am confident is building capacity: Tostan.  Tostan provides a three year alternative educational series to rural villages in West Africa.  The first 8 months are almost entirely about helping people articulate their dreams.  What do they want for their children? For their community?  Then there is some health stuff, and then two years teaching participants the skills they need to run a business (literacy, numeracy, cell phone usage, etc), while helping them think through what is in line with their values.

Until recently Tostan had very little formal data collection.  So why am I so confident they’re doing good work?  Well, for one, the Gates Foundation gave them a grant to measure the work and initial results are very promising, but before that there were other signs.

First, villages ask Tostan to come to them, and there is a waitlist.  Villages do receive seed money to start a business in their second year, but 6-9 hours of class/week + the cost of hosting their facilitator is kind of a long game. 

Second, Tostan has had a few very large successes in areas with almost no competitors.  In particular: female circumcision.  Tostan originally didn’t plan on touching the concept, because the history of western intervention in the subject is… poor.  It’s toxic and it erodes relationships between beneficiaries and the NGOs trying to help them, because people do not like being told that their cherished cultural tradition, which is necessary for their daughters to be accepted by the community and get good things in their life, is mutilating them, and western NGOs have a hard time discussing genital cutting as anything else.  But Tostan taught health, including things that touched on culture.  E.g. “If your baby’s head looks like this she is dehydrated and needs water with sugar and salt.  Even if she has diarrhea.  I know it seems weird to pump water into a baby that can’t keep it in, but this is what works.  Witch doctors are very good at what they do, but please save them for witch doctor problems.”

And one day, someone asked about genital cutting.

[One of Tostan’s innovations is using the neutral term “female genital cutting”, as opposed to circumcision, which many people find to be minimizing, and mutilation, which others find inflammatory]

It’s obvious to us that cutting off a girl’s labia or clitoris with an unsterilized blade, and (depending on the culture) sewing them shut is going to have negative health consequences.  But if everyone in your village does it, you don’t have anything to compare it to.  Industrial Europeans accepted childbed fever as just a thing that happened despite having much more available counterevidence.*  So when Tostan answered their questions honestly- that it could lead to death or permanent pain at the time, and greatly increases the chances of further damage during childbirth- it was news.

The mothers who cut their daughters were not bad people.   If you didn’t know the costs, cutting was a loving decision.  But once these women knew, they couldn’t keep doing it, and they organized a press conference to say so.  To be clear, this was aided by Tostan but driven by the women themselves.

The press conference went… poorly.  A village deciding not to cut was better than a single mother deciding not to cut, but it wasn’t enough.  Intermarriage between villages was common and the village as a whole suffered reprisal.  In despair Tostan’s founder, Molly Melching, talked to Demba Diawara, a respected imam.  He explained the cultural issues to her, and that the only way to end cutting was for many villages to end it at the same time.  So Tostan began helping women to organize mass refusals, and it worked.  So far almost 8000 villages in West Africa have declared an end to genital cutting, of which ~2000 directly participated in Tostan classes (77% of cutting-practicing villages that took part in Tostan), and ~6000 are villages adopted by the first set.

Coincidentally, at the same time Melching was testing this, Gerry Mackie, a graduate student, was researching footbinding in China and discovered it ended the exact same way: coordinated mass pledges to stop.  

This is not conclusive.  Maybe it’s luck that Melching’s method consistently ended female genital cutting where everyone else had failed, in a method that subsequently received historical validation.  But I believe in following lucky generals.

FGC is not the only issue Tostan believes it improves.  It believes it facilitates systemic change across the board, leading to better treatment of children, more independence for women, cleaner villages, and more economic prosperity.  But it doesn’t do everything in every village, because each village’s needs are different, and because what they provide is responsive to what the community asks for.  So now you’re measuring 100 different axes, some of which take a long time to generate statistically significant data on (e.g. child marriage), some of which are intrinsically difficult to measure (women’s independence), and you can’t say ahead of time which axes you expect to change in a particular sample.  This is hard to measure, and not because Tostan is bad at measuring.  

That’s not to say they aren’t trying.  Thanks to a grant from the Gates Foundation, Tostan has begun before and after surveys to measure its effect.  In addition to the difficulties I mentioned above, it faces technical challenges, language issues, and the difficulty of getting honest answers about sensitive questions.  

There is a fallacy called the streetlight fallacy; it refers to looking for your keys under the lamppost, where there is light, rather than in the dark alley where you lost them.  The altruism equivalent is doing things that are legible, instead of following the need.  This is not categorically wrong- when it’s easy to do harm, it is correct to stay in areas where you’ll at least know if it happened.  But staying in the streetlight forever means leaving billions of people to suffer.

I believe Tostan is inventing flashlights so we can hunt for our keys in the woods.  It is hard, and it is harder to prove its effectiveness.  But ultimately it leads to the best outcomes for the world.  I am urging people to donate to Tostan for several reasons:

  1. To support a program that is doing a lot of good at the object level.
  2. To support the development of flashlight technology that will help others do more good.
  3. To demonstrate to the warmest, fuzziest, most culturally respecting of charities that incorporating hard data will get them more support, not less.

The traditional thing to do right now to encourage you to donate would be a matching pledge.  But more than I want money moved to Tostan, I want a culture of thoughtful giving, and charity-specific matching erodes that*.  Probably its best feature is that it can overcome inertia, but it does that regardless of charity quality.  So instead, let me encourage you to put time on your calendar to decide how much and where you will donate.  Seriously, right now.  If you can’t choose a time, choose a time to choose a time.  For those with company matching and tax concerns, this is noticeably more useful if it happens before Christmas.

If you are feeling extra motivated consider hosting a donation decision day or giving game.  If you would like to publicize your event, contact me at elizabeth @ this domain and I will post it here and to any contacts I have in your city.  

I also encourage you to write up your thought process regardless of the outcome, including not donating, and including thought patterns that are very different from my own or from established orthodoxy.  For some examples, see my posts in 2014 and 2015.  I will write up a separate post with every one of these someone sends me, assuming I’m sent any at all, which is not guaranteed.

The other prosocial purpose of matching challenges is to demonstrate how important you think an organization is by spending your own money.  I am going to skip the middle man and announce my contribution now: $19,750, plus $19,750 in company matching*, for a total of $39,500.  This is everything I plan on donating between now and the end of 2017.

*I have a theory that much of the misery of modern jobs is from a need to make your work legible to others, which by necessity means doing things that are expected of the position, even if you’re bad at or dislike them, and shaving off the bits that you are especially good at and other people aren’t.  You may not even be allowed to do the things you are best at, and if you are the rewards are muted because no one is in a position to notice and reward the success.  This is pretty much a recipe for making yourself miserable.  It made me miserable at a large programming house famous for treating its employees wonderfully.  I think that company’s reputation is overblown as an absolute measure, but is probably still fair on a relative one, so I can only imagine how awful working in fast food is.  This does not actually have a lot to do with the point of this essay and will probably be cut in the version that goes on Tostan’s blog, but it was too interesting not to include.

*Postpartum infections were common in births attended by a physician because washing your hands between an autopsy and a birth was considered peasant superstition.  Midwives, who followed the superstition, had a lower death rate.  This discovery languished in part because the doctor who discovered it was an asshole and no one wanted to listen to him, and that’s why I don’t allow myself to dismiss ideas from people just because I don’t like them.    

*Charity-neutral matching, like that done by many employers, mostly doesn’t, although I worry it does anchor people’s charity budgets.

*If you are wondering why the number is weird: I donated $250 to a giving game earlier this year.

Relationship disclosures:  

Tostan’s Director of Philanthropy, Suzanne Bowles, has provided assistance on this post, in the form of answering questions about Tostan and reviewing this document (although she did not have veto power).  Suzanne and I have a friendly relationship and she has made some professional introductions for me.

I have several close friends who work or have worked for GiveWell, some of whom provided comments on this essay.  

Thanks to Justis Mills for copy editing and Ben Hoffman for feedback on earlier drafts.

Review: King Leopold’s Ghost (Adam Hochschild)

King Leopold’s Ghost has the most compelling opening I have ever read:

 

The beginnings of this story lie far back in time, and its reverberations still sound today. But for me a central incandescent moment,  one that illuminates long decades before and after, is a young man’s flash of moral recognition.

The year is 1897 or 1898.  Try to imagine him, briskly stepping off a cross-Channel steamer, a forceful, burly man in his mid-twenties, with a handlebar mustache.  He is confident and well spoken, but his British speech is without the polish of Eton or Oxford.  He is well dressed, but the clothes are not from Bond Street.  With an ailing mother and a wife and growing family to support, he is not the sort of person likely to get caught up in an idealistic cause.  His ideas are thoroughly conventional.  He looks- and is- every inch the sober, respectable businessman.

Edmund Dene Morel is a trusted employee of a Liverpool shipping line.  A subsidiary of the company has the monopoly on all transport of cargo to and from the Congo Free State, as it is then called, the huge territory in central Africa that is the world’s only colony claimed by one man.  That man is King Leopold II of Belgium, a ruler much admired throughout Europe as a “philanthropic” monarch.  He has welcomed Christian missionaries to his new colony; his troops, it is said, have fought and defeated local slave-traders who preyed on the population; and for more than a decade European newspapers have praised him for investing his personal fortune in public works to benefit the Africans.

Because Morel speaks fluent French, his company sends him to Belgium every few weeks to supervise the loading and unloading of ships on the Congo run.  Although the officials he works with have been handling this shipping traffic for years without a second thought, Morel begins to notice things that unsettle him.  At the docks of the big port of Antwerp he sees his company’s ships arriving filled to the hatch covers with valuable cargoes of rubber and ivory.  But as they cast off their hawsers to steam back to the Congo, while military bands play on the pier and eager young men in uniform line the ships’ rails, what they carry is mostly army officers, firearms, and ammunition.  There is no trade going on here.  Little or nothing is being exchanged for the rubber and ivory.  As Morel watches these riches streaming to Europe with almost no goods being sent to Africa to pay for them, he realizes there can be only one explanation for their source: slave labor.

Brought face to face with evil, Morel does not turn away.  Instead, what he sees determines the course of his life and course of an extraordinary movement, the first great international human rights movement of the twentieth century.  Seldom has one human being- impassioned, eloquent, blessed with brilliant organizing skills and nearly superhuman energy- managed almost single-handedly to put one subject on the world’s front pages for more than a decade.  Only a few years after standing on the docks of Antwerp, Edmund Morel would be at the White House, insisting to President Theodore Roosevelt that the United States had a special responsibility to do something about the Congo.  He would organize delegations to the British Foreign Office.  He would mobilize everyone from Booker T. Washington to Anatole France to the Archbishop of Canterbury to join his cause.  More than two hundred mass meetings to protest slave labor in the Congo would be held across the United States.  A larger number of gatherings in England- nearly three hundred a year at the crusade’s peak- would draw as many as five thousand people at a time.  In London, one letter of protest to the Times on the Congo would be signed by eleven peers, nineteen bishops, seventy-six members of Parliament, the presidents of seven Chambers of Commerce, thirteen editors of major newspapers, and every lord mayor in the country.  Speeches about the horrors of King Leopold’s Congo would be given as far away as Australia.  In Italy, two men would fight a duel over the issue.  British Foreign Secretary Sir Edward Grey, a man not given to overstatement, would declare that “no external question for at least thirty years has moved the country so strongly and so vehemently.”

This is the story of that movement, of the savage crime that was its target, of the long period of exploration and conquest that preceded it, and of the way the world has forgotten one of the great mass killings of recent history.

This kind of thing is my heroism porn. Most movies are about people that set out to be heroes; they look at the costs and benefits and decide it is a trade off worth making.  That is great, and I don’t want to diminish it.  But they can build their lives around it, and that does reduce the costs.  What I find most affecting is people that were living ordinary lives who encounter something they cannot let stand, and don’t.  It was particularly touching in the case of Morel, who didn’t have to know what he knew.  Lots of people were on that dock and didn’t know or didn’t care.  He figured it out and switched tracks in his life when it would have been easy to pretend everything was okay. Everyone I talked to for the last two weeks heard how beautiful I found that.   I used the story to talk myself into doing things that were a little bit hard because they were so much less hard than what Morel did.

Here’s the story I told:  Under a humanitarian guise that fooled most Europeans at the time, Leopold created a form of slavery even worse than that of North America or even the Caribbean.  Men were worked to death attempting to free their wives and children from slavery.  Against that, Edmund Morel and an increasing number of allies publicize the atrocities until Leopold backs down.  

This would be a really good story, and it’s what I thought was happening for most of the book, even while my knowledge that the modern Congo isn’t all sunshine and roses gnawed at me.  

In the last hour, it gets more complicated.  Yes, slavery went away and the rubber harvest (driver of much of the atrocities) declined.  But… the rubber decline could have been caused entirely by cultivated rubber farms coming online.  And while Belgium may have stopped anything called slavery, it got about the same amount of financial value for about the same amount of violence out of its taxation system.  I realize the phrases “taxation is slavery” and “taxation is theft” are fairly loaded, but I think everyone can agree that people coming in from elsewhere to demand taxes and provide nothing of value to their subjects is Bad.  

And while there are statistics that make the Congo look particularly bad, they’re mostly an artifact of size.  Per capita the other European powers in Africa were just as bad, and at the same time England (Morel’s home) was exterminating aborigines in Australia and America was going scorched earth on the Philippines (plus its usual policy towards American Indians).  

I could forgive Morel for advocating for a gentler form of colonialism.  People can only be so much better than their time, and a more correct person possibly couldn’t have accomplished as much because no one would listen to them.  But my admiration for this man was very tied to the fact that he saw something he didn’t have to see, and chose to pursue it.  That admiration doesn’t survive if he was blinding himself to similar atrocities closer to home- especially when a great deal of African colonization, including Leopold’s rape of the Congo, was done under the guise of protecting Africans from Arab slave traders.

We don’t know that Morel accomplished nothing.  He went on to lead the pacifist movement against WW1, which was probably the right side, but it’s even harder to argue he changed history for the better there.  But we don’t know that he accomplished something either.

This is a disappointing ending for a man for whom I was well into planning a Petrov Day-style holiday.  He did better than average at seeing the horrors in front of him, but still not the ones that were done by his in-group.  It’s debatable if he accomplished anything.  He still sacrificed a lot, but I’m not prepared to valorize that alone.  It’s not even a good effective altruist cautionary tale, because even with 100 years of hindsight it’s not clear what he could have done better.  Even focusing on Leopold’s horrors instead of England’s might have been the correct decision, since it let him gather stronger allies.

The book is beautifully written and read.  For whatever reason I was sadder and less callous listening to this than I am to most atrocities- maybe it was the writing, maybe because it was entirely new to me and I hadn’t had time to develop a shell.  And as heartbroken as I was to have my new hero brought down, I really admire the book for being willing to include that complexity when it could have gotten away with ignoring it.  So I can’t recommend it highly enough, assuming you want to be very sad.

Parenthetical Reference ends three or four unending debates in and around EA in a single stroke.

 

…it’s the difference between “tzedakah”, which is a mitzvah/dedication I have to making the world better and where EA analysis is really important, and “generosity”, which is about being kind to the people around me.

Generosity is when my friend’s family has a health crisis and I come over with $100 worth of takeout and frozen food. It’s also generosity when I support my local arts and/or religious communities, and when I go out of my way to financially support free media. Generosity is good and we should feel good about it. It’s one of the ways we live our values. It can be personal and subjective and can be about feelings as much as ROI. In fact, it is inherently subjective, and the right specific generous acts should be different for different people, because they are distributed like tastes, interests, friendships, communities, and other personal attachments.

Tzedakah is deciding to donate 10% of my income to saving lives in the developing world, and doing my research to make sure it’s doing as much good as possible. Tzedakah is saying BED NETS BED NETS BED NETS. Tzedakah is a sense of urgency to make the world better for people I will never meet and who will never know or care about me personally. Tzedakah isn’t a corner I want to cut to buy something nice for myself.

“What about the arts?” Sure, generosity.  But don’t cut your bednet budget for it.
“Donating based on numbers ruins the make-the-donor-a-better-person function of charity.” It arguably taints generosity but not tzedakah.
“I don’t need to feel guilty not donating to help my friend’s cousin coming back from Iraq because it’s more effective to…” No, you don’t need to feel guilty because when and how to be generous is a personal choice.  Stop arguing it’s objectively wrong.

I’m so glad we could clear this up.

Spreading the Wealth Around

Conventional effective altruism wisdom is that however much money you are donating, you should give 100% to the best charity, because it is the best.  I think that is one perfectly good choice among several.  Until recently my explanation was “the estimated difference in effectiveness between these charities is many orders of magnitude smaller than the confidence interval of the estimates, so they are functionally the same, so I might as well do what makes me happy.”  Scope insensitivity makes donating $n to two charities twice as satisfying as $2n to one charity.  I would have given to several more charities this year, except my job matches donations by hand and the admin has other shit to do.  But recently I realized it is more complicated than that.

Synergies

I spend a lot of time reminding people that estimates of genetic influence and heritability are only valid for the environment in which they are measured.  The same is true for charitable interventions.  The effects of any one intervention depend on the environment, which depends in part on other interventions.

Free condoms and instruction on their use don’t appear to make a big difference in teen pregnancy- but that study measured a single free condom program that existed in an environment with lots of existing programs.  Anyone who wanted condoms already had them.  That doesn’t mean such a program wouldn’t be useful in a population with no knowledge of condoms.

Interventions are synergistic.  Tostan’s educational programs won’t do much for anyone who died of malaria, but I’m also not excited about saving infants from death only to spend their entire lives in misery.  We could run around funding whichever need is most dire at any given moment, but organizations are costly to set up and a lot is lost when they disband.  Keeping the operational capital of the second and third best things live will let us react faster when we hit diminishing returns on the first.

And that’s when we know what to do.  Tostan and even GiveDirectly are very much works in progress, and because Tostan is so complex and culturally specific it’s slow to scale.  GiveDirectly can scale much faster, but too fast and corruption will become an enormous problem.  If we want those solutions ready to go when disease and nutrition are solved, we have to work on them now.  And that’s before taking into account the synergies.

Predictability

100 small donors each dividing their donations among 5 charities is better for the charities than those same 100 donors splitting into five groups of 20, each group giving 100% to a single charity, because it’s more stable.  If a minuscule change in numbers causes half your donors to abandon your cause (and maybe come back two years later), your funding will swing wildly.  This is terrible for operational capital.
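The stability point can be sketched with a toy simulation. Everything here is invented for illustration (the donor count, gift size, and churn probability are made-up parameters, not data from any charity): each charity receives the same expected income under both regimes, but only the splitting regime delivers it steadily.

```python
import random
import statistics

def yearly_income(split, n_donors=100, gift=100, n_charities=5,
                  years=50, churn=0.5, seed=0):
    """Toy model: return charity 0's income in each year under one regime."""
    rng = random.Random(seed)
    # start with donors spread evenly: 20 favorites per charity
    favorites = [i % n_charities for i in range(n_donors)]
    incomes = []
    for _ in range(years):
        if split:
            # every donor divides the gift evenly, so income never moves
            incomes.append(n_donors * gift / n_charities)
        else:
            # each donor reconsiders their single favorite with probability `churn`
            for i in range(n_donors):
                if rng.random() < churn:
                    favorites[i] = rng.randrange(n_charities)
            incomes.append(sum(gift for f in favorites if f == 0))
    return incomes

print(statistics.stdev(yearly_income(split=True)))   # 0.0: perfectly stable funding
print(statistics.stdev(yearly_income(split=False)))  # large year-to-year swings
```

Both regimes send the charity an average of $2,000 a year, but in the favorites regime the charity's budget jumps around with every change of fashion, which is exactly the operational-capital problem described above.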

Risk of Neglect

And that’s assuming favorites are properly distributed.  If there’s an organization or cause that’s everyone’s second choice it should probably get some money, but under a favorites only system it never will.  My source at Effective Altruism Outreach says that’s exactly what the recent EA survey showed is happening to metacharities*; everyone has their favorite real cause, and then likes metacharities.  I’ve increased my estimate of metacharities’ value recently**, so I now think they’re underfunded, so this seems bad.

If you’re a very large donor none of this applies to you because you’re in a position to change charities’ behavior rather than just react to it.  If you’re a small donor who’s happiest donating to the charity with the single best numbers, keep going, I don’t think you’re doing any harm at the scale you currently operate on.  But if you’re like me or Brian  and will have more fun spreading your donations around, I think you’re doing a good thing and shouldn’t change.

 

*The publicly accessible survey summary doesn’t give numbers for individuals’ second choices.   This is still a good example if it’s not literally true so I haven’t bothered looking up the numbers, although I should do so before I actually donate to metacharities.

**I’ve also increased the number of friends I have working at metacharities.  This means I hear about the really cool stuff they do that can’t be publicized, but also that I’m more likely to be suffering from a halo effect or cognitive dissonance or simply a desire for my housemates to have more money because hiring a cleaner would make everyone’s lives easier.

2015 donations

GiveWell: $6,000
I needed to use up company matching quickly before quitting and GiveWell is never a bad choice.
Raising for Effective Giving: $5,000
I am generally fairly skeptical of fundraising charities, especially fundraising charities targeting EAs.  Their mechanisms for evaluating effectiveness seem pretty weak (e.g. Giving What We Can counting donations pledged to be made in 40 years by people who have never donated once).*  That doesn’t mean they’re ineffective or can’t become effective, but I didn’t sign up for this movement to throw money and hope it was at the right place.

REG is different. First, they started with an extremely specific mission: convincing poker players to pledge small portions of their winnings to effective charities. This is a group that was donating minimally before, and is much more susceptible to quantitative arguments than the general population.  They count only money already donated, not pledges. And their plans for expansion seem similarly crafted for very specific niches.  (More or less the same model for fantasy football, microtransactions in video games).  The money raised goes to charities I like.

Tostan: $5,000
Continuing education courses in small African villages teaching things like literacy, numeracy, cell phone usage, basic medical info, human rights… a lot of different stuff, basically. That makes them hard to measure, and they turned down GiveWell when GiveWell tried to evaluate them. But! They did share their data with someone at the Gates Foundation, who found them extremely effective, and I trust his judgement. My focus isn’t “who is producing the highest numbers right now”, it’s “who has the best system for improving themselves and is aimed at the right thing.” Tostan’s classes grew out of requests from the community, so in some ways this is the continuing ed version of GiveDirectly.

That said, I’m working on getting numbers from them. There are a few different charities I’ve given money to that called me to thank me and ask for my input on their long term plans. My response is usually “but I gave you the money on the assumption you were better at curing poverty than me”, but this year I’m hoping to leverage it into getting them to talk to one of the evaluator charities.** It is not my only plan for accomplishing this, but it seemed worth a shot. And I’d like to offer that as an argument for donating to charities that do things uniquely right while falling down in other ways: once they’re paying attention to you, you can nudge them to do better.

 

*This was uncomfortable to write given that I have friends that work at fundraising charities, but I think they will understand that that is why I had to publish it.

**Specifically Giving What We Can, whose wildly optimistic numbers could theoretically be part of the puzzle that gets Tostan to publish more public data.  I’m also trying to get Treehouse to talk to Impact Matters, on the strength of last year’s donation.

Talking about controversial things (discussion version)

There is a particular failure pattern I’ve seen in many different areas.  Society as a whole holds view A on subject X.  A small sub-group holds opposing view B.  Members of the sub-group have generally put more thought into subject X and they have definitely spent more time arguing about it than the average person on the street.  Many A-believers have never heard of View B or the arguments for it before.

A relative stranger shows up at a gathering of the subgroup and begins advocating view A, or just questioning view B.  The sub-group assumes this is a standard person who has never heard their arguments and launches into the standard spiel.  The B-believers don’t listen, the stranger gets frustrated and leaves the subgroup, since no one is going to listen to their ideas.

One possibility is that the stranger is an average member of society who genuinely believes you’ve gone your entire life without hearing the common belief and if they just say it slowly and loud enough you’ll come around.*  Another possibility is they understand view B very well and have some well considered objections to it that happen to sound like view A (or don’t sound that similar but the B-believer isn’t bothering to listen closely enough to find out).  They feel blown off and disrespected and leave.

In the former scenario, the worst case is that you lose someone you could have recruited.  Oh well.  If the latter, you lose valuable information about where you might be wrong.  If you always react to challenges this way you become everything you hate.

For example: pop evolutionary psychology is awful and people are right to ignore it.  I spent years studying animal behavior and it gave me insights that fall under the broad category of evopsych, except that they are correct.  It is extremely annoying to have those dismissed with “no, but see, society influences human behavior.”

Note that B doesn’t have to be right for this scenario to play out.  Your average creationist or anti-vaxxer has thought more about the topic and spent more time arguing it than almost anyone.  If an ignorant observer watched a debate and chose a winner based on fluidity and citations they would probably choose the anti-vaxxer.  They are still wrong.

Or take effective altruism.  I don’t mind losing people who think measuring human suffering with numbers is inherently wrong.  But if we ignore that entire sphere we won’t hear the people who find the specific way we are talking dehumanizing, and have suggestions on how to fix that while still using numbers.  A recent facebook post made me realize that the clinical tone of most EA discussions plus a willingness to entertain all questions (even if the conclusion is abhorrent) is going to make it really, really hard for anyone with first hand experience of problems to participate.  First hand experience means Feelings means the clinical tone requires a ton of emotional energy even if they’re 100% on board intellectually.  This is going to cut us off from a lot of information.

There’s some low hanging fruit to improve this (let people talk before telling them they are wrong), but the next level requires listening to a lot of people be un-insightfully wrong, which no one is good at and EAs in particular have a low tolerance for.

Sydney and I are spitballing ideas to work on this locally.  I think it’s an important problem at the movement-level, but do not have time to take it on as a project.**  If you have thoughts please share.

*Some examples: “If you ate less and exercised more you’d lose weight.”  “If open offices bother you why don’t you use headphones?”, “but vaccines save lives.”, “God will save you…”/”God isn’t real”, depending on exactly where you are.

**Unexpected benefit of doing direct work: 0 pangs about turning down other projects.  I can’t do everything and this is not my comparative advantage.

The Giving What We Can Pledge

On one hand, I think the Giving What We Can pledge (10% of your income to the most effective charities) is an excellent idea and I’d be thrilled if me plugging it led to an additional pledge.  On the other hand, I haven’t signed it and don’t plan on doing so.  This makes me feel kind of awkward suggesting other people do so.

I have many aborted paragraphs written about why I think the pledge is a good idea but not for me, but in retrospect they’re mostly fluff.  It boils down to: I have a strong need to create my own number.

Scott Alexander talks about how satisfying having A Number that he can reach and then feel done is to him.  This seems extremely valuable, and 10% seems like a reasonable number.  But it doesn’t do that for me.  If I had a billion dollars I’d need to give more, and if I accepted a job that paid $20,000/year but saved the bottom billion, I would hang on to that $2,000, thank you very much.  Not just because I earned it, but because spending the money on myself would actually do more good for the world, by freeing up my time and energy.

In order to feel done, I need to exhaustively examine my income, spending, and choices. That means the pledge can’t possibly save me work or increase my emotional satisfaction.  But if I signed it and circumstances arose such that me giving less than 10% was the right thing to do, I would feel awful about breaking it.  To the point I might subconsciously prevent myself from even considering the option.*

The other common reasons I hear for signing are to push better behavior in future!you, to create a community of giving, and as a useful conversation starter.  These are all excellent goals.  In my particular case, future!Elizabeth has been so consistently smarter and kinder than past!Elizabeth that I think she will make better decisions than me and I don’t want to constrain her.  Given that, I think I’ll do a better job contributing to a culture of giving by fostering a culture of deep thought around giving (which not everyone will or should participate in).***

So basically, signing the Giving What We Can pledge is incompatible with my version of scrupulosity.  But it might be extremely compatible with yours.  If the idea of having a target and then being done appeals to you, I highly suggest you consider signing.  But if having a hard target feels awful and spending several hours thinking about exactly how much to donate feels fun or satisfying, consider coming to Seattle EA’s donation decision day (in person, but we’ll create a virtual meeting room if there’s interest) or creating your own.

 

*I did in fact take a pay cut to work for Sendwave, which enables people to send money to their families in Africa cheaply and easily.  I am going back and forth on how that affects giving.  Last year I did 10% + offsets for things with negative externalities (eating meat and my bullshit patent).  This year I  took a way more than 10% paycut to do way, way more good than I could possibly have done with donations.  So in a certain sense I’ve already given 10% of my potential income and could consider that obligation met. On the other hand, I would have accepted the same pay from puppies-killing-kittens.org if it meant working from home**, so there’s a strong argument that doesn’t count against my born-lucky tax.  On the third hand, I’m starting with UI testing and I hate UI testing, so doesn’t that count for something?  On the fourth hand, in the grand scale of human suffering, no, it does not.

The plan I made last year means donations this year are against last year’s tax return, so for now I can just follow that.  Except some of it will be in January so I can use more employer matching.  But I don’t know what I’m going to do next year.

**Which is everything I ever dreamed it could be.

***Obviously they’re not mutually exclusive.  Unit of Caring has pledged 30% and contributes a fabulous amount to discourse.

Mountains Beyond Mountains (Tracy Kidder) and the case against cost effectiveness

Mountains Beyond Mountains is a biography of Paul Farmer, an American doctor who founded a small clinic in Haiti and ended up saving hundreds of thousands of lives, possibly millions.  But that’s at the end.  The beginning is spent with him  doing obviously suboptimal things like spending $5000/year per patient treating AIDS patients in a country where people were constantly dying of malnutrition and diarrhea (average cost to treat: <$30), while baiting me by bragging about how cost ineffective it was.  I was very angry at him for a while, until it dawned on me that getting angry at a man for distributing lifesaving drugs probably said more negative things about me than it did about him.  Plus he did maybe avert a worldwide epidemic of untreatable tuberculosis, so perhaps I should stop yelling at the CD player and figure out what his thinking was.

Let’s take tuberculosis.  At some point in the story (it’s frustratingly vague on years), Farmer’s friend brings him to Peru, which had what was widely considered the world’s best TB containment system (called DOTS).  The WHO held it up as what the rest of the world should aspire to.  DOTS did many things right, like ensuring a continuous supply of antibiotics to patients and monitoring them to ensure compliance (intermittent treatment breeds resistance).    On the other hand, the prescribed response to someone failing to get better on drugs (indicating their infection was resistant to them) was “give them all the same, plus one more.”  This is exactly the right thing to do, if what you want is to make sure the bacteria evolve resistance to the new drug without losing its existing resistance.  The protocol specifically banned testing to see which drugs a particular patient’s infection was susceptible to, because that is expensive and potentially difficult in the 3rd world.

The WHO ignored the possibility of drug resistant TB because it was considered an evolutionary dead end.  Resistance had to evolve anew in each patient, and rendered the infection noncontagious.  I don’t know what evidence they based this on, but  at best it was a case of incorrectly valuing evidence over reason.  At worst, it indicates a giant sentient TB cell has infiltrated the WHO and is writing policy.

Hello fellow humans. I sure do hate death and suffering.

If your evidence says a contagious disease spontaneously becomes completely noncontagious at a convenient moment, your first thought should be “who do I fire?” because obviously something went wrong in the study design or implementation. If you check everything out and it genuinely is noncontagious, your next thought should be “wow, we really lucked out having all this extra time to prepare before it inevitably becomes contagious again.”  At no point should it be “sweet, cross that off the list”, because while you are not looking it will redevelop virulence and everyone will die.

Farmer’s response to the ban on treating multiple drug resistant TB was to steal >$100,000 worth of medicine and tests from an American hospital to treat a handful of patients.  When caught, a donor bailed him out.

Or take cancer.  A young child with weird symptoms shows up at his clinic.  Farmer drops a few thousand dollars on tests to determine that the child has a rare form of cancer: 60-70% survival rate in the US, no chance in Haiti.  He convinces an American hospital to donate the care, but when the child becomes too ill to travel commercially he spends $20,000+ on medical transport.

Both of these went against standard measures of cost effectiveness, as did Farmer’s pioneering work treating AIDS patients in the 3rd world.  But let’s look at his results:

  1. The WHO’s “yeah, it’s probably fine” approach to drug resistant TB worked as well as you would think.  Farmer proves it is contagious and treatable, and drives down the price of treatment (to the point it is $/DALY competitive with standard health interventions). He goes on to change the international standard for TB treatment and lead the effort in several countries. The book gives no numbers, but I estimate 142,000 lives and counting, plus avoiding an epidemic that could cost 2 million lives/year.*
  2. The kid’s cancer is inoperable; he dies in the US.  The American hospital agrees to take a few more cases each year.

In retrospect his actions in the TB case seem pretty damn effective.  But he didn’t set out to change the world.  He stole those drugs for the exact reason he flew that boy to the US: someone was dying in front of him and it made him sad.  It’s possible you couldn’t correct his answer in the cancer case without also “correcting” his answer in the TB case. And while someone more math-driven could have launched the world-changing anti-MDR TB campaign, they didn’t.  Farmer did, and we need to respect that.

Lots of people in the philanthropic space, including Farmer and most EAs, say that it’s unreasonable to expect perfect altruism from everyone.  People need to spend money on themselves to keep themselves happy and productive, and constant bean counting about “do the morale effects of name brand toilet paper make up for the kids I won’t be able to deworm?” is counterproductive.   You put aside money for charity, and you put aside money for you to enjoy life, and you make your choices.  What if we view Farmer’s need to save the life in front of him as a morale booster that enables our preferred work (averting world wide incurable TB pandemic), rather than the work itself?  By that measure,  $150,000 on a single kid’s cancer and 7 hours doing a house call for one patient in Haiti is a steal.  Given that I pay my cats more (in food and vet care) than what 1/6 of the world survives on, I do not have a lot of room to judge Paul Farmer’s “saving children from cancer” hobby.

Farmer himself raises this point, in a way.  It turns out that effective altruism did not invent the phrase “that’s not cost effective.”  Lots of people with a lot of power (e.g. the WHO) have been saying it for a long time.  From Farmer’s perspective, it seems to be used a lot more to justify not spending money, rather than spending it on a different thing.  He also rejects the framing of the comparison: cancer treatment may save fewer lives per dollar than diarrhea treatment but it saves way more per dollar than a doctor’s beach house, so how come it only gets compared to the former?  Those are fair points.

It’s not clear he could have redirected the money even if he wanted to.  Most of the care for the cancer patient was donated in kind; there was no cash he could redirect to a better cause (although that’s not true for the cost of the medevac).  No one gave him $100,000 to spend on TB treatment; he stole drugs and got bailed out.  It’s not clear the donor would have been as moved to rescue him if he had stolen $100,000 worth of cheap antibiotics.

In essence, I’m postulating that Farmer operated under the following constraints.

  • Evaluating cost effectiveness was emotionally costly, even in the face of very good information.
  • Information on effectiveness was low quality.
  • Financial discipline was emotionally costly.
  • Some money was available for less effective treatments but could not be moved to the most effective thing.  And the money was not clearly labeled “for the best thing” and “for warm fuzzies only”; he had to guess in the face of low information.

Under those constraints, evaluating cost effectiveness could easily be actively harmful.  Judging by the results, I think he did better following his heart.

Doing The Most Effective Thing is great, and I think the EA movement is pushing the status quo in the right direction. But what Farmer is doing is working and I don’t want to mess with it.  At the same time, his statement that “saying you shouldn’t treat one person for cancer because you could treat 10 people for dysentery is valuing one life over another” (paraphrased) is dangerously close to Heifer International’s “we can’t check how our programs compare to others, that would be experimenting on them and that would be wrong” (paraphrased), which is dangerously close to Play Pump’s “fuck it, this seems right” (paraphrased) as they rip out functional water pumps and replace them with junk.  So while Farmer is a strong argument against Effective Altruism as “the last social movement we will ever need” (because some people do the most good when they don’t compare what they’re doing to the counterfactuals), he’s not an argument against EA’s existence.  Someone has to run the numbers and shame Play Pump until they stop attacking Africans’ access to water.

It doesn’t even make sense as a toy. The fun part of carousels is the coasting, not the wind up

And just like you couldn’t improve Farmer by forcing him to do accounting, you can’t necessarily improve a given EA by making them sadder at the tragedy immediately in front of them.  EA is full of people who didn’t care about philanthropy until it had math and charts attached, or who find doing The Best Thing motivating.  We’re doing good work too. I understand why people fear doing the math on human suffering will make them less human, but that isn’t my experience.  I cry more now at heroism and sacrifice than I ever did as a child.

Ironically the one thing I was still angry at Farmer for at the end of the book was his most effective choice: neglecting his children in favor of treating patients and global health politics.  I could forgive it if he felt called to an emergency after the children were born, but he had kids knowing he would choose his work over them.  For me, no amount of lives saved can redeem that choice.  Maybe that is what he feels about letting that Haitian child die of cancer.

*This paper (PDF) estimates 142,000 deaths averted 2006-2015 by the program Farmer pioneered, and the program is still scaling up.  I estimate a drug resistant TB epidemic would cost a minimum of 2.3 million lives/year (math below), although how likely that is to occur is a matter of opinion.  That’s ignoring his clinical work in Haiti, the long term effects of his pioneering HIV treatment in the 3rd world, the long term effects of his pioneering community-based interventions that increased treatment effectiveness, infrastructure building in multiple countries, and refugee care.  I would love to give you numbers for those but neither Paul Farmer nor PIH believe in numbers, so the WHO evaluation of the TB program was the best I could do.

MATH

Untreated TB has a mortality rate of ~70%.

TB rates have been dropping since ~2002, but that’s due to aggressive treatment. The new infection rate held steady at ~150 people/100,000 from 1990-2002, so let’s use that as our baseline. With 7.3 billion people, that’s 11 million cases/year. 11 million * .7 chance of death = 7.7 million deaths. Per year. 25% of those are patients with AIDS who arguably wouldn’t live very long anyway, so conservatively we have ~5.7 million deaths. If I’m really generous and assume complete worldwide distribution of the TB vaccine (efficacy: 60%), that’s down to 2.3 million deaths. Per year.

For comparison, malaria causes about 0.5 million deaths/year.
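For anyone who wants to check the arithmetic, here is the estimate as a short script. The inputs (infection rate, mortality, AIDS fraction, vaccine efficacy) are the rough figures from the paragraphs above, not authoritative epidemiological data:

```python
# Back-of-the-envelope estimate of annual deaths from an untreated,
# drug-resistant TB epidemic, using the rough figures from this post.

population = 7.3e9               # world population
infection_rate = 150 / 100_000   # new cases per person per year (1990-2002 baseline)
mortality = 0.7                  # untreated TB mortality rate
aids_fraction = 0.25             # cases in AIDS patients, conservatively excluded
vaccine_efficacy = 0.6           # assuming complete worldwide vaccine distribution

cases = population * infection_rate                              # ~11 million cases/year
deaths = cases * mortality                                       # ~7.7 million deaths/year
deaths_non_aids = deaths * (1 - aids_fraction)                   # ~5.7 million
deaths_with_vaccine = deaths_non_aids * (1 - vaccine_efficacy)   # ~2.3 million

print(f"{cases / 1e6:.1f}M cases/year, "
      f"{deaths_with_vaccine / 1e6:.1f}M deaths/year after adjustments")
```

Every adjustment here pushes the number down, which is why I call 2.3 million/year a minimum.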

Thanks to Julia Wise for recommending the book, Jai Dhyani and Ben Hoffman for comments on earlier versions, and Beth Crane for grammar patrol.