FTX, Golden Geese, and The Widow’s Mite

From 2019 to 2022, the cryptocurrency exchange FTX stole 8-10 billion dollars from customers. In summer 2022, FTX’s charitable arm gave me two grants totaling $33,000. By the time the theft was revealed in November 2022, I’d spent all but 20% of it. 

The remaining money isn’t mine, and I don’t want it. I would like to give this money to the FTX estate, but they are not returning my calls. If this post fails to get their attention in the next month, I will donate the money to Feeding America. In the meantime, I’d like to talk about why I made this decision, and why I think other people should do likewise.

Background

FTX was a crypto-and-derivatives exchange that billed itself as “the above board, by the book legitimate, exchange.” Several of its executives were members of Effective Altruism, a movement based on ruthlessly prioritizing donations to do the greatest good for the greatest number. EA’s presence in FTX was strong enough that FTX booked an ad campaign around CEO Sam Bankman-Fried’s intent to spend his wealth on good causes.

He is now serving a 25 year prison sentence for fraud. 

Starting in 2021, FTX began to firehose money: $93m to political causes (some of which was probably buying favorable regulation) and $190m to explicitly philanthropic ones. The donations spanned domains like AI safety (e.g. $5m for Ought, which aims to make humans wiser using AI), biosecurity ($10m for HelixNano, which develops vaccines and other anti-infection tech), and Effective Altruism (e.g. $15m for Longview Philanthropy, itself a fundraising org). Donations also probably went to animal welfare organizations and global development, but these were made by a different branch of the FTX Foundation and there’s no clear documentation.

Some of that philanthropic money was distributed through a regrantor program that authorized agents to make grants on their own initiative, with some but not much oversight from FTX. It funded things like the memoirs of someone who worked on Operation Warp Speed, many independent AI safety researchers, and in my case, a project to find or train new research analysts who could do work similar to mine, or assistants to help them.

After the bankruptcy, I waited to be contacted by the FTX estate asking for their money back. Under US bankruptcy law I was outside the 90-day lookback period in which clawbacks are easy, but within the two-year period where they were possible. I did receive one email claiming to be from the estate, but it had a couple of oddities suggesting “scam” so I ignored it, and never received any follow-up. In November 2024, the statute of limitations for clawbacks passed, and with it, any legal claim anyone else had on the money. 

For the next few months, I did nothing. Everyone I knew was keeping their money and seemed very confident that this was fine. And all things being equal, I like money as much as the next person.

But I couldn’t stop picturing myself trying to justify the choice to keep the money to a stranger, and those imaginary conversations left me feeling gross. None of my reasons seemed very good. When I finally entertained the world where I returned the money voluntarily, I felt hypothetically proud. So I decided to give it back, or at least away.

“Avoid tummy aches” isn’t exactly a moral principle. Avoiding my tummy aches is especially not a principle I can ask others to follow. But in the course of arguing with my friends who didn’t think I should give away the money, and trying to figure out where I should donate, I eventually figured out the rules I was implicitly following, and what I would ask of other people.

Protecting the Golden Goose

The modern, high-trust, free-market economy is a goose laying golden eggs. It has moved the subsistence poverty rate from 100% to 47%, and lowered urban infant mortality from 50% to less than 1% in developed countries. It brings a king’s ransom in embroidery floss directly to my house for a fraction of an hour’s wage.

This is $30, and I’m disgusted because it’s not pre-loaded onto bobbins.

The most important thing in the world after extinction-level threats is to keep this goose happy and productive, because if the goose stops laying, then we don’t have any gold to spend on things like vaccine cold chains or cellular data networks. Every theft gives a little bit of poison to the goose. A norm that you can steal if you have a good reason will kill the goose, and then we will be back to the nasty, brutish, and short lifespans of our agricultural ancestors. This is true even if your reason is really, really, really good. 

Given how damaging theft is to the goose, it’s important to keep the incentives to steal as low as possible. One obvious way is to not let thieves keep the money. For most thieves this is enough, because having the money themselves was the whole point. But in this weird case where the theft was at least partially to fund philanthropic projects, it’s important to not spend the money on those projects. 

That ship has mostly sailed, of course. Even if I gave back/away all the money FTX gave me, I still did a bunch of work they wanted. Giving the money away doesn’t erase the work, and would violate another principle in the care and feeding of golden geese, that people get to keep what they earned.

But I only spent 80% of the money (some on my own salary, some on researchers I was trialing). The other 20% wasn’t earned by me or anyone else. I could earn it now, with a new project: my old project had wrapped up, but my regrantor had given permission before the bankruptcy to redirect the money to anything reasonable. I have a long backlog of projects; it wouldn’t be hard to just do one and conceptually bill it to FTX. But precisely because I had FTX’s (indirect) blessing on arbitrary projects, doing any of them would reward the theft.

(If it seems crazy to you that FTX executives genuinely believed they stole for the greater good and all that altruism wasn’t just a PR stunt, keep in mind that they believed the world was at risk of total annihilation in 5-10 years due to artificial superintelligence. I also know some people who knew some people, and they’re really sure that at least some of the executives were sincere at least at the start.)

Having decided I can’t keep it, where should the money go? Obviously the best place would be the victims of FTX’s theft. The only way I know of to give to them is via the FTX estate. The estate has an email address for people who wish to voluntarily return money but I guess they’re not checking it, because I’ve been emailing them for months with no reply.

Some of you may argue that the FTX users are already going to be made whole, in fact 120% of whole, because FTX’s investments did well and the estate will be able to pay all the claims. This is technically true, but it uses the valuation of crypto assets at the time of bankruptcy. Since then bitcoin has 6xed; 20% doesn’t begin to cover the loss. It might not even cover inflation + compound interest.
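To make the “technically true” point concrete, here’s a rough sketch of the math. The petition-date price is an assumed round number for illustration; only the 120% recovery rate and the 6x multiple come from the post.

```python
# Illustrative only: $17,000 is an assumed approximation of BTC's price around the
# November 2022 petition date, not an actual claim valuation.
petition_price = 17_000     # USD per BTC when claims were valued
recovery_rate = 1.20        # estate pays 120% of petition-date value
price_multiple = 6          # bitcoin has roughly 6x'd since the bankruptcy

cash_recovered = recovery_rate * petition_price     # $20,400 per BTC claimed
value_if_held = price_multiple * petition_price     # $102,000 per BTC

print(f"Customers recover {cash_recovered / value_if_held:.0%} of what the coin is worth now")
```

Under these assumptions a “120% of whole” payout delivers only about a fifth of the coin’s current value, which is why the extra 20% doesn’t begin to cover the loss.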

My next choice was to donate to investigative journalism in crypto: if I couldn’t redress crypto theft, maybe I could prevent it. Unfortunately, there doesn’t seem to be anyone who’s good at this, still working, and willing to accept donations higher than $5/month. You might think, “Surely he would accept larger amounts if offered, even if he doesn’t list it on his website,” but no, my friend tried to give him money months ago, and he refused. And there was no second choice.

If I can’t give it to the victims or prevent future victims, my third choice was wherever it would do the most good, in an area FTX Foundation didn’t value (so as to not reward theft). This is hard because while FTX funded a lot of stupid things, they also covered a long list of good things. I, too, hate AI risk and deadly pandemics. After sampling a bunch of ideas, I eventually settled on Feeding America. Feeding hungry people may not have the highest possible impact, but it’s hard to argue that it’s not helping. FTX never hinted at caring much about American poverty. I don’t know anyone involved with Feeding America, so there’s no possibility of self-enrichment. And 10 years ago, I heard a great podcast on how they used free-market principles to make their food distribution vastly more efficient. 

I don’t feel amazing about this choice. I don’t think amazing was an option once the FTX estate declined my offer. But I feel good enough about this, and there’s no good way to optimize when you’re specifically trying to thwart optimizers like the FTX executives. All I can do is make sure I’m living up to my principles and make some people a little less hungry. 

The Widow’s Mite Isn’t Worth Very Much

Do I think other people are obligated to give away their FTX grants? The answer is closer to yes than no, but not without complications. 

I think people should give back/away FTX money they hadn’t already spent or earned. But I take a liberal definition of spending and earning. If I hadn’t paid my taxes on those grants at the time of the bankruptcy, I’d still consider the taxes already spent, because accepting the money committed me to paying them (although FTX told me I didn’t need to pay taxes on the grant. This is the clearest sign I received that Something Was Wrong with the FTX Foundation, and to my shame, I ignored it as standard Effective Altruism messiness). I know someone who quit her job and moved countries on the assumption that the FTX money would always be there, and while I think that was a stupid decision even absent the fraud, the cost of moving back home and reestablishing her life counts as “already spent.” She might have to give back something, but accepting the grant and assuming its good faith shouldn’t come with a bill.

But it was not random happenstance that it was easy for me to drop my FTX-funded project on a dime when scary rumors started. I work as a freelancer, sometimes balancing many projects from many clients and sometimes having none but my own (which necessitates a healthy cushion of savings). So when the word came down that FTX was at risk and the responsible thing to do was to stop spending their grants, it was just another Tuesday for me to stop their project.  To the extent that giving up this money is morally praiseworthy, I think the praise should accrue to the decisions that made giving up the money easy, rather than the actual donation.

This is not a popular belief. Most people’s view on charity is summed up by the biblical story of the widow’s mite, in which a poor widow giving up a small amount at great personal sacrifice is considered more virtuous than large donations from rich men. I can see the ways that’s appealing when trying to judge someone’s character. But even if we’re going to grade people on difficulty, we have to look further back than the last step. If the rich men worked hard and made sacrifices to achieve their wealth, and they chose to invest that money in helping others rather than yachts, that should be recognized (although of course this doesn’t justify hurting others to get that money; I’m talking only about personal sacrifice).

So I think people in my exact position have a strong obligation to give away leftover money from FTX. I think people in the related position of technically having unspent money but finding it too great a hardship to give back shouldn’t ruin their lives by doing so. But I encourage them to think about what they would need to change in their life to make ethical behavior easier.

Thanks

Thanks to the Progress Studies Blog Building Initiative and everyone who argued with me for feedback on this post.

UPDATE 2025-10-17: donation sent

Bandwidth Rules Everything Around Me: Oliver Habryka on OpenPhil and GoodVentures

In this episode of our podcast, Timothy Telleen-Lawton and I talk to Oliver Habryka of Lightcone Infrastructure about his thoughts on the Open Philanthropy Project, which he believes has become stifled by the PR demands of its primary funder, Good Ventures.

Oliver’s main claim is that around mid-2023 or early 2024, Good Ventures founder Dustin Moskovitz became more concerned about his reputation, and this put a straitjacket on what Open Phil could fund. Moreover, it was not enough for a project to be good and pose low reputational risk; it had to be obviously low reputational risk, because OP employees didn’t have enough communication with Good Ventures to pitch exceptions. According to Habryka.

That’s a big caveat. This podcast is pretty one-sided, which none of us are happy about (Habryka included). We of course invited OpenPhil to send a representative to record their own episode, but they declined (they did send a written response to this episode, which is linked below and read at the end of the episode). If anyone out there wants to asynchronously argue with Habryka on a separate episode, we’d love to hear from you.

Transcript available here.

Links from the episode:

An Update From Good Ventures (note: Dustin has deleted his account and his comments are listed as anonymous, but his are not the only anonymous comments)

CEA announcing the sale of Wytham Abbey

OpenPhil career page

Job reporting to Amy WL

Zach’s “this is false”

Luke Muehlhauser on GV not funding right-of-center work

Will MacAskill on decentralization and EA

Alexander Berger regrets the Wytham Abbey grant

Single Chan-Zuckerberg employee demanding resignation over failure to moderate Trump posts on Facebook

Letter from 70+ CZ employees asking for more DEI within Chan Zuckerberg Initiative.

OpenPhil’s response

Austin Chen on Winning, Risk-Taking, and FTX

Timothy and I have recorded a new episode of our podcast with Austin Chen of Manifund (formerly of Manifold, behind the scenes at Manifest).

The start of the conversation was contrasting each of our North Stars- Winning (Austin), Truthseeking (me), and Flow (Timothy), but I think the actual theme might be “what is an acceptable amount of risk taking?” We eventually got into a discussion of Sam Bankman-Fried, where Austin very bravely shared his position that SBF has been unwisely demonized and should be “freed and put back to work”. He by no means convinced me or Timothy of this, but I deeply appreciate the chance for a public debate.

Episode:

Transcript (this time with filler words removed by AI)

Editing policy: we allow guests (and hosts) to redact things they said, on the theory that this is no worse than not saying them in the first place. We aspire but don’t guarantee to note serious redactions in the recording. I also edit for interest and time. 

Can we rescue Effective Altruism?

Last year Timothy Telleen-Lawton and I recorded a podcast episode talking about why I quit Effective Altruism and thought he should too. This week we have a new episode, talking about what he sees in Effective Altruism and the start of a road map for rescuing it. 

Audio recording

Transcript

Thanks to everyone who listened to the last one, and especially our Manifund donors, my Patreon patrons, and the EAIF for funding our work.

Why I quit effective altruism, and why Timothy Telleen-Lawton is staying (for now)

~5 months ago I formally quit EA (formally here means “I made an announcement on Facebook”). My friend Timothy was very curious as to why; I felt my reasons applied to him as well. This disagreement eventually led to a podcast episode, where he and I try to convince each other to change sides on Effective Altruism: he tries to convince me to rejoin, and I try to convince him to quit.

Audio recording

Transcript

Some highlights:

Spoilers: Timothy agrees leaving EA was right for me, but he wants to invest more in fixing it.

Thanks to my Patreon patrons for supporting my part of this work. You can support future work by joining Patreon, or contribute directly (and tax-deductible-y) to this project on Manifund.

Grant Making and Grand Narratives

Another inside baseball EA post

The Lightspeed application asks:  “What impact will [your project] have on the world? What is your project’s goal, how will you know if you’ve achieved it, and what is the path to impact?”

LTFF uses an identical question, and SFF puts it even more strongly (“What is your organization’s plan for improving humanity’s long term prospects for survival and flourishing?”). 

I’ve applied to all three of these at various points, and I’ve never liked this question. It feels like it wants a grand narrative of an amazing, systemic project that will measurably move the needle on x-risk. But I’m typically applying for narrowly defined projects, like “Give nutrition tests to EA vegans and see if there’s a problem”. I think this was a good project. I think this project is substantially more likely to pay off than underspecified alignment strategy research, and arguably has as good a long tail. But when I look at “What impact will [my project] have on the world?” the project feels small and sad. I feel an urge to make things up, and express far more certainty for far more impact than I believe. Then I want to quit, because lying is bad but listing my true beliefs feels untenable.

I’ve gotten better at this over time, but I know other people with similar feelings, and I suspect it’s a widespread issue (I encourage you to share your experience in the comments so we can start figuring that out).

I should note that the pressure for grand narratives has good points; funders are in fact looking for VC-style megahits. I think that narrow projects are underappreciated, but for purposes of this post that’s beside the point: I think many grantmakers are undercutting their own preferred outcomes by using questions that implicitly push for a grand narrative. I think they should probably change the form, but I also think we applicants can partially solve the problem by changing how we interact with the current forms.

My goal here is to outline the problem, gesture at some possible solutions, and create a space for other people to share data. I didn’t think about my solutions very long, I am undoubtedly missing a bunch and what I do have still needs workshopping, but it’s a place to start. 
 

More on the costs of the question

Pushes away the most motivated people

Even if you only care about subgoal G instrumentally, G may be best accomplished by people who care about it for its own sake. Community building (real building, not a euphemism for recruitment) benefits from knowing the organizer cares about participants and the community as people and not just as potential future grist for the x-risk mines.* People repeatedly recommended a community builder friend of mine apply for funding, but they struggled because they liked organizing for its own sake, and justifying it in x-risk terms felt bad. 

[*Although there are also downsides to organizers with sufficiently bad epistemics.]

Additionally, if G is done by someone who cares about it for its own sake, then it doesn’t need to be done by someone who’s motivated by x-risk. Highly competent, x-risk-motivated people are rare and busy, and we should be delighted by opportunities to take things off their plate.
 

Vulnerable to grift

You know who’s really good at creating exactly the grand narrative a grantmaker wants to hear? People who feel no constraint to be truthful. You can try to compensate for this by looking for costly signals of loyalty or care, but those have their own problems. 

 

Punishes underconfidence

Sometimes people aren’t grifting, they really really believe in their project, but they’re wrong. Hopefully grantmakers are pretty good at filtering out those people. But it’s fairly hard to correct for people who are underconfident, and impossible to correct for people who never apply because they’re intimidated. 

Right now people try to solve the second problem by loudly encouraging everyone to apply to their grant. That creates a lot of work for evaluators, and I think is bad for the people with genuinely mediocre projects who will never get funding. You’re asking them to burn their time so that you don’t miss someone else’s project. Having a form that allows for uncertainty and modest goals is a more elegant solution.
 

Corrupts epistemics

Not that much. But I think it’s pretty bad if people are forced to choose between “play the game of exaggerating impact” and “go unfunded”. Even if the game is in fact learnable, it’s a bad use of their time and weakens the barriers to lying in the future. 

Pushes projects to grow beyond their ideal scope

Recently I completed a Lightspeed application for a lit review on stimulants. I felt led by the form to create a grand narrative of how the project could expand, including developing a protocol for n of 1 tests so individuals could tailor their medication usage. I think that having that protocol would be great and I’d be delighted if someone else developed it, but I don’t want to develop it myself. I noticed the feature creep and walked it back before I submitted the form, but the fact that the form pushes this is a cost.  

This one isn’t caused by the impact question alone. The questions asking about potential expansion are a much bigger deal, but would also be costlier to change. There are many projects and organizations where “what would you do with more money?” is a straightforwardly important question.
 

Rewards cultural knowledge independent of merit

There’s nothing stopping you from submitting a grant with the theory of change “T will improve EA epistemics”, and not justifying past that. I did that recently, and it worked. But I only felt comfortable doing that because I had a pretty good model of the judges and because it was a Lightspeed grant, which explicitly says they’ll ask you if they have follow-up questions. Without either of those I think I would have struggled to figure out where to stop explaining. Probably there are equally good projects from people with less knowledge of the grantmakers, and it’s bad that we’re losing those proposals. 

Brainstorming fixes

I’m a grant-applier, not a grant-maker. These are some ideas I came up with over a few hours. I encourage other people to suggest more fixes, and grant-makers to tell us why they won’t work or what constraints we’re not aware of. 
 

  • Separate “why do you want to do this?” or “why do you think this is good?” from “how will this reduce x-risk?”. Just separating the questions will reduce the epistemic corruption. 
  • Give a list of common instrumental goals that people can treat as terminal for the purpose of this form. They still need to justify the chain between their action and that instrumental goal, but they don’t need to justify why achieving that goal would be good.
    • E.g. “improve epistemic health of effective altruism community”, or “improve productivity of x-risk researchers”.
    • This opens opportunities for goodharting, or for imprecise description leaving you open to implementing bad versions of good goals. I think there are ways to handle this that end up being strongly net beneficial.
    • I would advocate against “increase awareness” and “grow the movement” as goals. Growth is only generically useful when you know what you want the people to do. Awareness of specific things among specific people is a more appropriate scope. 
    • Note that the list isn’t exhaustive, and if people want to gamble on a different instrumental goal that’s allowed. 
  • Let applicants punt to others to explain the instrumental impact of what is to them a terminal goal.
    • My community organizer friend could have used this. Many people encouraged them to apply for funding because they believed the organizing was useful to x-risk efforts. Probably at least a few were respected by grantmakers and would have been happy to make the case. But my friend felt gross doing it themselves, so it created a lot of friction in getting very necessary financing.
  • Let people compare their projects to others. I struggle to say “yeah if you give me $N I will give you M microsurvivals”. How could I possibly know that? But it often feels easy to say “I believe this is twice as impactful as this other project you funded”, or “I believe this is in the nth percentile of grants you funded last year”.
    • This is tricky because grants don’t necessarily mean a funder believes a project is straightforwardly useful. But I think there’s a way to make this doable. 
    • E.g. funders could give examples with percentiles. I think Open Phil did something like this in the last year, although I can’t find it now. The lower percentiles could be hypothetical, to avoid implicit criticism. 
  • Lightspeed’s implication that they’ll ask follow-up questions is very helpful. With other forms there’s a drive to cover all possible bases very formally, because I won’t get another chance. With Lightspeed it felt possible to say “I think X is good because it will lead to Y”, and let them ask me why Y was good if they don’t immediately agree.
  • When asking about impact, lose the phrase “on the world”. The primary questions are what the goal is, how they’ll know if it’s accomplished, and what the feedback loops are. You can have an optional question asking for the effects of meeting the goal.
    • I like the word “effects” more than “impact”, which is a pretty loaded term within EA and x-risk. 
  • A friend suggested asking “why do you want to do this?”, and having “look, I just like organizing social gatherings” be an acceptable answer. I worry that this will end up being a fake question where people feel the need to create a different grand narrative about how much they genuinely value their project for its own sake, but maybe there’s a solution to that. 
  • Maybe have separate forms for large ongoing organizations, and narrow projects done by individuals. There may not be enough narrow projects to justify this, it might be infeasible to create separate forms for all types of applicants, but I think it’s worth playing with. 
  • [Added 7/2]: Ask for 5th/50th/99th/99.9th percentile outcomes, to elicit both dreams and outcomes you can be judged for failing to meet.
  • [Your idea here]



 

I hope the forms change to explicitly encourage things like the above list, but  I don’t think applicants need to wait. Grantmakers are reasonable people who I can only imagine are tired of reading mediocre explanations of why community building is important. I think they’d be delighted to be told “I’m doing this because I like it, but $NAME_YOU_HIGHLY_RESPECT wants my results” (grantmakers: if I’m wrong please comment as soon as possible).   

Grantmakers: I would love it if you would comment with any thoughts, but especially what kinds of things you think people could do themselves to lower the implied grand-narrative pressure on applications. I’m also very interested in why you like the current forms, and what constraints shaped them.

Grant applicants: I think it will be helpful to the grantmakers if you share your own experiences, how the current questions make you feel and act, and what you think would be an improvement. I know I’m not the only person who is uncomfortable with the current forms, but I have no idea how representative I am. 

EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem

Introduction

Effective altruism prides itself on truthseeking. That pride is justified in the sense that EA is better at truthseeking than most members of its reference category, and unjustified in that it is far from meeting its own standards. We’ve already seen dire consequences of the inability to detect bad actors who deflect investigation into potential problems, but by its nature you can never be sure you’ve found all the damage done by epistemic obfuscation because the point is to be self-cloaking. 

My concern here is for the underlying dynamics of  EA’s weak epistemic immune system, not any one instance. But we can’t analyze the problem without real examples, so individual instances need to be talked about. Worse, the examples that are easiest to understand are almost by definition the smallest problems, which makes any scapegoating extra unfair. So don’t.

This post focuses on a single example: vegan advocacy, especially around nutrition. I believe vegan advocacy as a cause has both actively lied and raised the cost for truthseeking, because they were afraid of the consequences of honest investigations. Occasionally there’s a consciously bad actor I can just point to, but mostly this is an emergent phenomenon from people who mean well, and have done good work in other areas. That’s why scapegoating won’t solve the problem: we need something systemic. 

In the next post I’ll do a wider but shallower review of other instances of EA being hurt by a lack of an epistemic immune system. I already have a long list, but it’s not too late for you to share your examples.

Definitions

I picked the words “vegan advocacy” really specifically. “Vegan” sometimes refers to advocacy and sometimes to just a plant-exclusive diet, so I added “advocacy” to make it clear.

I chose “advocacy” over “advocates” for most statements because this is a problem with the system. Some vegan advocates are net truthseeking and I hate to impugn them. Others would like to be epistemically virtuous but end up doing harm due to being embedded in an epistemically uncooperative system. Very few people are sitting on a throne of plant-based imitation skulls twirling their mustache thinking about how they’ll fuck up the epistemic commons today. 

When I call for actions I say “advocates” and not “advocacy” because actions are taken by people, even if none of them bear much individual responsibility for the problem. 

I specify “EA vegan advocacy” and not just “vegan advocacy” not because I think mainstream vegan advocacy is better, but because 1. I don’t have time to go after every wrong advocacy group in the world. 2. Advocates within Effective Altruism opted into a higher standard. EA has a right and responsibility to maintain the standards of truth it advocates, even if the rest of the world is too far gone to worry about. 

Audience

If you’re entirely uninvolved in effective altruism you can skip this, it’s inside baseball and there’s a lot of context I don’t get into.

How EA vegan advocacy has hindered truthseeking

EA vegan advocacy has both pushed falsehoods and punished people for investigating questions it doesn’t like. It manages this even for positions that 90%+ of effective altruism and the rest of the world agree with, like “veganism is a constraint”. I don’t believe its arguments convince anyone directly, but end up having a big impact by making inconvenient beliefs too costly to discuss. This means new entrants to EA are denied half of the argument, and harm themselves due to ignorance.

This section outlines the techniques I’m best able to name and demonstrate. For each technique I’ve included examples. Comments on my own posts are heavily overrepresented because they’re the easiest to find; “go searching through posts on veganism to find the worst examples” didn’t feel like good practice. I did my best to quote and summarize accurately, although I made no attempt to use a representative sample. I think this is fair because a lot of the problem lies in the fact that good comments don’t cancel out bad, especially when the good comments are made in parallel rather than directly arguing with the bad. I’ve linked to the source of every quote and screenshot, so you can (and should) decide for yourself. I’ve also created a list of all of my own posts I’m drawing from, so you can get a holistic view. 

My posts:

I should note I quote some commenters and even a few individual comments in more than one section, because they exhibit more than one problem. But if I refer to the same comment multiple times in a row I usually only link to it once, to avoid implying more sources than I have. 

My posts were posted on my blog, LessWrong, and EAForum. In practice the comments I drew from came from LessWrong (white background) and EAForum (black background).  I tried to go through those posts and remove all my votes on comments (except the automatic vote for my own comments) so that you could get an honest view of how the community voted without my thumb on the scale, but I’ve probably missed some, especially on older posts. On the main posts, which received a lot of traffic, I stuck to well-upvoted comments, but I included some low (but still positive) karma comments from unpopular posts. 

The goal here is to make these anti-truthseeking techniques legible for discussion, not develop complicated ways to say “I don’t like this”, so when available I’ve included counter-examples. These are comments that look similar to the ones I’m complaining about, but are fine or at least not suffering from the particular flaw in that section. In doing this I hope to keep the techniques’ definitions narrow.

Active suppression of inconvenient questions

A small but loud subset of vegan advocacy will say outright that you shouldn’t say true things, because it leads to outcomes they dislike. This accusation is even harsher than “not truthseeking”, and would normally be very hard to prove. If I say “you’re saying that because you care more about creating vegans than the health of those you create”, and they say “no I’m not”, I don’t really have a comeback. I can demonstrate that their claims are wrong, but not their motivation. Luckily, a few people said the quiet part out loud.

Commenter Martin Soto pushed back very hard on my first nutrition testing study. Finally I asked him outright if he thought it was okay to share true information about vegan nutrition. His response was quite thoughtful and long, so you should really go read the whole thing, but let me share two quotes:

He goes on to say:

And in a later comment

EDIT 2023-10-03: Martin disputes my summary of his comments. I think it’s good practice to link to disputes like this, even though I stand by my summary. I also want to give a heads-up that I see his comments in the dispute thread as continuing the patterns I describe (which makes that thread a tax on the reader). If you want to dig into this, I strongly suggest you first read his original comments and come up with your own summary, so you can compare that to each of ours.

The charitable explanation here is that my post focuses on naive veganism, and Soto thinks that’s a made-up problem. He believes this because all of the vegans he knows (through vegan advocacy networks) are well-educated on nutrition. There are a few problems here, but the most fundamental is that enacting his desired policy of suppressing public discussion of nutrition issues with plant-exclusive diets will prevent us from getting the information to know if problems are widespread. My post and a commenter’s report on their college group are apparently the first time he’s heard of vegans who didn’t live and breathe B12. 

I have a lot of respect for Soto for doing the math and so clearly stating his position that “the damage to people who implement veganism badly is less important to me than the damage to animals caused by eating them”. Most people flinch away from explicit trade-offs like that, and I appreciate that he did them and owned the conclusion. But I can’t trust his math because he’s cut himself off from half the information necessary to do the calculations. How can he estimate the number of vegans harmed or lost due to nutritional issues if he doesn’t let people talk about them in public?

In fact the best data I found on this was from Faunalytics, which found that ~20% of veg*ns drop out due to health reasons. This suggests to me a high chance his math is wrong and will lead him to do harm by his own standards.

EDIT 2023-10-04: Using Faunalytics numbers for self-reported health issues and improvements after quitting veg*nism, I calculated that 20% of veg*ns develop health issues. This number is sensitive to your assumptions; I consider 20% conservative but it could be an overestimate. I encourage you to read the whole post and play with my model, and of course read the original work.

Most people aren’t nearly this upfront. They will go through the motions of calling an idea incorrect before emphasizing how it will lead to outcomes they dislike. But the net effect is a suppression of the exploration of ideas they find inconvenient. 

This post on Facebook is a good example. Normally I would consider Facebook posts out of bounds, especially ones this old (over five years). Facebook is a casual space and I want people to be able to explore ideas without worrying that they’re creating a permanent record that will be used against them. In this case I felt that because the post was permissioned to public and was a considered statement (rather than an off-the-cuff reply), the truth value outweighed the chilling effect. But because it’s so old and I don’t know the author’s current opinion, I’m leaving out their name and not linking to the post.

The author is a midlist EA: I’d heard of them for other reasons, but they’re certainly not EA-famous.

There are posts very similar to this one I would have been fine with, maybe even joyful about. You could present evidence against the claims that X is harmful, or push people to verify things before repeating them, or suggest we reserve the word poison for actual kill-you-dead molecules and not complicated compound constructions with many good parts and only weak evidence of mild long-term negative effects. But what they actually did was name-check the idea that X is fine before focusing on the harm to animals caused by repeating the claim, which is exactly what you’d expect if the health claims were true but inconvenient. I don’t know what this author actually believes, but I do know that focusing on the consequences when the facts are in question is not truthseeking.

A subtler version comes from the AHS-2 post. At the time of this comment the author, Rockwell, described herself as the leader of EA NYC and an advisor to philanthropists on animal suffering, so this isn’t some rando having a feeling. This person has some authority.

This comment more strongly emphasizes the claim that my beliefs are wrong, not just inconvenient. And had they written the counter-argument they promised, I’d have put this in the counter-examples section. But it’s been three months and they have not written anything where I can find it, nor responded to my inquiries. So even if her literal claim were correct, she’s using a technique whose efficacy is independent of truth.

Over on the Change My Mind post, the top comment says that vegan advocacy is fine because it’s no worse than fast food or breakfast cereal ads.

I’m surprised someone would make this comment. But what really shocks me is the complete lack of pushback from other vegan advocates. If I heard an ally describe our shared movement as no worse than McDonald’s, I would injure myself in my haste to repudiate them.

Counter-Examples

This post on EAForum came out while I was finishing this post. The author asks if they should abstain from giving bad reviews to vegan restaurants, because it might lead to more animal consumption, which would be a central example of my complaint. But the comments are overwhelmingly “no, there’s not even a good consequentialist argument for that”, and the author appears to be taking that to heart. So from my perspective this is a success story.

Ignore the arguments people are actually making

I’ve experienced this pattern way too often.

Me: goes out of my way to say not-X in a post
Comment: how dare you say X! X is so wrong!
Me: here’s where I explicitly say not-X.
*crickets*

This is by no means unique to posts about veganism. “They’re yelling at me for an argument I didn’t make” is a common complaint of mine. But it happens so often, and so explicitly, in the vegan nutrition posts. Let me give some examples.

My post:

Commenter:

My post:

Commenters:

My post:

Commenter: 

My post: 

Commenter:

My post:

Commenter:

You might be thinking “well those posts were very long and honestly kind of boring, it would be unreasonable to expect people to read everything”. But the length and precision are themselves a response to people arguing with positions I don’t hold (and failing to update when I clarify). The only things I can do are spell out all of my beliefs or not spell out all of my beliefs, and either way ends with comments arguing against views I don’t have. 

Frame control/strong implications not defended/fuzziness

This is the hardest one to describe. Sometimes people say things, and I disagree, and we can hope to clarify that disagreement. But sometimes people say things and responding is like nailing jello to a wall. Their claims aren’t explicit, or they’re individually explicit but aren’t internally consistent, or play games with definitions. They “counter” statements in ways that might score a point in debate club but don’t address the actual concern in context. 

One example is the top-voted comment on LW on the Change My Mind post:

Over a very long exchange I attempt to nail down his position: 

  • Does he think micronutrient deficiencies don’t exist? No, he agrees they do.
  • Does he think that they can’t cause health issues? No, he agrees they do.
  • Does he think this just doesn’t happen very often, or is always caught? No, if anything he thinks the Faunalytics numbers underestimate veg*n attrition due to medical issues.

So what exactly does he disagree with me on? 

He also had a very interesting exchange with another commenter. That thread got quite long, and fuzziness by its nature doesn’t lend itself to excerpts, so you should read the whole thing, but I will share highlights. 

Before the screenshot: Wilkox acknowledges that B12 and iron deficiencies can cause fatigue, and veganism can cause these deficiencies, but it’s fine because if people get tired they can go to a doctor.

That reply doesn’t contain any false statements, and would be perfectly reasonable if we were talking about ER triage protocols. But it’s irrelevant when the conversation is “can we count on veganism-induced fatigue being caught?”. (The answer is no, and only some of the reasons have been brought up here)

You can see how the rest of this conversation worked out in the Sound and Fury section.

A much, much milder example can be seen in What vegan food resources have you found useful?. This was my attempt to create something uncontroversially useful, and I’d call it a modest success. The post had 20-something karma on LW and EAForum, and there were several useful-looking resources shared on EAForum. But it also got the following comment on LW: 

I picked this example because it only takes a little bit of thought to see the jujitsu, so little it barely counts. He disagreed with my implicit claim that… well okay here’s the problem. I’m still not quite sure where he disagrees. Does he think everyone automatically eats well as a vegan? That no one will benefit from resources like veganhealth.org? That no one will benefit from a cheat sheet for vegan party spreads? That there is no one for whom veganism is challenging? He can’t mean that last one because he acknowledges exceptions in his later comment, but only because I pushed back. Maybe he thinks that the only vegans who don’t follow his steps are those with medical issues, and that no-processed-food diets are too unpopular to consider? 

I don’t think this was deliberately anti-truthseeking, because if it was he would have stopped at “nothing special” instead of immediately outlining the special things his partner does. That was fairly epistemically cooperative. But it is still an example of strong claims made only implicitly. 

Counter-Examples

I think this comment makes a claim (“vegans moving to naive omnivorism will hurt themselves”) clearly, and backs it up with a lot of details.

The tone is kind of obnoxious and he’s arguing with something I never claimed, but his beliefs are quite clear. I can immediately understand which beliefs of his I agree with (“vegans moving to naive omnivorism will hurt themselves” and “that would be bad”) and make good guesses at implicit claims I disagree with (“and therefore we should let people hurt themselves with naive veganism”? “I [Elizabeth] wouldn’t treat naive mass conversion to omnivorism seriously as a problem”?). That’s enough to count as epistemically cooperative.

Sound and fury, signifying no substantial disagreement 

Sometimes someone comments with an intense, strongly worded, perhaps actively hostile, disagreement. After a laborious back and forth, the problem dissolves: they acknowledge I never held the position they were arguing with, or they don’t actually disagree with my specific claims. 

Originally I felt happy about these, because “mostly agreeing” is an unusually positive outcome for that opening. But these discussions are grueling. It is hard to express kindness and curiosity towards someone yelling at you for a position you explicitly disclaimed. Any one of these stories would be a success but en masse they amount to a huge tax on saying anything about veganism, which is already quite labor intensive.

The discussions could still be worth it if it changed the arguer’s mind, or at least how they approached the next argument. But I don’t get the sense that’s what happens. Neither of us have changed our minds about anything, and I think they’re just as likely to start a similar fight the next week.

I do feel like vegan advocates are entitled to a certain amount of defensiveness. They encounter large amounts of concern trolling and outright hostility, and it makes sense that that colors their interactions. But that allowance covers one comment, maybe two, not three to eight (Wilkox, depending on which ones you count). 

For example, I’ve already quoted Wilkox’s very fuzzy comment (reminder: this was the top voted comment on that post on LW). That was followed by a 13+ comment exchange in which we eventually found he had little disagreement with any of my claims about vegan nutrition, only the importance of these facts. There really isn’t a way for me to screenshot this: the length and lack of specifics is the point.

You could say that the confusion stemmed from poor writing on my part, but:

I really appreciate the meta-honesty here, but since the exchange appears to have eaten hours of both of our time just to dig ourselves out of a hole, I can’t get that excited about it. 

Counter-Examples

I want to explicitly note that Sound and Fury isn’t the same as asking questions or not understanding a post. E.g. here Ben West identifies a confusion, asks me, and accepts both my answer and an explanation of why answering is difficult. 

Or in that same post, someone asked me to define nutritionally dense. It took a bit for me to answer and we still disagreed afterward, but it was a great question and the exchange felt highly truthseeking.  

Bad sources, badly handled 

Citations should be something of a bet: if the citation (the source itself or your summary of it) is high quality and supports your point, that should move people closer to your views. But if they identify serious relevant flaws, that should move both you and your audience closer to their point of view. Of course our beliefs are based on a lot of sources and it’s not feasible or desirable to really dig into all of them for every disagreement, so the bet may be very small. But if you’re not willing to defend a citation, you shouldn’t make it.

What I see in EA vegan advocacy is deeply terrible citations, thrown out casually and abandoned when inconvenient. I’ve made something of a name for myself checking citations and otherwise investigating factual claims from works of nonfiction. Of everything I’ve investigated, I think citations from EA vegan advocacy have the worst effort:truth ratio. Not that they contain the most outright falsehoods; I’ve read some pretty woo stuff, but that could be dismissed quickly. Citations in vegan advocacy are often revealed to be terrible only after great effort.

And having put in that effort, my reward is usually either crickets or a new terrible citation. Sometimes we will eventually drill down to “I just believe it”, which is honestly fine. We don’t live our lives to the standard of academic papers. But if that’s your reason, you need to state it from the beginning.

For example, in the top-voted comment on the Change My Mind post on EAForum, Rockwell (head of EA NYC) includes five links. Only links 1 and 4 are problems, but I’ll describe them all in order to avoid confusion.

Of the five links: 

  1. Wilkox’s comment on the LW version of the post, where he eventually agrees that veganism requires testing and supplementation for many people (although most of that exchange hadn’t happened at the time of linking).
  2. cites my past work, if anything too generously.
  3. an estimation of nutrient deficiency in the US. I don’t love that this uses dietary intake as opposed to testing values (people’s needs vary so wildly), but at least it used EAR and not RDA. I’d want more from a post but for a comment this is fine.
  4. an absolutely atrocious article, which the comment further misrepresents. We don’t have time to get into all the flaws in that article, so I’ve put my first hour of criticisms in the appendix. What really gets me here is that I would have agreed the standard American diet sucks without asking for a source. I thought I had conceded that point preemptively, albeit without naming the Standard American Diet explicitly.

    And if she did feel a need to go the extra mile on rigor for this comment, it’s really not that hard to find decent-looking research about the harms of the Standard Shitty American Diet. I found this paper on heart disease in 30 seconds, and most of that time was spent waiting for Elicit to load. I don’t know if it’s actually good, but it is not so obviously farcical as the cited paper.
  5. The fifth link goes to a description of the Standard American Diet. 

Rockwell did not respond to my initial reply (that fixing vegan issues is easier than fixing SSAD), or my asking if that paper on the risks of meat eating was her favorite.

A much more time-consuming version of this happened with Adventist Health Study-2. Several people cited the AHS-2 as a pseudo-RCT that supported veganism (EDIT 2023-10-03: as superior to low meat omnivorism). There’s one commenter on LessWrong and two on EAForum (one of whom had previously co-authored a blog post on the study and offered to answer questions). As I discussed here, that study is one of the best we have on nutrition and I’m very glad people brought it to my attention. But calling it a pseudo-RCT that supports veganism is deeply misleading. It is nowhere near randomized, and doesn’t cleanly support veganism even if you pretend it is.

(EDIT 2023-10-03: To be clear, the noise in the study overwhelms most differences in outcomes, even ignoring the self-sorting. My complaint is that the study was presented as strong evidence in one direction, when it’s both very weak and, if you treat it as strong, points in a different direction than reported. One commenter has said she only meant it as evidence that a vegan diet can work for some people, which I agree with, as stated in the post she was responding to. She disagrees with other parts of my summary as well, you can read more here)

It’s been three months, and none of the recommenders have responded to my analysis of the main AHS-2 paper, despite repeated requests. 

But finding that a paper is of lower quality and supports an entirely different conclusion is still not the worst-case scenario. The worst outcome is citation whack-a-mole.

A good example of this is from the post “Getting Cats Vegan is Possible and Imperative”, by Karthik Sekar. Karthik is a vegan author and data scientist at a plant-based meat company. 

[Note that I didn’t zero out my votes on this post’s comments, because it seemed less important for posts I didn’t write]

Karthik cites a lot of sources in that post. I picked what looked like his strongest source and investigated. It was terrible. It was a review article, so checking it required reading multiple studies. Of the cited studies, only 4 (with 39 subjects combined) used blood tests rather than owner reports, and more than half of those subjects were given vegetarian diets, not vegan (even though the table header says vegan). The only RCT didn’t include carnivorous diets.

Karthik agrees that paper (that he cited) is not making its case “strong nor clear”, and cites another one (which AFAICT was not in the original post).

I dismiss the new citation on the basis of “motivated [study] population and minimal reporting”. 

He retreats to “[My] argument isn’t solely based on the survey data. It’s supported by fundamentals of biochemistry, metabolism, and digestion too […] Mammals such as cats will digest food matter into constituent molecules. Those molecules are chemically converted to other molecules–collectively, metabolism–, and energy and biomass (muscles, bones) are built from those precursors. For cats to truly be obligate carnivores, there would have to be something exceptional about meat: (A) There would have to be essential molecules–nutrients–that cannot be sourced anywhere else OR (B) the meat would have to be digestible in a way that’s not possible with plant matter. […So any plant-based food that passes AAFCO guidelines is nutritionally complete for cats. Ami does, for example.]

I point out that AAFCO doesn’t think meeting their guidelines is necessarily sufficient. I expected him to dismiss this as corporate ass-covering, and there’s a good chance he’d be right. But he didn’t.

Finally, he gets to his real position:

Which would have been a fine aspirational statement, but then why include so many papers he wasn’t willing to stand behind? 

On that same post someone else says that they think my concerns are a big deal, and Karthik probably can’t convince them without convincing me. Karthik responds:

So he’s conceded that his study didn’t show what he claimed. And he’s not really defending the AAFCO standards. But he’s really sure this will work anyway? And I’m the one who won’t update their beliefs. 

In a different comment the same someone else notes a weird incongruity in the paper. Karthik doesn’t respond.

This is the real risk of the bad sources: hours of deep intellectual work to discover that his argument boils down to a theoretical claim the author could have stated at the beginning. “I believe vegan cat food meets these numbers and meeting these numbers is sufficient” honestly isn’t a terrible argument, and I’d have respected it if plainly stated, especially since he explicitly calls for RCTs. Or I would have, if he didn’t view those RCTs primarily as a means to prove what he already knows.

Counter-Examples

This commenter starts out pretty similarly to the others, with a very limited paper implied to have very big implications. But when I push back on the serious limitations of the study, he owns the issues and says he only ever meant the paper to support a more modest claim (while still believing the big claim he did make?). 

Taxing Facebook

When I joined EA Facebook in 2014, it was absolutely hopping. Every week I met new people and had great discussions with them where we both walked away smarter. I’m unclear when this trailed off because I was drifting away from EA at the same time, but let’s say the golden age was definitively over by 2018. Facebook was where I first noticed the pattern with EA vegan advocacy. 

Back in 2014 or 2015, Seattle EA watched some horrifying factory farming documentaries, and we were each considering how we should change our diets in light of that new information. We tried to continue the discussion on Facebook, only to have Jacy Reese Anthis (who was not a member of the local group and AFAIK had never been to Seattle) repeatedly insist that the only acceptable compromise was vegetarianism, humane meat doesn’t exist, and he hadn’t heard of health conditions benefiting from animal products so my doctor was wrong (or maybe I made it up?). 

I wish I could share screenshots on this, but the comments are gone (I think because the account has been deleted). I’ve included shots of the post and some of my comments (one of which refers to Jacy obstructing an earlier conversation, which I’d forgotten about). A third commenter has been cropped out, but I promise it doesn’t change the context.

(his answer was no, and that either I or my doctor were wrong because Jacy had never heard of any medical issue requiring consumption of animal products)

That conversation went okay. Seattle EA discussed suffering math on different vertebrates, someone brought up eating bugs, Brian Tomasik argued against eating bugs. It was everything an EA conversation should be.

But it never happened again.

Because this kind of thing happened every time animal products, diet, and health came up anywhere on EA Facebook. The commenters weren’t always as aggressive as Jacy, but they added a tremendous amount of cumulative friction. An omnivore would ask if lacto-vegetarianism worked, and the discussion would get derailed by animal advocates insisting you didn’t need milk.  Conversations about feeling hungry at EAG inevitably got a bunch of commenters saying they were fine, as if that was a rebuttal. 

Jeff Kaufman mirrors his FB posts onto his actual blog, which makes me feel more okay linking to it. In this post he makes a pretty clear point: veganism can be any of cheaper, healthier, or tastier, but not all at once. He gets a lot of arguments. One person argues that no one thinks that, they just care about animals more.

One vegetarian says they’d like to go vegan but just can’t beat eggs for their mix of convenience, price, macronutrients, and micronutrients. She gets a lot of suggestions for substitutes, all of which flunk on at least one criterion.  Jacy Reese Anthis has a deleted comment, which from the reply looks like he asserted the existence of a substitute without listing one. 

After a year or two of this, people just stopped talking about anything except the vegan party line on public FB. We’d bitch to each other in private, but that was it. And that’s why, when a new generation of people joined EA and were exposed to the moral argument for veganism, there was no discussion of the practicalities visible to them. 

[TBF they probably wouldn’t have seen the conversations on FB anyway, I’m told that’s an old-person thing now. But the silence has extended itself]

Ignoring known falsehoods until they’re a PR problem

This is old news, but: for many years ACE said leafletting was great. Lots of people (including me and some friends, in 2015) criticized their numbers. This did not seem to have much effect; they’d agree their eval was imperfect and they intended to put up a disclaimer, but it never happened.

In late 2016 a scathing anti-animal-EA piece was published on Medium, making many incendiary accusations, including that the leafleting numbers are made up. I wouldn’t call that post very epistemically virtuous; it was clearly hoping to inflame more than inform. But within a few weeks (months?), ACE put up a disavowal of the leafletting numbers.

I unfortunately can’t look up the original correction or when they put it up; archive.org behaves very weirdly around animalcharityevaluators.org. As I remember, the correction made the page less obviously false, but the disavowal was tepid and not a real fix. Here’s the 2022 version:

There are two options here: ACE was right about leafleting, and caved to public pressure rather than defend their beliefs. Or ACE was wrong about leafleting (and knew they were wrong, because they conceded in private when challenged) but continued to publicly endorse it.

Why I Care

I’ve thought vegan advocates were advocating falsehoods and stifling truthseeking for years. I never bothered to write it up, and generally avoided public discussion, because that sounded like a lot of work for absolutely no benefit. Obviously I wasn’t going to convince the advocates of anything, because finding the truth wasn’t their goal, and everyone else knew it so what did it matter? I was annoyed at them on principle for being wrong and controlling public discussion with unfair means, but there are so many wrong people in the world and I had a lot on my plate. 

I should have cared more about the principle.

I’ve talked before about the young Effective Altruists who converted to veganism with no thought for nutrition, some of whom suffered for it. They trusted effective altruism to have properly screened arguments and tell them what they needed to know. After my posts went up I started getting emails from older EAs who weren’t getting the proper testing either; I didn’t know because I didn’t talk to them in private, and we couldn’t discuss it in public. 

Which is the default story of not fighting for truth. You think the consequences are minimal, but you can’t know because the entire problem is that information is being suppressed. 

What do EA vegan advocates need to do?

  1. Acknowledge that nutrition is a multidimensional problem, that veganism is a constraint, and that adding constraints usually makes problems harder, especially if you’re already under several.
  2. Take responsibility for the nutritional education of vegans you create. This is not just an obligation, it’s an opportunity to improve the lives of people who are on your side. If you genuinely believe veganism can be nutritionally whole, then every person doing it poorly is suffering for your shared cause for no reason.
    1. You don’t even have to single out veganism. For purposes of this point I’ll accept “All diet switches have transition costs and veganism is no different, and the long term benefits more than compensate”. I don’t think your certainty is merited, and I’ll respect you more if you express uncertainty, but I understand that some situations require short messaging and am willing to allow this compression.
  3. Be epistemically cooperative, at least within EA spaces. I realize this is a big ask because in the larger world people are often epistemically uncooperative towards you. But obfuscation is a symmetric weapon and anger is not a reason to believe someone. Let’s deescalate this arms race and have both sides be more truthseeking.

    What does epistemic cooperation mean?
    1. Epistemic legibility. Make your claims and cruxes clear. E.g. “I don’t believe iron deficiency is a problem because everyone knows to take supplements and they always work” instead of “Why are you bothering about iron supplements?”
    2. Respond to the arguments people actually make, or say why you’re not. Don’t project arguments from one context onto someone else. I realize this one is a big ask, and you have my blessing to go meta and ask for work from the other party to make this viable, as long as it’s done explicitly.
    3. Stop categorically dismissing omnivores’ self-reports. I’m sure many people do overestimate the difficulties of veganism, but that doesn’t mean it’s easy or even possible for everyone.
      1. A scientific study, no matter how good, does not override a specific person telling you they felt hungry at a specific time. 
    4. If someone makes a good argument or disproves your source, update accordingly. 
  4. Police your own. If someone makes a false claim or bad citation while advocating veganism, point it out. If someone dismisses a detailed self-report of a failed attempt at veganism, push back. 

All Effective Altruists need to stand up for our epistemic commons

Effective Altruism is supposed to mean using evidence and reason to do the most good. A necessary component of that is accurate evidence. All the spreadsheets and moral math in the world mean nothing if the input is corrupted. There can be no consequentialist argument for lying to yourself or allies[1], because without truth you can’t make accurate utility calculations[2]. Garbage in, garbage out.

One of EA’s biggest assets is an environment that rewards truthseeking more than average. Without uniquely strong truthseeking, EA is just another movement of people who are sure they’re right. But high-truthseeking environments are fragile, exploiting them is rewarding, and the costs of violating them are distributed and hard to measure. The only way EA’s environment has a chance of persisting is if the community makes preserving it a priority. Even when it’s hard, even when it makes people unhappy, and even when the short-term rewards of defection are high.

How do we do that? I wish I had a good answer. The problem is complicated and hard to reason about, and I don’t think we understand it enough to fix it. Thus far I’ve focused on vegan advocacy as a case study in destruction of the epistemic commons because its operations are relatively unsophisticated and easy to understand. Next post I’ll be giving more examples from across EA, but those will still have a bias towards legibility and visibility. The real challenge is creating an epistemic immune system that can fight threats we can’t even detect yet. 


Acknowledgments

Thanks to the many people I’ve discussed this with over the past few months. 

Thanks to Patrick LaVictoire and Aric Floyd for beta reading this post.

Thanks to Lightspeed Grants for funding this work. Note: a previous post referred to my work on nutrition and epistemics as unpaid after a certain point. That was true at the time and I had no reason to believe it wouldn’t stay true, but Lightspeed launched a week after that post and was an unusually good fit so I applied. I haven’t received a check yet but they have committed to the grant so I think it’s fair to count this as paid. 

Appendix

Terrible anti-meat article

  • The body of the paper is an argument between two people, but the abstract only includes the anti-animal-product side.
  • The “saturated fat” and “cholesterol” sections take as a given that any amount of these is bad, without quantifying or saying why. 
  • The “heme iron” section does explain why excess iron is bad, but ignores the risks of too little. Maybe he also forgot women exist? 
  • The lactose section does cite two papers, one of which does not support his claim, and the other of which is focused on mice who received transplants. It probably has a bunch of problems but it was too much work to check, and even if it doesn’t, it’s about a niche group of mice. 
  • The next section claims milk contains estrogen and thus raises circulating estrogen, which increases cancer risk.
    • It cites one paper supporting a link with breast cancer. That paper found a correlation with high fat but not low fat dairy, and the correlation was not statistically significant. 
    • It cites another paper saying dairy impairs sperm quality. This study was done at a fertility clinic, so will miss men with healthy sperm counts and is thus worthless. Ignoring that, it found a correlation of dairy fat with low sperm count, but low-fat dairy was associated with higher sperm count. Again, among men with impaired fertility. 
  • The “feces” section says that raw meat contains harmful bacteria (true), but nothing about how that translates to the risks of cooked meat.

That’s the first five subsections. The next set maybe look better sourced, but I can’t imagine them being good enough to redeem the paper. I am less convinced of the link between excess meat and health issues than I was before I read it, because surely if the claim was easy to prove the paper would have better supporting evidence, or the EA Forum commenter would have picked a better source.

[Note: I didn’t bother reading the pro-meat section. It may also be terrible, but this does not affect my position.] 

  1. “Are you saying I can’t lie to Nazis about the contents of my attic?” No more so than you’re banned from murdering them or slashing their tires. Like, you should probably think hard about how it fits into your strategy, but I assumed “yourself or allies” excluded Nazis for everyone reading this. 

    “Doesn’t that make the definition of enemies extremely morally load bearing?” It reflects that fact, yes. 

    “So vegan advocates can morally lie as long as it’s to people they consider enemies?”  I think this is, at a minimum, defensible and morally consistent. In some cases I think it’s admirable, such as lying to get access to a slaughterhouse in order to take horrifying videos. It’s a declaration of war, but I assume vegan advocates are proud to declare the meat industry their enemy. ↩︎
  2. I’ll allow that it’s conceptually possible to make deontological or virtue ethics arguments for lying to yourself or allies, but it’s difficult, and the arguments are narrow and/or wrong. Accurate beliefs turn out to be critical to getting good outcomes in all kinds of situations.  ↩︎

Edits

You will notice a few edits in this post, which are marked with the edit date. The original text is struck through.

When I initially published this post on 2023-09-28, several images failed to copy over from the google doc to the shitty WordPress editor. These were fixed within a few hours.

I tried to link to sources for every screenshot (except the Facebook ones). On 2023-10-05 I realized that a lot of the links were missing (but not all, which is weird) and manually added them back in. In the process I found two screenshots that never had links, even in the google doc, and fixed those. Halfway through this process the already shitty editor flat out refused to add links to any more images. This post is apparently already too big for WordPress to handle, so every attempted action took at least 60 seconds, and I was constantly afraid I was going to make things worse, so for some images the link is in the surrounding text. 

If anyone knows of a blogging website that will gracefully accept cut and paste from google docs, please let me know. That is literally all an editor takes to be a success in my book and last time I checked I could not find a single site that managed it.

Seeing Like A State, Flashlights, and Giving This Year

Note (7/15/19): I’m no longer sure about Tostan as an organization. I would like to give more details on my current thinking, but they are hard to articulate and it seemed better to put up this disclaimer now than wait for my thinking to solidify.

Overview: The central premise of Seeing Like A State (James C. Scott, 1998) is that the larger an organization is, the less it can tolerate variation between parts of itself.  The subparts must become legible.  This has an extraordinary number of implications for modern life, but I would like to discuss the application to charity in particular.  I believe Tostan is pushing forward the art and science of helping people with problems that are not amenable to traditional RCTs, and recommend donating to them.  But before you do that, I recommend picking a day and a time to consider all of your options.

Legibility is easier to explain with examples, so let’s start with a few: 

  • 100 small farmers can learn their land intimately and optimize their planting and harvest down to the day, using crop varieties that do especially well for their soil or replenish nutrients it’s particularly poor in.  Large agribusinesses plant the same thing over thousands of acres on a tight schedule, making up the difference in chemical fertilizer and lowered expectations.
  • The endless mess of our judicial system, where mandatory sentencing ignores the facts of the case and ruins people’s lives, but judicial discretion seems to disproportionately ruin poor + minority lives.  
  • Nation-states want people to have last names and fixed addresses for ease of taxation, and will sometimes force the issue.
  • Money.  This is the whole point of money.

Legibility means it’s not enough to be good, you must be reliably, predictably good.*

I want to be clear that legibility and interchangeability aren’t bad.  For example, standardized industrial production of medications allows the FDA to evaluate studies more cleanly, and to guarantee the dosage and purity of your medication.  On the other hand, my pain medication comes in two doses, “still hurts some” and “side effects unacceptable”, and splitting the pills is dangerous.  

Let’s look at how this applies to altruism.  GiveWell’s claim to fame is demanding extremely rigorous evidence to make highly quantitative estimates of effectiveness. I believe they have done good work on this, if only because it is so easy to do harm that simply checking you’re having a positive effect is an improvement.  But rigor will tend to push you towards legibility.   

  • The more legible something is, the easier it is to prove its effectiveness.  Antibiotics are easy.  Long term dietary interventions are hard.
  • Legible things scale better/scaling imposes legibility.  There’s a long history of interventions with stunning pilots that fail to replicate.  This has a lot of possible explanations:
    • Survivorship bias
    • People who do pilots are a different set than people who do follow up implementations, and have a verve that isn’t captured by any procedure you can write down.
    • A brand new thing is more likely to be meeting an underserved need than a follow up.  Especially when most evidence is in the form of randomized control trials, where we implicitly treat the control group as the “do nothing group”.  There are moral and practical limits to our ability to enforce that, and the end result is that members of the “control group” for one study may be receiving many different interventions from other NGOs.  This is extremely useful if you are answering questions like “Would this particular Indian village benefit from another microfinance institution?”, but of uncertain value for “would this Tanzanian village that has no microfinance benefit from a microfinance institution?”
    • For more on this see Chris Blattman’s post on evaluating ideas, not programs, and James Heckman on Econtalk describing the limits of RCTs.

GiveWell is not necessarily doing the wrong thing here.  When you have $8b+ to distribute and staff time is your most limited resource, focusing on the things that do the most good per unit staff time is correct.

Meanwhile, I have a friend who volunteers at a charity that helps homeless families reestablish themselves in the rental market. This organization is not going to scale, at all. Families are identified individually, and while there are guidelines for choosing who to assist there’s a lot that’s not captured, and a worse social worker would produce worse results.  Their fundraising is really not going to scale; it’s incredibly labor intensive and done mostly within their synagogue, meaning it is drawing on a pool of communal good will with very limited room for expansion.

Theoretically, my friend might make a bigger difference stuffing envelopes for AMF than they do at this homelessness charity.  But they’re not going to stuff envelopes for AMF because that would be miserable.  They could work more at their job and donate the money, but even assuming a way to translate marginal time into more money, work is not necessarily overflowing with opportunities to express their special talents either.

Charities do not exist to give volunteers opportunities to sparkle.  But the human desire to autonomously do something one is good at is a resource that should not be wasted. It can turn uncompetitive uses of money into competitive ones.  It’s also a breeding ground for innovation.  GiveDirectly has done fantastically with very deliberate and efficient RCTs, but there are other kinds of interventions that are not as amenable to them.

One example is Médecins Sans Frontières.  Leaving half of all Ebola outbreaks untreated in order to gather better data is not going to happen.  Even if it was, MSF is not practicing a single intervention; they’re making hundreds of choices every day.  85% of American clinical trials fail to retain “enough” patients to produce a meaningful result, and those are single interventions on a group that isn’t experiencing a simultaneous cholera epidemic and civil war.  MSF is simply not going to get as clean data as GiveDirectly.

This is more speculative, but I feel like the most legible interventions are using something up.  Charity Science: Health is producing very promising results with  SMS vaccine reminders in India, but that’s because the system already had some built in capacity to use that intervention (a ~working telephone infrastructure, a populace with phones, government health infrastructure, medical research that identified a vaccine, vaccine manufacture infrastructure… are you noticing a theme here?).  This is good.  This is extremely good.  Having that capacity and not using it was killing people.  But I don’t think that CS’s intervention style will create much new capacity.  For that you need inefficient, messy, special snowflake organizations.  This is weird because I also believe in iterative improvement much more than I believe in transformation and it seems like those should be opposed strategies, but on a gut level they feel aligned to me.

Coming at this from another angle: The printing press took centuries to show a macroeconomic impact of any kind (not just print or information related).  The mechanical loom had a strong and immediate impact on the economy, because the economy was already set up to take advantage of it.  And yet the printing press was the more important invention, because it eventually enabled so much more.  

I know of one charity that I am confident is building capacity: Tostan.  Tostan provides a three year alternative educational series to rural villages in West Africa.  The first 8 months are almost entirely about helping people articulate their dreams.  What do they want for their children? For their community?  Then there is some health stuff, and then two years teaching participants the skills they need to run a business (literacy, numeracy, cell phone usage, etc), while helping them think through what is in line with their values.

Until recently Tostan had very little formal data collection.  So why am I so confident they’re doing good work?  Well, for one, the Gates Foundation gave them a grant to measure the work and initial results are very promising, but before that there were other signs.

First, villages ask Tostan to come to them, and there is a waitlist.  Villages do receive seed money to start a business in their second year, but 6-9 hours of class/week + the cost of hosting their facilitator is kind of a long game. 

Second, Tostan has had a few very large successes in areas with almost no competitors.  In particular: female circumcision.  Tostan originally didn’t plan on touching the concept, because the history of western intervention in the subject is… poor.  It’s toxic and it erodes relationships between beneficiaries and the NGOs trying to help them, because people do not like being told that their cherished cultural tradition, which is necessary for their daughters to be accepted by the community and get good things in their life, is mutilating them, and western NGOs have a hard time discussing genital cutting as anything else.  But Tostan taught health, including things that touched on culture.  E.g. “If your baby’s head looks like this she is dehydrated and needs water with sugar and salt.  Even if they have diarrhea.  I know it seems weird to pump water into a baby that can’t keep it in, but this is what works.  Witch doctors are very good at what they do, but please save them for witch doctor problems.”  

And one day, someone asked about genital cutting.

[One of Tostan’s innovations is using the neutral term “female genital cutting”, as opposed to circumcision, which many people find to be minimizing, and mutilation, which others find inflammatory]

It’s obvious to us that cutting off a girl’s labia or clitoris with an unsterilized blade, and (depending on the culture) sewing them shut is going to have negative health consequences.  But if everyone in your village does it, you don’t have anything to compare it to.  Industrial Europeans accepted childbed fever as just a thing that happened despite having much more available counterevidence.*  So when Tostan answered their questions honestly- that it could lead to death or permanent pain at the time, and greatly increases the chances of further damage during childbirth- it was news.

The mothers who cut their daughters were not bad people.   If you didn’t know the costs, cutting was a loving decision.  But once these women knew, they couldn’t keep doing it, and they organized a press conference to say so.  To be clear, this was aided by Tostan but driven by the women themselves.

The press conference went… poorly.  A village deciding not to cut was better than a single mother deciding not to cut, but it wasn’t enough.  Intermarriage between villages was common and the village as a whole suffered reprisal.  In despair Tostan’s founder, Molly Melching, talked to Demba Diawara, a respected imam.  He explained the cultural issues to her, and that the only way to end cutting was for many villages to end it at the same time.  So Tostan began helping women to organize mass refusals, and it worked.  So far almost 8000 villages in West Africa have declared an end to genital cutting, of which ~2000 come from villages that directly participated in Tostan classes (77% of villages that practice cutting that took part in Tostan), and ~6000 are villages adopted by the first set.

Coincidentally, at the same time Melching was testing this, Gerry Mackie, a graduate student, was researching footbinding in China and discovered it ended the exact same way: coordinated mass pledges to stop.  

This is not conclusive.  Maybe it’s luck that Melching’s method consistently ended female genital cutting where everyone else had failed, in a method that subsequently received historical validation.  But I believe in following lucky generals.

FGC is not the only issue Tostan believes it improves.  It believes it facilitates systemic change across the board, leading to better treatment of children, more independence for women, cleaner villages, and more economic prosperity.  But it doesn’t do everything in every village, because each village’s needs are different, and because what they provide is responsive to what the community asks for.  So now you’re measuring 100 different axes, some of which take a long time to generate statistically significant data on (e.g. child marriage), some of which are intrinsically difficult to measure (women’s independence), and you can’t say ahead of time which axes you expect to change in a particular sample.  This is hard to measure, and not because Tostan is bad at measuring.  

That’s not to say they aren’t trying.  Thanks to a grant from the Gates Foundation, Tostan has begun before and after surveys to measure its effect.  In addition to the difficulties I mentioned above, it faces technical challenges, language issues, and the difficulty of getting honest answers about sensitive questions.  

There is a fallacy called the streetlight fallacy; it refers to looking for your keys under the lamppost, where there is light, rather than in the dark alley where you lost your keys.  The altruism equivalent is doing things that are legible, instead of following the need.  This is not categorically wrong- when it’s easy to do harm, it is correct to stay in areas where you’ll at least know if it happened.  But staying in the streetlight forever means leaving billions of people to suffer.

I believe Tostan is inventing flashlights so we can hunt for our keys in the woods.  It is hard, and it is harder to prove its effectiveness.  But ultimately it leads to the best outcomes for the world.  I am urging people to donate to Tostan for several reasons:

  1. To support a program that is object level doing a lot of good
  2. To support the development of flashlight technology that will help others do more good.
  3. To demonstrate to the warmest, fuzziest, most culturally respecting of charities that incorporating hard data will get them more support, not less.

The traditional thing to do right now to encourage you to donate would be a matching pledge.  But more than I want money moved to Tostan, I want a culture of thoughtful giving, and charity-specific matching erodes that*.  Probably its best feature is that it can overcome inertia, but it does that regardless of charity quality.  So instead, let me encourage you to put time on your calendar to decide how much and where you will donate.  Seriously, right now.  If you can’t choose a time, choose a time to choose a time.  For those with company matching and tax concerns, this is noticeably more useful if it happens before Christmas.

If you are feeling extra motivated consider hosting a donation decision day or giving game.  If you would like to publicize your event, contact me at elizabeth @ this domain and I will post it here and to any contacts I have in your city.  

I also encourage you to write up your thought process regardless of the outcome, including not donating, and including thought patterns that are very different from my own or from established orthodoxy.  For some examples, see my posts in 2014 and 2015.  I will write up a separate post with every one of these someone sends me, assuming I’m sent any at all, which is not guaranteed.

The other prosocial purpose of matching challenges is to demonstrate how important you think an organization is by spending your own money.  I am going to skip the middle man and announce my contribution now: $19,750, plus $19,750 in company matching*, for a total of $39,500.  This is everything I plan on donating between now and the end of 2017.

*I have a theory that much of the misery of modern jobs is from a need to make your work legible to others, which by necessity means doing things that are expected of the position, even if you’re bad at or dislike them, and shaving off the bits that you are especially good at and other people aren’t.  You may not even be allowed to do the things you are best at, and if you are the rewards are muted because no one is in a position to notice and reward the success.  This is pretty much a recipe for making yourself miserable.  It made me miserable at a large programming house famous for treating its employees wonderfully.  I think that company’s reputation is overblown as an absolute measure, but is probably still fair on a relative one, so I can only imagine how awful working in fast food is.  This does not actually have a lot to do with the point of this essay and will probably be cut in the version that goes on Tostan’s blog, but it was too interesting not to include.

*Postpartum infections were common in births attended by a physician because washing your hands between an autopsy and a birth was considered peasant superstition.  Midwives, who followed the superstition, had a lower death rate.  This discovery languished in part because the doctor who discovered it was an asshole and no one wanted to listen to him, and that’s why I don’t allow myself to dismiss ideas from people just because I don’t like them.    

*Charity-neutral matching, like that done by many employers, mostly doesn’t, although I worry it does anchor people’s charity budgets.

*If you are wondering why the number is weird: I donated $250 to a giving game earlier this year.

Relationship disclosures:  

Tostan’s Director of Philanthropy, Suzanne Bowles, has provided assistance on this post, in the form of answering questions about Tostan and reviewing this document (although she did not have veto power).  Suzanne and I have a friendly relationship and she has made some professional introductions for me.

I have several close friends who work or have worked for GiveWell, some of whom provided comments on this essay.  

Thanks to Justis Mills for copy editing and Ben Hoffman for feedback on earlier drafts.

Review: King Leopold’s Ghost (Adam Hochschild)

King Leopold’s Ghost has the most compelling opening I have ever read:


The beginnings of this story lie far back in time, and its reverberations still sound today. But for me a central incandescent moment,  one that illuminates long decades before and after, is a young man’s flash of moral recognition.

The year is 1897 or 1898.  Try to imagine him, briskly stepping off a cross-Channel steamer, a forceful, burly man in his mid-twenties, with a handlebar mustache.  He is confident and well spoken, but his British speech is without the polish of Eton or Oxford.  He is well dressed, but the clothes are not from Bond Street.  With an ailing mother and a wife and growing family to support, he is not the sort of person likely to get caught up in an idealistic cause.  His ideas are thoroughly conventional.  He looks-and is- every inch the sober, respectable business man.

Edmund Dene Morel is a trusted employee of a Liverpool shipping line.  A subsidiary of the company has the monopoly on all transport of cargo to and from the Congo Free State, as it is then called, the huge territory in central Africa that is the world’s only colony claimed by one man.  That man is King Leopold II of Belgium, a ruler much admired throughout Europe as a “philanthropic” monarch.  He has welcomed Christian missionaries to his new colony; his troops, it is said, have fought and defeated local slave-traders who preyed on the population; and for more than a decade European newspapers have praised him for investing his personal fortune in public works to benefit the Africans.

Because Morel speaks fluent French, his company sends him to Belgium every few weeks to supervise the loading and unloading of ships on the Congo run.  Although the officials he works with have been handling this shipping traffic for years without a second thought, Morel begins to notice things that unsettle him.  At the docks of the big port of Antwerp he sees his company’s ships arriving filled to the hatch covers with valuable cargoes of rubber and ivory.  But as they cast off their hawsers to steam back to the Congo, while military bands play on the pier and eager young men in uniform line the ships’ rails, what they carry is mostly army officers, firearms, and ammunition.  There is no trade going on here.  Little or nothing is being exchanged for the rubber and ivory.  As Morel watches these riches streaming to Europe with almost no goods being sent to Africa to pay for them, he realizes there can be only one explanation for their source: slave labor.

Brought face to face with evil, Morel does not turn away.  Instead, what he sees determines the course of his life and course of an extraordinary movement, the first great international human rights movement of the twentieth century.  Seldom has one human being- impassioned, eloquent, blessed with brilliant organizing skills and nearly superhuman energy- managed almost single-handedly to put one subject on the world’s front pages for more than a decade.  Only a few years after standing on the docks of Antwerp, Edmund Morel would be at the White House, insisting to President Theodore Roosevelt that the United States had a special responsibility to do something about the Congo.  He would organize delegations to the British Foreign Office.  He would mobilize everyone from Booker T. Washington to Anatole France to the Archbishop of Canterbury to join his cause.  More than two hundred mass meetings to protest slave labor in the Congo would be held across the United States.  A larger number of gatherings in England- nearly three hundred a year at the crusade’s peak- would draw as many as five thousand people at a time.  In London, one letter of protest to the Times on the Congo would be signed by eleven peers, nineteen bishops, seventy-six members of Parliament, the presidents of seven Chambers of Commerce, thirteen editors of major newspapers, and every lord mayor in the country.  Speeches about the horrors of King Leopold’s Congo would be given as far away as Australia.  In Italy, two men would fight a duel over the issue.  British Foreign Secretary Sir Edward Grey, a man not given to overstatement, would declare that “no external question for at least thirty years has moved the country so strongly and so vehemently.”

This is the story of that movement, of the savage crime that was its target, of the long period of exploration and conquest that preceded it, and of the way the world has forgotten one of the great mass killings of recent history.

This kind of thing is my heroism porn. Most movies are about people that set out to be heroes; they look at the costs and benefits and decide it is a trade off worth making.  That is great, and I don’t want to diminish it.  But they can build their lives around it, and that does reduce the costs.  What I find most affecting is people that were living ordinary lives who encounter something they cannot let stand, and don’t.  It was particularly touching in the case of Morel, who didn’t have to know what he knew.  Lots of people were on that dock and didn’t know or didn’t care.  He figured it out and switched tracks in his life when it would have been easy to pretend everything was okay. Everyone I talked to for the last two weeks heard how beautiful I found that.   I used the story to talk myself into doing things that were a little bit hard because they were so much less hard than what Morel did.

Here’s the story I told:  Under a humanitarian guise that fooled most Europeans at the time, Leopold created a form of slavery even worse than that of North America or even the Caribbean.  Men were worked to death attempting to free their wives and children from slavery.  Against that, Edmund Morel and an increasing number of allies publicize the atrocities until Leopold backs down.  

This would be a really good story, and it’s what I thought was happening for most of the book, even while my knowledge that the modern Congo isn’t all sunshine and roses gnawed at me.  

In the last hour, it gets more complicated.  Yes, slavery went away and the rubber harvest (driver of much of the atrocities) declined.  But… the rubber decline could have been caused entirely by cultivated rubber farms coming online.  And while Belgium may have stopped anything called slavery, they got about the same amount of financial value for about the same amount of violence out of their taxation system.  I realize the phrases “taxation is slavery” and “taxation is theft” are fairly loaded, but I think everyone can agree that people coming in from elsewhere to demand taxes and provide nothing of value to their subjects is Bad.  

And the statistics that make the Congo look particularly bad are mostly an artifact of size.  Per capita the other European powers in Africa were just as bad, and at the same time England (Morel’s home) was exterminating aborigines in Australia and America was going scorched earth on the Philippines (plus its usual policy towards American Indians).  

I could forgive Morel for advocating for a gentler form of colonialism.  People can only be so much better than their time, and a more correct person possibly couldn’t have accomplished as much because no one would listen to them.  But my admiration for this man was very tied to the fact that he saw something he didn’t have to see, and chose to pursue it.  That admiration doesn’t survive if he was blinding himself to similar atrocities closer to home- especially when a great deal of African colonization, including Leopold’s rape of the Congo, was done under the guise of protecting Africans from Arab slave traders.

We don’t know that Morel did nothing.  He went on to lead the pacifist movement against WW1, which was probably the right side, but it’s even harder to argue he changed history for the better there.  But we don’t know that he did something, either.

This is a disappointing ending for a man for whom I was well into planning a Petrov Day-style holiday.  He did better than average at seeing the horrors in front of him, but still not the ones that were done by his in-group.  It’s debatable if he accomplished anything.  He still sacrificed a lot, but I’m not prepared to valorize that alone.  It’s not even a good effective altruist cautionary tale, because even with 100 years of hindsight it’s not clear what he could have done better.  Even focusing on Leopold’s horrors instead of England’s might have been the correct decision, since it let him gather stronger allies.

The book is beautifully written and read.  For whatever reason I was sadder and less callous listening to this than I am to most atrocities- maybe it was the writing, maybe because it was entirely new to me and I hadn’t had time to develop a shell.  And as heartbroken as I was to have my new hero brought down, I really admire the book for being willing to include that complexity when it could have gotten away with ignoring it.  So I can’t recommend it highly enough, assuming you want to be very sad.

Parenthetical Reference ends three or four unending debates in and around EA in a single stroke.

…it’s the difference between “tzedakah”, which is a mitzvah, a commitment I have to making the world better, and where EA analysis is really important, and “generosity”, which is about being kind to the people around me.

Generosity is when my friend’s family has a health crisis and I come over with $100 worth of takeout and frozen food. It’s also generosity when I support my local arts and/or religious communities, and when I go out of my way to financially support free media. Generosity is good and we should feel good about it. It’s one of the ways we live our values. It can be personal and subjective and can be about feelings as much as ROI. In fact, it is inherently subjective, and the right specific generous acts should be different for different people, because they are distributed like tastes, interests, friendships, communities, and other personal attachments.

Tzedakah is deciding to donate 10% of my income to saving lives in the developing world, and doing my research to make sure it’s doing as much good as possible. Tzedakah is saying BED NETS BED NETS BED NETS. Tzedakah is a sense of urgency to make the world better for people I will never meet and who will never know or care about me personally.2 Tzedakah isn’t a corner I want to cut to buy something nice for myself.3

“What about the arts?” Sure, generosity.  But don’t cut your bednet budget for it.
“Donating based on numbers ruins the make-the-donor-a-better-person function of charity.” It arguably taints generosity but not tzedakah.
“I don’t need to feel guilty not donating to help my friend’s cousin coming back from Iraq because it’s more effective to…” No, you don’t need to feel guilty, because when and how to be generous is a personal choice.  Stop arguing it’s objectively wrong.

I’m so glad we could clear this up.