In Defense Of The Sunk Cost Fallacy

Dutch disease is the economic concept that if a country is too rich in one thing, especially a natural resource, every other sector of the economy will rot because all available money and talent will flow towards that sector.  Moreover, that sector dominates the exchange rate, making all other exports uncompetitive.*  It comes up in foreign development a lot because charitable aid can cause Dutch disease: by paying what the funders would consider a “fair wage”, charities position themselves as by far the best employers in the area.  The best and the brightest African citizens end up chauffeuring foreigners rather than starting their own businesses, which keeps the society dependent on outside help.  Nothing good comes from having poverty as your chief export.

I posit that a similar process takes place in corporations.  Once they are making too much money off a few major things (Windows, Office, AdWords, SUVs), even an exceptionally profitable project in a small market is too small to notice.  Add in the risk of reputation damage and the fact that all projects have a certain amount of overhead regardless of size, and it makes perfect sense for large companies to discard projects a startup would kill for (RIP Reader).**

That’s a fine policy in moderation, but there are problems with applying it too early.  Namely, you never know what something is going to grow into.  Google search originally arose as a way to calculate impact for academic papers. The market for SUVs (and for that matter, cars) was 0 until someone created it.  If you insist on only going after projects that directly address an existing large market, the best you’ll ever be is a fast follower.***

Simultaneously, going from zero to an enormous, productive project is really, really hard (see: Fire Phone, Google+, Facebook’s not-an-operating-system).  Even if you have an end goal in mind, it often makes sense to start small and iterate.  Little Bets covers this in great detail.  And if you don’t have a signed card from G-d confirming your end goal is correct, progressing in small iterative steps gives you more information and more room to pivot.

More than one keynote at EA Global talked about the importance of picking the most important thing, and of being willing to switch if you find something better.  That’s obviously great in some cases, but I worry that this hyperfocusing will cause the same problems for us that it does at large companies: a lack of room to surprise ourselves.  For example, take the post I did on interpretive labor.  I was really proud of that post.  I worked hard on it.  I had visions of it helping many people in their relationships.  But if you’d asked at the time, I would have predicted that the Most Effective use of my time was learning programming skills to increase my wage or increase my value in direct work, and that that post was an indulgence.  It never in my wildest dreams occurred to me it would be read by someone in a far better position than me to do something about existential risk, and be useful to them in connecting two key groups that weren’t currently talking to each other, but apparently it was.  I’m not saying that I definitely saved us from papercliptopia, but it is technically possible that that post (along with millions of other flaps of butterfly wings) will make the marginal difference.  And I would never have even known it did so except the person in question reached out to me at EA Global.****

Intervention effectiveness may vary by several orders of magnitude, but if the confidence intervals are just as big it pays to add a little wiggle to your selection.  Moreover, constant project churn has its own cost: it’s better to finish the third-best thing than to have two half-finished attempts at different best things.  And you never know what a third-best project will teach you that will help an upcoming best project- most new technological innovations come from combining things from two different spheres (source), so hyperfocus will eventually cripple you.

In light of all that, I think we need to stop being quite so hard on the sunk cost fallacy.  No, you should not throw good money after bad, but constantly re-evaluating your choices is costly and (jujitsu flip) will not always be the most efficient use of your resources.  In the absence of a signed piece of paper from G-d, biasing some of your effort towards things you enjoy and have comparative advantage in may in fact be the optimal strategy.

Using your own efficiency against you

My hesitation is that I don’t know how far you can take this before it stops being effective altruism and starts being “feel smug and virtuous about doing whatever it is you already wanted to do”- a thing we’re already accused of doing.  Could someone please solve this and report back?  Thanks.

* The term comes from the Dutch economic crash following the discovery of natural gas in The Netherlands.  Current thought is that it was not actually Dutch disease, but that renaming the phenomenon after some third world country currently being devastated by it would be mean.

**Simultaneously, developers have become worse predictors of the market in general. Used to be that nerds were the early adopters and if they loved it everyone would be using it in a year (e.g. gmail, smart phones).  As technology and particularly mobile advances, this is no longer true.  Nerds aren’t power users for tablets because we need laptops, but the tablet power user is a powerful and predictive market.  Companies now force devs to experience the world like users (Facebook’s order to use Android) or just outright tell them what to do (Google+).  This makes their ideas inherently less valuable than they were.  I don’t blame companies for shifting to a more user-driven decision making process, but it does make things less fun.

***Which, to be fair, is Microsoft’s actual strategy

****It’s also possible it accomplished nothing, or made things worse.  But the ceiling of effectiveness is higher than I ever imagined and the uncertainty only makes my point stronger.

Food Choices at EA Global

[EAGlobal was a wonderful experience that I haven’t written much about because my brain was too stuffed full of wonderfulness to produce anything useful.  I dislike that the first thing I’m writing about it is a controversy/complaint]

There’s a utilitarian thought experiment: would you rather have one person tortured for their entire life, or a googolplex of people experience a single dust mote in their eye?  I always viewed it as too theoretical to be anything but an ideological purity test, but I think I’m seeing a version of it in action right now, in the debate around serving animal products at EA Global.

You have a small number of animal rights activists saying “this is torturing and consuming a sentient being and that’s morally abhorrent”, and a much larger number of omnivores going “but seriously, they’re delicious”.  The ARAs don’t understand why aesthetic preferences are overriding morality (and either don’t believe that animal products are ever medically necessary or don’t believe that outweighs the cost to the animal), and the omnivores don’t see why such a small group is getting to override their preferences because of a principle they don’t believe in.

I think the moral weight of the ARAs’ concerns may actually be working against them here.  I don’t think many people would object if the organizers said “the local cuisine is vegan and shipping in meat is just too expensive, bring some in your luggage if you must.”  But the fact that the morality arguments exist and tend to resonate with people even if they don’t agree makes people defensive, and then aggressive.  Allowing the organizers to drop meat for morality reasons is an implicit endorsement of the idea that meat is indeed immoral, which has unpleasant implications for omnivores’ moral standing the rest of the week.  By the Copenhagen interpretation of ethics, better to deny that there is a problem than participate in an incomplete solution.

My original position, based mostly on the fact that I am simultaneously really bothered by and completely immune to ARAs’ disgust-based arguments, was that EA Global had made the right call: vegan or at least vegetarian options in the main line, a small amount of meat hidden off to the side.  But now that I think the insistence on meat is strongly Copenhagen-driven, I’ve changed my mind.  Admitting unpleasant things about ourselves and making incremental progress is supposed to be one of our things.

[By that same token I think ARAs should be a little happier about how much meat consumption was reduced that weekend, even if it didn’t go to zero.  But then, I’m an incrementalist]

At the same time, some people need animal products.  The definition of need is tricky here- my doctor has told me to eat small amounts of meat, and going three days without any would be fine, but in practice what was served at EA Global was too hard on my stomach and I wouldn’t have been able to eat enough calories from that alone.  Some people are on paleo and even if that wasn’t the healthiest choice, a sudden drop off in meat will be physically hard on them.  Some people have a lot of things they can’t eat such that meat is the easiest way to get them a nutritionally complete meal- especially when you have a lot of different people with a lot of different exclusions.  But even if meat were served, it’s impossible to fulfill 600 people’s dietary requirements with a reasonable amount of effort and money.  The best solution may have been to announce the menu ahead of time so people could plan, and then let the chips fall where they may.

But I think we can do one better.  My new favorite solution is to offer both meat and whatever vegans nominate as the best fake meat, with no way to distinguish between the two at the time.  Omnivores would be given one at random with a code that they could later use to register 1. how much they liked what they were served and 2. whether they think it was real meat or not.  If they really don’t like what they got they could go to a back room somewhere with their code and ask for the other one (still not telling them which they got).  The same back room could serve people who medically need meat and people who want the definitively vegan option.
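As a sketch, the random-assignment-plus-code scheme could look something like this (everything here- the code format, the rating scale, the function names- is hypothetical, not anything EA Global actually ran):

```python
import random
import secrets

def serve_dish(register, rng=random):
    """Randomly assign real meat or the fake-meat analogue; return an anonymous code."""
    dish = rng.choice(["meat", "fake meat"])
    code = secrets.token_hex(4)  # opaque code printed on the diner's ticket
    register[code] = {"served": dish, "rating": None, "guess": None}
    return code

def record_feedback(register, code, rating, guessed_meat):
    """Diner later reports how much they liked it and whether they think it was meat."""
    entry = register[code]
    entry["rating"] = rating  # e.g. 1-5
    entry["guess"] = "meat" if guessed_meat else "fake meat"

def guess_accuracy(register):
    """Fraction of responding diners who correctly identified what they were served."""
    rated = [e for e in register.values() if e["guess"] is not None]
    if not rated:
        return None
    return sum(e["guess"] == e["served"] for e in rated) / len(rated)
```

If omnivores guess at chance levels, the fake meat has passed a blind taste test; if they guess correctly but rate both equally, the aesthetic case for meat weakens anyway.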

This gives people who want but don’t need meat (and are able to eat !meat) a way to get it, and vegans a way to advance the cause of veganism, possibly further than they would get by banning it (by showing people how good !meat tastes).  In most circles neither side would find this adequate, but Experimenting and Using Data are What Effective Altruists Do, and I think that could convince/pressure enough people (on both sides) into it that it would be worth trying.

Mission Statement

As you may have guessed from the previous two posts, I’m not happy at my current job and looking for a better one.  Some of that is figuring out what kind of environment I work best in and some of it is developing skills, but I also needed to figure out what problems I wanted to solve.  This was where So Good They Can’t Ignore You was so helpful- it helped me realize I needed to look at what would make me feel most impactful, rather than what I would find entertaining.

I know what things interest me: health, poverty, education, psychology, video games, mental health, nutrition, medicine… but no one else seems to think these things are as linked as I do.  I think I finally figured out what they have in common, for me: they all waste potential.  People who could have done great things die, or don’t have the money to pursue them, or are too sick or pained, or no one will teach them the necessary skills.  That’s tragic.  That’s loss on an enormous scale.

I don’t find anti-poverty work as emotionally compelling as the intricacies of mental health or CFAR.  But if I bother to think it through, I realize there’s a lot of people who will never get to the level of calibrating their predictions because they’re starving, or because all of their mental energy is going to keeping them from starving.  In a very real sense, giving money to poor people is one of the most effective rationality-raising interventions possible.

So that’s my goal. Remove things that are keeping the most people from being all they can be.

IQ Tests and Poverty

Recently I read Poor Economics, which is excellent at doing what it promises: explaining the experimental data we have for what works and does not work in alleviating third world poverty, with some theorizing as to why.  If that sounds interesting to you, I heartily recommend it.  I don’t have much to add to most of it, but one thing that caught my eye was their section on education and IQ tests.

In Africa and India, adults believe that the return to education is S-shaped (meaning each additional unit of education is more valuable than the one before, at least up to a point).  This leads them to concentrate their efforts on the children that are already doing the best.  This happens at multiple levels- poor parents pick one child to receive an education and put the rest to work much earlier, teachers put more of their energy into their best students.  Due to a combination of confirmation bias and active maneuvering, the children of rich parents are much more likely to be picked as The Best, regardless of their actual ability.   Not only does this get them more education, but education is viewed as proof one is smart, so they’re double winners.  This leaves some very smart children of poor parents operating well below their potential.
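The allocation logic can be made concrete with a toy model (the logistic curve and the specific numbers here are my own illustrative assumptions, not from Poor Economics): if returns to education are S-shaped, a family with a fixed number of school-years to distribute gets more total return by concentrating them on one child than by splitting them evenly.

```python
import math

def lifetime_return(years, inflection=8, scale=100):
    """Toy S-shaped (logistic) return to years of schooling."""
    return scale / (1 + math.exp(-(years - inflection)))

# A family that can afford 12 total years of schooling for two children:
concentrated = lifetime_return(12) + lifetime_return(0)  # one child gets everything
split = lifetime_return(6) + lifetime_return(6)          # divided evenly
# Under this S-curve, concentrating wins.  Under concave (diminishing) returns,
# splitting evenly would win instead- which is why the belief, not just the
# poverty, drives the behavior.
```

The same arithmetic with a concave curve (say, square-root returns) flips the answer, which is the crux: whether parents should concentrate or spread investment depends entirely on the shape they believe the returns curve has.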

One solution to this is IQ tests.  Infosys, an Indian IT contractor, managed to get excellent workers very cheaply by giving IQ tests to adults and hiring those who scored well, regardless of education.  The authors describe experiments in Africa giving IQ tests to young children so that teachers will invest more in the smart but poor children.  This was one of the original uses of the SATs in America- identifying children who were very bright but didn’t have the money or connections to go to Ivy League feeder high schools.

This is more or less the opposite of how critics view standardized testing in the US.  They believe the tests are culturally biased such that a small sliver of Americans will always do better, and that basing resource distribution on those tests disenfranchises the poor and people outside the white suburban subculture.  What’s going on here?

One possible explanation is that one group or the other is wrong, but both sides actually have pretty good evidence.  The IQ tests are obviously being used for the benefit of very smart poor children in the 3rd world.  And even tests without language can’t get around the fact that being poor takes up brainspace, and so any test will systematically underestimate poor children. So let’s assume both groups are right at least some of the time.

Maybe it’s the difference in educational style that matters?  In the 3rd world, teachers are evaluated based on their best student.  In the US, No Child Left Behind codified the existing emphasis on getting everyone to a minimum benchmark.  Kids evaluated as having lower potential than they actually do may receive less education than they should, but they still get some, and in many districts gifted kids get the least resources of any point on the bell curve.

Or it could be because the tests are trying to do very different things.  The African and Indian tests are trying to pick out the extremely intelligent who would otherwise be overlooked.  The modern US tests are trying to evaluate every single student and track them accordingly.  When the SATs were invented they had a job much like the African tests; as more and more people go to college its job is increasingly to evaluate the middle of the curve.  It may be that these are fundamentally different problems.

This has to say something interesting about the meaning of intelligence or usefulness of education, but I’m not sure what.

Status Through Disbelief

Reading The Remedy, or really anything about the time after formalized western medicine but before the germ theory of disease, is an exercise in terror or frustration.  How could anyone think attending a childbirth with autopsy gunk on your hands was a good idea?  Or leeches.  Who looked at those and said “I’ll bet those will make people healthier”?

My first reaction reading The Colony, about a Hawaiian leper colony founded shortly after the germ theory became entrenched, was “oh no doctors, you overapplied the lesson.”  Leprosy has an epidemiology a lot like tuberculosis: long periods between infection and symptoms, and an ease of spreading that means everyone is constantly exposed to it.  This makes it look like an inborn condition, not a contagion.  Leprosy and TB are actually pretty closely related too.  I assumed that doctors looked at their failure with TB and overcorrected.  It didn’t work because only a small fraction of people are susceptible, and (it’s implied although never stated outright) they will be exposed to it whether symptomatic patients are quarantined or not.

Then I remembered that shunning lepers* predates germ theory by a couple of thousand years.   Ancient and medieval people were completely capable of identifying disease as contagious and instituting a separation.  So why didn’t industrial-age doctors?

Then I remembered that while the peasantry considered it obvious that disease was contagious and should be shunned, they considered it equally obvious that leprosy was punishment from God for sin and the black plague could be avoided by killing Satan’s minions, the cats.  Nobody talks about all the things everyone knew that doctors correctly disbelieved in.

Without a lot of proof, I strongly suspect that doctors signaled intellectual rigor and membership in the medical class by disbelieving things the peasantry believed.  Believing things the peasantry does believe doesn’t signal either of those things even if the belief is correct.  No one gets credit for believing eating food is good and eating Belladonna is bad.  If you’re not very careful in that environment, it’s easy for peasants’ belief in something to become evidence against it.

This is similar to the process of the toxoplasma of rage, in which people signal membership in an ingroup by loudly believing its most dubious claims.  I also highly suspect it’s what’s going on with dietary constraints and toxins.  It is obviously true that what you eat matters, some things you put in your body will damage your cells, getting rid of them is good, and there are things you can take to get rid of them.  It’s called heavy metal poisoning and chelation.  Or if you’re Huey the dopamine dog, chocolate and activated charcoal.  But dietary constraints and belief that specific things were bad for you got associated with special snowflakeness, so you can signal intellectual rigor by dismissing them.  This despite the fact that nutrition obviously makes a difference in your health, and that humans vary across many dimensions with no reason to assume they wouldn’t vary across digestion and nutritional needs.  Likewise, things we put in our mouths obviously have the capacity to hurt us, and there’s no reason to assume we have an exhaustive list of those, or that they’re identical across all humans.

In D&D terms: people are advertising their will save bonus by how credible an idea they can disbelieve.  No one wants to be this guy:


[Thor rushes Loki, only to run through the illusion and trap himself in the cage]

Disbelieving everything is an easy way to be right the vast majority of the time.  For every correct idea there’s an almost infinite number of wrong ones, and even those that are true are incomplete (see: physics, Newtonian).  But if everyone disbelieves everything, we will never discover anything new.

I’m not in a position to criticize anyone for being frustrated at people for being wrong.  I lived that life for a long time.  But I try to counter it now by remembering that humans aren’t really capable of distinguishing “laughably wrong” from “correct, and world changing” without investing a lot of energy.  If there aren’t negative externalities and they’re not asking anything from me, their investment in their crackpot idea is something like an insurance policy for me, or a lottery ticket.  Most won’t pay off, but when they do I’ll be glad they were there.

“Minimal negative externalities” and “at no cost to me” are important caveats.  Children need vaccinations, and I don’t want the government paying for medicinal prayer.  But if a functional, taxpaying citizen wants to spend their own money to get their chakras realigned every six months?  Yelling at them seems like a waste of energy.  Hell, they may have a genetic variation that enhances the placebo effect to the point it is medically significant.  The human brain is weird and we don’t even know what all the pieces are, much less how they work.  If someone investigates something, that’s a positive for me, even if all they do is conclusively prove it doesn’t work.


You can believe people are wrong; you don’t have to accept all ideas as equally valid.  But what I would suggest, and what I’m attempting to do myself, is to make the amount of energy you put into your disbelief proportional to the harm the idea causes, not its wrongness.  To have wrong ideas drop out of sight, resurfacing only if they cause problems or turn out to be a winning lottery ticket.  I think that on net this leads to a better world, and in the meantime I’m calmer and less annoyed.

*Which really means shunning anyone with skin discoloration, ancient people not being entirely up on their bacteriology.

Effectiveness Updates

Suicide Hotlines: One of the reasons I estimated crisis chat/suicide hotlines as having as high an impact as I did was visitors’ self-reports.  Since then I saw a discussion of leafleting on FB, where several people said they had high priors for it working because when they leafleted, some people seemed really interested and said they were going to change.  My reading of the quantitative research is that there’s no proof leafleting has any effect, so I should discount my own estimates of crisis chat.  I also completely failed to account for the damage a bad counselor can do.  I found one metastudy on the effect of suicide hotlines, and it appears debatable they’re accomplishing anything, much less have a good cost:benefit ratio.

I may also be pessimistic because I had two people initiate attempts while they were talking to me in a month.

Blood donation: I was hoping to talk to the actual blood bank, but they’re no longer returning my e-mails and this has gone on too long. I stand by the calculations for average effectiveness, but after a discussion with Alexander Berger I am retracting the claim for marginal effectiveness.  New donors can apparently be recruited rather cheaply.  There are concerns that the blood is more likely to carry infection (not all of which can be caught by testing), or that incentives will crowd out more altruistic donations, or that the marginal cost will grow with time, but these have the whiff of valuing moral purity over results.  You could convince me otherwise, but at this point it needs data.  On the other hand, blood donation may have health benefits, especially if you don’t menstruate (thanks to Kate Donovan for pointing this out).

Seattle Effective Altruists’ “Be Excellent To Each Other” Policy

Several months ago I started to write SEA’s anti-harassment policy.  It morphed a little bit as I realized that “no harassment” was a perfectly good goal for cons, but not sufficient for a group that sometimes discusses contentious issues.  We debated names for a while until I finally arrived at the “Be Excellent To Each Other Policy.”  Several other groups have requested a copy of this and it’s hard to share as a FB doc, so I’m reproducing it below.

Looking at it now I’m shocked at how legalistic it is.  I think that’s a combination of my being freshly worried after a QALY discussion and the fact that my mom’s a lawyer.

The “Be Excellent To Each Other” policy

It is the goal of Seattle Effective Altruists that all members feel safe and respected at all times.  This does not preclude disagreement in any way; we welcome differing points of view.  But it does preclude personal attacks, unwanted touching (unsure if a particular touch is wanted?  ASK), and deliberate meanness.  This policy applies to all members, but we are conscious that some people have traveled a more difficult road than others and are more likely to encounter things that make them feel unsafe, and are committed to countering that.
If you are wondering if something you are about to say follows the policy, a good rule of thumb is that it should be at least two of true, helpful, and kind.  This is neither necessary nor sufficient, but it is very close to both.
If you find something offensive (including but not limited to: racist, sexist, homophobic, transphobic, ableist, etc) or it otherwise makes you uncomfortable (including but not limited to harassment, dismissal, unwanted romantic overtures), we encourage you to speak up at the time if you feel comfortable doing so.  We hope that all our members would want to know that they have said something offensive.  If you do not feel comfortable speaking up at the time, please tell a member of the leadership (currently John, Elizabeth, and Stephanie) as soon as possible, in whatever format you feel comfortable (in person, facebook, e-mail, etc).  Depending on the specifics we will address it with the person in question, change a policy, and/or some other thing we haven’t thought of yet.
If someone tells you they find something you (or someone you agree with) said offensive, you do not have to immediately agree with them.  But please understand that it is not an attack on you personally, and quite possibly very scary for them to say.  If you did not mean to be offensive, express that, and listen to what the person has to say.  If you are a bystander, please convey your respect and support for both people without silencing either.
If you did mean to be offensive, leave.  Deliberate personal attacks will not be tolerated.  Repeated non-deliberate offensiveness will be handled on a case by case basis.
SEA is not in a position to police the behavior of our members outside our meetings and online presence (e.g. the facebook message board), and will not intervene in normal interpersonal disagreements.  But if you feel unsafe attending a meeting because of a member’s extra-group behavior (including but not limited to threatening, stalking, harassment, verbal attacks, or assault), please talk to the leadership.  We will not have group members driven out by others’ bad behavior.
This is a living document.  We can’t foresee all possible problems, or remove the necessity for judgement calls.  But we hope that this sets the stage for Seattle Effective Altruists as a respectful community, and encourage you to talk to us if you have concerns or suggestions.

Meeting protocols don’t scale

Over the last three days I covered a lot of really heavy group-organization problems with no good solutions.  Now I’d like to talk about one easy thing that we solved brilliantly.

When Seattle EA first started, all meetings were discussion meetings that operated with discussion norms.  One person came prepared to lead a discussion, which meant both presenting information and steering the conversation.  This worked great with 8 people and was increasingly creaky at 15.  Physically, it became harder to find spaces where everyone could hear and no one had to shout.  Conversationally, a tangent rate that led to charming new discoveries with 8 people led to huge snarls with 15, increasing the brain power required to moderate just as presenting got difficult.  We solved this by splitting moderation and presenting into two different jobs (often two people trade off between them in the same meeting), and by shifting meetings towards presentation and questions, with the moderator steering tangents back to the meeting purpose.  The new meetings are not the same as the old ones- that’s not possible- but they are just as good at what they do.

Questioning Questions

On Monday I mentioned that one person’s polite question is another’s attack.  I want to dig into that a bit more.

Here are a few things questions can be:

  • A genuine request for more information.
  • An implicit criticism (“you’re wrong for not doing it this way”)
  • An implicit compliment (“You are so interesting I want to hear you talk more”)
  • An attempt to signal caring or knowledge of a person.
  • An attempt to signal how smart you are.
  • An attempt to stall, derail, or raise the cost of talking about a topic such that the original speaker is unable to make progress on their original point.*
  • Other things I haven’t thought of.

Subcultures vary in how you signal what kind of question yours is, which can lead to really massive culture clash.  Rationalist and EA culture are on the high end of asking questions, and the low end of explicitly signalling respect for the speaker as you ask (because asking is not considered a sign of disrespect), which can lead to problems for people who don’t know that’s what’s happening or don’t realize that this comes with a corresponding freedom to say “This is not fun and I’d like to talk about something else now.”

So one reason people may get mad at you for “just asking questions”, even ones you sincerely meant as requests for information, is that they mistook it for an attack or derail.  The internet really came through for us by labeling this JAQing off.

The second, subtler reason people might get mad at a question is about interpretive labor.  As I explained yesterday, interpretive labor is the work to understand and adjust to another person- anything from the strain to listen to them against high background noise to knowing “come by tomorrow” doesn’t mean anything unless they give a time.  Everyone is doing this any time they are interacting with another human being, but some people do more of it than others.  In general the lower status, more marginalized, or further from a particular group’s default member you are, the more interpretive labor you have to do.

When you ask a question, even a sincerely meant one, you are asking the person to put effort into explaining themselves to you.  That’s interpretive labor (on both your parts, if you’re attempting to learn more about them). Sometimes being asked to invest that labor, and especially being told you’re wrong if you don’t, is really annoying.  It is especially annoying when you are talking about a way being low status/privilege hurts you, and the person demanding you explain is higher status/privilege.

Interpretive labor is one reason people might like to form subgroups from a larger group even when the larger group is absolutely free of racism/sexism/etc.  That is most often women and minorities splitting off from groups made up mostly of men/white people, or where whiteness/maleness is considered the default even though they’re not that much more prevalent.  But male nannies totally deserve a place where they’re not constantly asked “wait, you’re taking care of the kids?”, just like businesswomen deserve a place where they’re not constantly being asked how they balance work and kids.

What does this mean for Effective Altruism?  EA has very strong norms in favor of asking questions, and a lot of good comes from it, but that subtly pushes away people who have the least energy for questions.  Energy available for questions is not randomly distributed, so that creates blind spots.  I have some guesses as to how to reduce this, but I don’t think there’s a way to get rid of it while keeping the things that make EA good at what it is.  This is now my reason for donating to certain charities outside of EA.

Possible mitigations:

  • Destigmatize opting out of conversations/arguments.  Especially making it clear to new people they can opt out without stigma.
  • Have meetings on specific issues focus on listening rather than debating.  This is what I would have done with the sexual assault meeting, had it happened.
  • As an individual, consider how much work you’re asking someone to do before asking a question, especially if they already seem emotionally taxed.
  • Push back against JAQing off.  If you figure out how to do this perfectly without collateral damage please tell me because I have no idea.
  • Look for other ways to get information than directly questioning someone who is talking.  The internet is full of things.  You might also ask a friend who can put your question in context, rather than a stranger who knows nothing but that you’re questioning them.
  • On that note: I’m officially volunteering to be your Female Friend That Explains Sexism.  If you have a question but don’t want to make the person who introduced the issue explain it to you, you can ask me.

*For example, this boing boing article is about a science editor calling a black female scientist an “urban whore” for refusing to write for him for free.  The comment thread is 35 pages of “but can you prove that was racist?”, so no one ever got to discuss the massive sexism or entitlement or possible actions to take.  I would not have been noticeably happier with the guy if he’d called her a rural whore, even though that’s less racially coded.

Interpretive Labor

In Utopia of Rules David Graeber introduces the concept of interpretive labor.  This will be stunningly useful in discussing how to handle sensitive discussions and yet there’s nothing on the internet about it, so please forgive this digression to explain it.

No two people are alike, everyone interprets a given action a little differently.  Often you need to put work in to understand what people mean.  This can be literal- like straining to understand an accent- or more figurative- like remembering your chronically late friend is from a culture where punctuality is not a virtue, so it’s not a sign they don’t value you.  The work it takes to do that is interpretive labor.  Interpretive labor also includes the reverse: changing what you would naturally do so that the person you are talking to will find it easier to understand.  Tell culture is in large part an attempt to reduce the amount of interpretive labor required.  Here are a few examples of interpretive labor in action:

  • Immigrants are expected to adopt the cultural norms of their new country.
  • Parents spend endless hours guessing whether their infant is crying because it’s hungry, needs a fresh diaper, or just felt like screaming.
  • Newbies to a field need to absorb jargon to keep up, or experts need to slow themselves down and explain it.
  • There’s a whole chain of schools that teach poor, mostly minority students business social norms, by which they mean white-middle-class norms.  There is no school teaching white middle class kids how to use the habitual be properly.
  • Crucial Conversations/Non-Violent Communication/John Gottman’s couples therapy books/How to Talk so Kids Will Listen and Listen so Kids Will Talk are all training for interpretive labor.
  • Graeber himself is talking about interpretive labor in the context of bureaucratic forms, which can simultaneously dump a lot of interpretive labor on the individual (bank forms don’t negotiate) but alleviate the need to be nice to clerks.
  • Comments in code.
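
That last example is worth making concrete.  Here is a hypothetical sketch (my own illustration, not from any particular codebase) of the same function written twice.  In the first version, the reader does the interpretive labor of working out what the code means and why; in the second, the writer has done it up front in comments and a docstring:

```python
# Version 1: the reader does the interpretive labor.
def f(xs):
    return sum(xs) / len(xs) if xs else 0.0


# Version 2: the writer does the interpretive labor.
def mean_or_zero(values):
    """Return the average of `values`, or 0.0 for an empty list.

    Returning 0.0 instead of raising is a deliberate choice: callers
    shouldn't have to guard the empty case themselves.  Writing that
    down here shifts the work of figuring it out from every future
    reader to the one original writer.
    """
    if not values:  # avoid ZeroDivisionError on empty input
        return 0.0
    return sum(values) / len(values)


print(mean_or_zero([1, 2, 3]))  # 2.0
print(mean_or_zero([]))         # 0.0
```

Both versions behave identically; the only difference is who pays the cost of understanding them, and how many times.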

With a few very specific exceptions (accents, certain disabilities), interpretive labor flows up the gradient of status or privilege.  This can get pretty ugly:

  • People who insist their code is self documenting.
  • Girls are told “snapping your bra means he likes you” and then expected to no longer be mad about it.
  • Bullied kids are told to forgive and forget because their bully “is trying to say they’re sorry”, even after repeated cycles of faux-apologies and bullying.
  • This is more tenuous, but I think there’s a good argument that a lot of the emotional abuse on the estranged parent boards comes from parents expecting unfair amounts of interpretive labor from their children, adult or minor.
  • A fundamentalist husband expects his wife to know his emotions and correct for them while he actively hides those emotions from himself.
  • A paraphrased quote from Mothers of Invention: A woman’s house slave has run away, greatly increasing the amount of work she has to do herself.   She writes in her diary “Oh, if only she could think of things from my point of view, she never would let me suffer so.”
  • Poor people are more empathetic than rich people.

I think a large part of the anger around the concept of trigger warnings is related to interpretive labor.  It shifts the burden from traumatized listeners to protect themselves or calm themselves down, to speakers to think ahead if something they are about to say could be upsetting.  That’s work.  Speaking without thinking is much easier.  Like, stupidly easy.  Ditto for Christians who feel they’re being oppressed when they’re asked to consider that not everyone has the same beliefs.  That is way more work than assuming they do.

How does this relate to altruism?  Charity generally flows down the status/privilege gradient, especially from rich to poor.  If the givers don’t consciously change the rules, they will end up demanding large amounts of interpretive labor from their beneficiaries, and do less good in the process.  Arguably this is what’s happening when Heifer International gives people livestock and they immediately sell it- the rich people decided what to give without sufficient input from the poor people they were giving it to, and the poor people had to do extra work to translate it into something they want.  Or this post on Captain Awkward, from a woman trying to teach her tutoring volunteers to not be racist.

EDIT 9/7/18: I think I inappropriately conflated two different situations in this post: situations where interpretive labor closes the whole gap (e.g. understanding an accent), and situations where even after correct interpretation there is still a problem. The problem in the bullying example isn’t just that the victim doesn’t understand how the bully wants to apologize, it’s that the bully is going to keep bullying.