IQ Tests and Poverty

Recently I read Poor Economics, which is excellent at doing what it promises: explaining the experimental data we have for what works and does not work in alleviating third world poverty, with some theorizing as to why.  If that sounds interesting to you, I heartily recommend it.  I don’t have much to add to most of it, but one thing that caught my eye was their section on education and IQ tests.

In Africa and India, adults believe that the return to education is S-shaped (meaning each additional unit of education is more valuable than the one before, at least up to a point).  This leads them to concentrate their efforts on the children who are already doing the best.  This happens at multiple levels: poor parents pick one child to receive an education and put the rest to work much earlier, and teachers put more of their energy into their best students.  Due to a combination of confirmation bias and active maneuvering, the children of rich parents are much more likely to be picked as The Best, regardless of their actual ability.  Not only does this get them more education, but education is viewed as proof one is smart, so they’re double winners.  This leaves some very smart children of poor parents operating well below their potential.

One solution to this is IQ tests.  Infosys, an Indian IT contractor, managed to get excellent workers very cheaply by giving IQ tests to adults and hiring those who scored well, regardless of education.  The authors describe experiments in Africa giving IQ tests to young children so that teachers will invest more in the smart but poor children.  This was one of the original uses of the SATs in America- identifying children who were very bright but didn’t have the money or connections to go to Ivy League feeder high schools.

This is more or less the opposite of how critics view standardized testing in the US.  They believe the tests are culturally biased such that a small sliver of Americans will always do better, and that basing resource distribution on those tests disenfranchises the poor and people outside the white suburban subculture.  What’s going on here?

One possible explanation is that one group or the other is wrong, but both sides actually have pretty good evidence.  The IQ tests are obviously being used for the benefit of very smart poor children in the 3rd world.  And even tests without language can’t get around the fact that being poor takes up brainspace, and so any test will systematically underestimate poor children. So let’s assume both groups are right at least some of the time.

Maybe it’s the difference in educational style that matters?  In the 3rd world, teachers are evaluated based on their best student.  In the US, No Child Left Behind codified the existing emphasis on getting everyone to a minimum benchmark.  Kids evaluated as having lower potential than they actually have may receive less education than they should, but they still get some, and in many districts gifted kids get the least resources of any point on the bell curve.

Or it could be because the tests are trying to do very different things.  The African and Indian tests are trying to pick out the extremely intelligent who would otherwise be overlooked.  The modern US tests are trying to evaluate every single student and track them accordingly.  When the SATs were invented they had a job much like the African tests; as more and more people go to college its job is increasingly to evaluate the middle of the curve.  It may be that these are fundamentally different problems.

This has to say something interesting about the meaning of intelligence or usefulness of education, but I’m not sure what.

Status Through Disbelief

Reading The Remedy, or really anything about the time after formalized western medicine but before the germ theory of disease, is an exercise in terror or frustration.  How could anyone think attending a childbirth with autopsy gunk on your hands was a good idea?  Or leeches.  Who looked at those and said “I’ll bet those will make people healthier”?

My first reaction reading The Colony, about a Hawaiian leper colony founded shortly after the germ theory became entrenched, was “oh no doctors, you overapplied the lesson.”  Leprosy has an epidemiology a lot like tuberculosis: long periods between infection and symptoms, and an ease of spreading that means everyone is constantly exposed to it.  This makes it look like an inborn condition, not a contagion.  Leprosy and TB are actually pretty closely related too.  I assumed that doctors looked at their failure with TB and overcorrected.  The quarantine didn’t work because only a small fraction of people are susceptible, and (it’s implied although never stated outright) they will be exposed to it whether symptomatic patients are quarantined or not.

Then I remembered that shunning lepers* predates germ theory by a couple of thousand years.   Ancient and medieval people were completely capable of identifying disease as contagious and instituting a separation.  So why didn’t industrial-age doctors?

Then I remembered that while the peasantry considered it obvious that disease was contagious and should be shunned, they considered it equally obvious that leprosy was punishment from God for sin and the black plague could be avoided by killing Satan’s minions, the cats.  Nobody talks about all the things everyone knew that doctors correctly disbelieved in.

Without a lot of proof, I strongly suspect that doctors signaled intellectual rigor and membership in the medical class by disbelieving things the peasantry believed.  Believing things the peasantry does believe doesn’t signal either of those things even if the belief is correct.  No one gets credit for believing eating food is good and eating Belladonna is bad.  If you’re not very careful in that environment, it’s easy for peasants’ belief in something to become evidence against it.

This is similar to the process of the toxoplasma of rage, in which people signal membership in an ingroup by loudly believing its most dubious claims.  I also highly suspect it’s what’s going on with dietary constraints and toxins.  It is obviously true that what you eat matters, that some things you put in your body will damage your cells, that getting rid of them is good, and that there are things you can take to get rid of them.  It’s called heavy metal poisoning and chelation.  Or if you’re Huey the dopamine dog, chocolate and activated charcoal.  But dietary constraints and belief that specific things were bad for you got associated with special snowflakeness, so you can signal intellectual rigor by dismissing them.  This despite the fact that nutrition obviously makes a difference in your health, and that humans vary across many dimensions, with no reason to assume they wouldn’t vary in digestion and nutritional needs.  Likewise, things we put in our mouths obviously have the capacity to hurt us, and there’s no reason to assume we have an exhaustive list of those, or that the list is identical across all humans.

In D&D terms: people are advertising their will save bonus by how credible an idea they can disbelieve.  No one wants to be this guy:


[Thor rushes Loki, only to run through the illusion and trap himself in the cage]

Disbelieving everything is an easy way to be right the vast majority of the time.  For every correct idea there is an almost infinite number of wrong ones, and even the ideas that are true are incomplete (see: physics, Newtonian).  But if everyone disbelieves everything, we will never discover anything new.

I’m not in a position to criticize anyone for being frustrated at people for being wrong.  I lived that life for a long time.   But I try to counter it now by remembering that humans aren’t really capable of distinguishing “laughably wrong” from “correct, and world changing” without investing a lot of energy.  If there aren’t negative externalities and they’re not asking anything from me, their investment  in their crackpot idea is something like an insurance policy for me, or a lottery ticket.  Most won’t pay off, but when they do I’ll be glad they were there.

“Minimal negative externalities” and “at no cost to me” are important caveats.  Children need vaccinations, and I don’t want the government paying for medicinal prayer.  But if a functional, taxpaying citizen wants to spend their own money to get their chakras realigned every six months?  Yelling at them seems like a waste of energy.  Hell, they may have a genetic variation that enhances the placebo effect to the point it is medically significant.  The human brain is weird and we don’t even know what all the pieces are, much less how they work.  If someone investigates something, that’s a positive for me, even if all they do is conclusively prove it doesn’t work.


You can believe people are wrong; you don’t have to accept all ideas as equally valid.  But what I would suggest, and what I’m attempting to do myself, is to make the amount of energy you put into your disbelief proportional to the harm the idea causes, not its wrongness.  Let wrong ideas drop out of sight, resurfacing only if they cause problems or turn out to be a winning lottery ticket.  I think that on net this leads to a better world, and in the meantime I’m calmer and less annoyed.

*Which really means shunning anyone with skin discoloration, ancient people not being entirely up on their bacteriology.

Effectiveness Updates

Suicide Hotlines: One of the reasons I estimated crisis chat/suicide hotlines as having as high an impact as I did was visitors’ self-reports.  Since then I saw a discussion of leafleting on FB, where several people said they had high priors for it working because when they leafleted, some people seemed really interested and said they were going to change.  My reading of the quantitative research is that there’s no proof leafleting has any effect, so I should discount my own estimates of crisis chat accordingly.  I also completely failed to account for the damage a bad counselor can do.  I found one metastudy on the effect of suicide hotlines, and it appears debatable whether they’re accomplishing anything at all, much less doing so at a good cost:benefit ratio.

I may also be pessimistic because in one month I had two people initiate attempts while they were talking to me.

Blood donation: I was hoping to talk to the actual blood bank, but they’re no longer returning my e-mails and this has gone on too long. I stand by the calculations for average effectiveness, but after a discussion with Alexander Berger I am retracting the claim for marginal effectiveness.  New donors can apparently be recruited rather cheaply.  There are concerns that the blood is more likely to carry infection (not all of which can be caught by testing), or that incentives will crowd out more altruistic donations, or that the marginal cost will grow with time, but these have the whiff of valuing moral purity over results.  You could convince me otherwise, but at this point it needs data.  On the other hand, blood donation may have health benefits, especially if you don’t menstruate (thanks to Kate Donovan for pointing this out).

Seattle Effective Altruist’s “Be Excellent To Each Other” Policy

Several months ago I started to write SEA’s anti-harassment policy.  It morphed a little bit as I realized that “no harassment” was a perfectly good goal for cons, but not sufficient for a group that sometimes discusses contentious issues.  We debated names for a while until  I finally arrived at the “Be Excellent To Each Other Policy.”  Several other groups have requested a copy of this and it’s hard to share as a FB doc, so I’m reproducing it below.

Looking at it now I’m shocked at how legalistic it is; I think that’s a combination of my being freshly worried from a QALY discussion and my mom being a lawyer.

The “Be Excellent To Each Other” policy

It is the goal of Seattle Effective Altruists that all members feel safe and respected at all times.  This does not preclude disagreement in any way; we welcome differing points of view.  But it does preclude personal attacks, unwanted touching (unsure if a particular touch is wanted?  ASK), and deliberate meanness.  This policy applies to all members, but we are conscious that some people have traveled a more difficult road than others and are more likely to encounter things that make them feel unsafe, and we are committed to countering that.
If you are wondering if something you are about to say follows the policy, a good rule of thumb is that it should be at least two of true, helpful, and kind.  This is neither necessary nor sufficient, but it is very close to both.
If you find something offensive (including but not limited to: racist, sexist, homophobic, transphobic, ableist, etc) or it otherwise makes you uncomfortable (including but not limited to harassment, dismissal, unwanted romantic overtures), we encourage you to speak up at the time if you feel comfortable doing so.  We hope that all our members would want to know that they have said something offensive.  If you do not feel comfortable speaking up at the time, please tell a member of the leadership (currently John, Elizabeth, and Stephanie) as soon as possible, in whatever format you feel comfortable (in person, facebook, e-mail, etc).  Depending on the specifics we will address it with the person in question, change a policy, and/or some other thing we haven’t thought of yet.
If someone tells you they find something you (or someone you agree with) said offensive, you do not have to immediately agree with them.  But please understand that it is not an attack on you personally, and that it was quite possibly very scary for them to say.  If you did not mean to be offensive, express that, and listen to what the person has to say.  If you are a bystander, please convey your respect and support for both people without silencing either.
If you did mean to be offensive, leave.  Deliberate personal attacks will not be tolerated.  Repeated non-deliberate offensiveness will be handled on a case by case basis.
SEA is not in a position to police the behavior of our members outside our meetings and online presence (e.g. the facebook message board), and will not intervene in normal interpersonal disagreements.  But if you feel unsafe attending a meeting because of a member’s extra-group behavior (including but not limited to threatening, stalking, harassment, verbal attacks, or assault), please talk to the leadership.  We will not have group members driven out by others’ bad behavior.
This is a living document.  We can’t foresee all possible problems, or remove the necessity for judgement calls.  But we hope that this sets the stage for Seattle Effective Altruists as a respectful community, and we encourage you to talk to us if you have concerns or suggestions.

Meeting protocols don’t scale

Over the last three days I covered a lot of really heavy problems in group organization with no good solutions.  Now I’d like to talk about one easy thing that we solved brilliantly.

When Seattle EA first started, all meetings were discussion meetings that operated with discussion norms.  One person came prepared to lead a discussion, which meant both presenting information and steering the conversation.  This worked great with 8 people and was increasingly creaky at 15.  Physically, it became harder to find spaces where everyone could hear and no one had to shout.  Conversationally, a tangent rate that led to charming new discoveries with 8 people led to huge snarls with 15, increasing the brain power required to moderate just as presenting got difficult.  We solved this by splitting moderation and presenting into two different jobs (often two people trade off between them in the same meeting), and shifting meetings toward presentation and questions, with the moderator steering tangents back to the meeting’s purpose.  These are not the same as the old meetings (that’s not possible), but they are just as good at what they do.

Questioning Questions

On Monday I mentioned that one person’s polite question is another’s attack.  I want to dig into that a bit more.

Here are a few things questions can be:

  • A genuine request for more information.
  • An implicit criticism (“you’re wrong for not doing it this way”)
  • An implicit compliment (“You are so interesting I want to hear you talk more”)
  • An attempt to signal caring or knowledge of a person.
  • An attempt to signal how smart you are.
  • An attempt to stall, derail, or raise the cost of talking about a topic such that the original speaker is unable to make progress on their original point.*
  • Other things I haven’t thought of.

Subcultures vary in how you signal what kind of question yours is, which can lead to really massive culture clash.  Rationalist and EA culture are on the high end of asking questions, and the low end of explicitly signalling respect for the speaker as you ask (because asking is not considered a sign of disrespect), which can lead to problems for people who don’t know that’s what’s happening or don’t realize that this comes with a corresponding freedom to say “This is not fun and I’d like to talk about something else now.”

So one reason people may get mad at you for “just asking questions”, even ones you sincerely meant as requests for information, is that they mistook it for an attack or derail.  The internet really came through for us by labeling this JAQing off.

The second, subtler reason people might get mad at a question is about interpretive labor.  As I explained yesterday, interpretive labor is the work to understand and adjust to another person- anything from the strain of listening to them against high background noise to knowing “come by tomorrow” doesn’t mean anything unless they give a time.  Everyone is doing this any time they are interacting with another human being, but some people do more of it than others.  In general, the lower status, more marginalized, or further from a particular group’s default member you are, the more interpretive labor you have to do.

When you ask a question, even a sincerely meant one, you are asking the person to put effort into explaining themselves to you.  That’s interpretive labor (on both your parts, if you’re attempting to learn more about them). Sometimes being asked to invest that labor, and especially being told you’re wrong if you don’t, is really annoying.  It is especially annoying when you are talking about a way being low status/privilege hurts you, and the person demanding you explain is higher status/privilege.

Interpretive labor is one reason people might like to form subgroups from a larger group even when the larger group is absolutely free of racism/sexism/etc.  Most often that is women and minorities splitting off from groups made up mostly of men/white people, or where whiteness/maleness is considered the default even though they’re not that much more prevalent.  But male nannies totally deserve a place where they’re not constantly asked “wait, you’re taking care of the kids?”, just like businesswomen deserve a place where they’re not constantly being asked how they balance work and kids.

What does this mean for Effective Altruism?  EA has very strong norms in favor of asking questions, and a lot of good comes from it, but that subtly pushes away the people who have the least energy for questions.  Energy available for questions is not randomly distributed, so that creates blind spots.  I have some guesses as to how to reduce this, but I don’t think there’s a way to get rid of it while keeping the things that make EA good at what it is.  This is now my reason for donating to certain charities outside of EA.

Possible mitigations:

  • Destigmatize opting out of conversations/arguments.  Especially making it clear to new people they can opt out without stigma.
  • Have meetings on specific issues focus on listening rather than debating.  This is what I would have done with the sexual assault meeting, had it happened.
  • As an individual, consider how much work you’re asking someone to do before asking a question, especially if they already seem emotionally taxed.
  • Push back against JAQing off.  If you figure out how to do this perfectly without collateral damage please tell me because I have no idea.
  • Look for other ways to get information than directly questioning someone who is talking.  The internet is full of things.  You might also ask a friend who can put your question in context, rather than a stranger who knows nothing but that you’re questioning them.
  • On that note: I’m officially volunteering to be your Female Friend That Explains Sexism.  If you have a question but don’t want to make the person who introduced the issue explain it to you, you can ask me.

*For example, this boing boing article is about a science editor calling a black female scientist an “urban whore” for refusing to write for him for free.  The comment thread is 35 pages of “but can you prove that was racist?”, so no one ever got to discuss the massive sexism or entitlement or possible actions to take.  I would not have been noticeably happier with the guy if he’d called her a rural whore, even though that’s less racially coded.

Interpretive Labor

In Utopia of Rules David Graeber introduces the concept of interpretive labor.  This will be stunningly useful in discussing how to handle sensitive discussions and yet there’s nothing on the internet about it, so please forgive this digression to explain it.

No two people are alike, everyone interprets a given action a little differently.  Often you need to put work in to understand what people mean.  This can be literal- like straining to understand an accent- or more figurative- like remembering your chronically late friend is from a culture where punctuality is not a virtue, so it’s not a sign they don’t value you.  The work it takes to do that is interpretive labor.  Interpretive labor also includes the reverse: changing what you would naturally do so that the person you are talking to will find it easier to understand.  Tell culture is in large part an attempt to reduce the amount of interpretive labor required.  Here are a few examples of interpretive labor in action:

  • Immigrants are expected to adopt the cultural norms of their new country.
  • Parents spend endless hours guessing whether their infant is crying because it’s hungry, needs a fresh diaper, or just felt like screaming.
  • Newbies to a field need to absorb jargon to keep up, or experts need to slow themselves down and explain it.
  • There’s a whole chain of schools that teach poor, mostly minority students business social norms, by which they mean white-middle-class norms.  There is no school teaching white middle class kids how to use the habitual be properly.
  • Crucial Conversations/Non-Violent Communication/John Gottman’s couples therapy books/How to Talk so Kids Will Listen and Listen so Kids Will Talk are all training for interpretive labor.
  • Graeber himself is talking about interpretive labor in the context of bureaucratic forms, which can simultaneously dump a lot of interpretive labor on the individual (bank forms don’t negotiate) but alleviate the need to be nice to clerks.
  • Comments in code (see the sketch below).
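
Since code comments are the most concrete example on this list, here is a minimal, hypothetical sketch of the idea (in Python; the function names and the tax rule are invented for illustration).  Both functions do the same thing; the difference is who pays the cost of understanding, and how many times.

```python
# Hypothetical illustration of interpretive labor in code.
# The first function leaves all the work of understanding to every
# future reader; the second does that work once, up front.

def adj(xs):
    return [x * 1.08 for x in xs if x > 0]

def add_sales_tax(prices):
    """Return positive prices with 8% sales tax added.

    Zero and negative entries are refunds/placeholders and are dropped
    (a hypothetical business rule, stated here so readers don't have to
    reverse-engineer it from the filter).
    """
    SALES_TAX = 0.08
    return [price * (1 + SALES_TAX) for price in prices if price > 0]

if __name__ == "__main__":
    prices = [10.0, -2.0, 5.0]
    print(adj(prices))            # the reader must guess what this did
    print(add_sales_tax(prices))  # same output, no guessing required
```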

With a few very specific exceptions (accents, certain disabilities), interpretive labor flows up the gradient of status or privilege.  This can get pretty ugly:

  • People who insist their code is self documenting.
  • Girls are told “snapping your bra means he likes you” and then expected to no longer be mad about it.
  • Bullied kids are told to forgive and forget because their bully “is trying to say they’re sorry”, even after repeated cycles of faux-apologies and bullying.
  • This is more tenuous, but I think there’s a good argument that a lot of the emotional abuse on the estranged parent boards comes from parents expecting unfair amounts of interpretive labor from their children, adult or minor.
  • A fundamentalist husband expects his wife to know his emotions and correct for them while he actively hides those emotions from himself.
  • A paraphrased quote from Mothers of Invention: A woman’s house slave has run away, greatly increasing the amount of work she has to do herself.   She writes in her diary “Oh, if only she could think of things from my point of view, she never would let me suffer so.”
  • Poor people are more empathetic than rich people.

I think a large part of the anger around the concept of trigger warnings is related to interpretive labor.  It shifts the burden from traumatized listeners to protect themselves or calm themselves down, to speakers to think ahead if something they are about to say could be upsetting.  That’s work.  Speaking without thinking is much easier.  Like, stupidly easy.  Ditto for Christians who feel they’re being oppressed when they’re asked to consider that not everyone has the same beliefs.  That is way more work than assuming they do.

How does this relate to altruism?  Charity generally flows down the status/privilege gradient, especially from rich to poor.  If the givers don’t consciously change the rules, they will end up demanding large amounts of interpretive labor from their beneficiaries, and do less good in the process.  Arguably this is what’s happening when Heifer International gives people livestock and they immediately sell it- the rich people decided what to give without sufficient input from the poor people they were giving it to, and the poor people had to do extra work to translate it into something they want.  Or this post on Captain Awkward, from a woman trying to teach her tutoring volunteers to not be racist.

EDIT 9/7/18: I think I inappropriately conflated two different situations in this post: situations where interpretive labor closes the whole gap (e.g. understanding an accent), and situations where even after correct interpretation there is still a problem.  The problem in the bullying example isn’t just that the victim doesn’t understand how the bully wants to apologize, it’s that the bully is going to keep bullying.

Inclusivity is a Trade Off

[Content Note: talking about talking about sexual assault][not a typo]

A few years ago I was an extremely active member of a martial arts studio.  Martial arts has risks, and this school chose to take more than the bare minimum- sparring involved head shots and take downs.  I was willing to accept the risks of this with most people at the dojo, because I knew they were acting with my safety in mind and the risks were worth the benefits, but there was one guy, Snotlout, who did not pass that test.

An artist's rendering of Snotlout
Snotlout is a pseudonym

Where most people aimed to hit near your face, so a mistake meant you got a tap, Snotlout aimed to hit you, and the failure mode was hitting your face really hard.  He took blind kicks at full power and blamed you for not getting out of the way.  He once flipped a child flat on their back, and his first concern was letting us know how lightly he had touched them.

The school wouldn’t kick him out, wouldn’t even really place restrictions on him.  When I complained to the de facto leadership it was always redirected to what I could do to take care of myself, but when I did so (e.g. insisting on slow motion sparring), I got push back from other de facto leadership.  No one would place the necessary level of restrictions on him, apparently out of fear those restrictions might drive him to leave.  If I really pushed, the people who would talk to me about it would say that they or someone they loved was that dangerous when they came in, and they wanted to give that guy the same chance.  Which is a beautiful thought, except that I know at least three people for whom he was a contributing factor in leaving the school.  Where was the inclusivity for us?*

It's not only morally wrong that Astrid has to keep fighting off Snotlout, it's bad for the tribe's survival.  Between the two of them, she's clearly the one to bet on.
Where would Berk be if Snotlout had driven out Astrid?

Last year Seattle Effective Altruists had a member who brought up sexual assault a lot, in ways that made it clear it was personally relevant to her.  This made me really uncomfortable, but I was aware of how often rape victims are silenced and how damaging that is, so I didn’t say anything.  What occurred to me much later was that statistically there was at least one other victim of sexual assault in the room, probably more, and they might also find it uncomfortable.**  The choice wasn’t “do I silence this rape victim or not?”, it was “who do I make/let be uncomfortable?”, even if I didn’t know who the other person was.  Obviously a trauma victim discussing work with personal meaning to them is in no way equivalent to a jackass endangering people’s safety in order to prove how awesome he is, but that is part of my point: even actions with very good motivations have costs.

Back to martial arts.  Notice that I said de facto leadership?  The problem wasn’t that someone calculated Snotlout vs. [me, the two people I know about, and unknown number of others he drove away] and chose him.  It was that no one did the calculation and no one was responsible for making sure it was done.  There wasn’t even anyone I could negotiate with to ensure my personal safety; a plan I worked out with one senior student would be publicly ridiculed by another.*** No one had ownership of student safety so there was no one to turn a pile of individual complaints into “wow, that guy is dangerous and we should do something”.  To this day I’m not sure why “we have to be welcoming” meant “you have to let him hit you in the face over and over”, and everyone I asked described it as a decision made by someone else.  I don’t even think this worked out particularly well for that guy, because while no one was willing to restrict him, a lot of people would have been happier if he just left, and it showed up in petty things like him never quite getting added to the parking mailing list.  Eventually, after driving out who knows how many people, he screwed up so badly the school had to put severe restrictions on him.  He never came back.

In the real world bullies rarely improve their behavior without seeing its consequences.
An unanticipated struggle to find parking rarely inspires the kind of self reflection that leads to redemption or dragon riding.

Back to EA.  My eventual solution to the “how to talk about rape” problem was to simultaneously ask the woman to tone down the sexual assault talk in meetings where it wasn’t relevant and host her own meeting on the topic.  Unfortunately she left for other reasons before I could implement this.  But if I’d had the chance, it only would have worked because I was empowered as an organizer to do both of those things.  If I’d approached her as a peer, the request to limit talk about sexual assault would have had less heft, and a dedicated meeting would have been a suggestion, not an offer.  But it probably wouldn’t even have gone that far, because it wouldn’t have felt like my place to do it.  That would have left the hypothetical assault victim who didn’t want to constantly hear about rape to defend themselves by approaching her directly, possibly having to disclose their own history to see results, which they shouldn’t have to do.  In order to be truly welcoming to them****, someone had to proactively make the space safe.

There have been other, less fraught trade offs.  One person’s friendly debate is another’s attack, and a third person’s derail.   I think one of my major contributions to the group has been not the decisions we made on these (although those were awesome), but that we made decisions at all, and worked out how to implement them.

If you are an organizer, for EA or something else, these are my recommendations:

  • Have a small, identifiable group with whom the buck ultimately stops.  Individual meetings in Seattle are run mostly on a who-is-excited-about-this system, but there are three people explicitly in charge of the administrative stuff, including disputes.
  • Make explicit decisions about your norms, share them, and enforce them.
  • Explicit is not the same as fixed.  I’m extremely excited about our plans to experiment with different norms at specific meetings, even if some of the norms would make me miserable as a participant.  Not every meeting needs to be for every person.
  • There’s a fine line between overpreparing and sticking your head in the sand until something blows up.  Some of our best decisions are “let X keep going unless Y happens, and then figure out a plan.”

*See also: Geek Social Fallacy #1.

**Much later still I would learn I was right.

***10 minutes before, the same guy had started quoting The Gift of Fear on listening to your instincts, and specifically on leaving situations where you felt afraid.  I walked out.

****Or people who were uncomfortable talking about sexual assault for other reasons, or people who just wanted to talk about the planned topic.

Links 5/22/15

Effective Social Justice Interventions: this is a great example of using EA as a technique to address areas the EA-as-philosophy sphere hasn’t touched.

The Last Day of Her Life:  a psychology researcher’s decision to and process of ending her life as her Alzheimer’s progresses.   Fun fact: state-sanctioned euthanasia requires you be mentally competent and have less than six months to live.  Alzheimer’s patients are mentally incompetent years before they die of the disease.

The (crime-related) Broken Window Theory states that low level visible crime (graffiti, litter) leads to more crime, of all varieties.  It is most famous for being Rudy Giuliani’s method for reducing crime in New York City.  My understanding was that it had been debunked, and that NYC’s drop in crime was caused mostly by demographic trends.  But some researchers did some fairly rigorous tests of it and it held up.  Caveat: they tested visible crime’s effect on other crimes of similar magnitude, not escalations like theft.

This week’s “beautiful theory killed by an ugly gang of facts” award goes to the meditation chapter of The Willpower Instinct, which promises fantastic benefits from the very beginning.  In fact it says that meditating badly is in some ways better for you than meditating well, because it is the practice of refocusing yourself after you become distracted that is so beneficial.  Unfortunately none of the studies cited shows exactly that, and what they do show is a small effect on a noisy variable, in a small sample.

[I don’t want to be too hard on The Willpower Instinct.  It encourages you to do your own experiments and stick with what works, I found some of it helpful, and it’s good for getting yourself into a willpower mindset.  It’s just scientifically weaker than it would have you believe.]

Sine Rider: if xkcd was a video game.

Map of Open Spaces in Effective Altruism

Effective altruism is really extraordinarily good at telling people where to give money, and pretty okay at telling people how to create less animal suffering.  Guidance on how to do anything else is substantially more opaque, because EA discourages a lot of traditional volunteering (at least as a way of improving the world.  As a hobby most people are still okay with it).  That’s a shame, because there’s a lot left to do.

There are an enormous number of unsolved problems in effective altruism, and philanthropy in general.  And there’s actually a fair amount of support for you, if you want to research or attempt a solution.  But the support is not very discoverable.   A lot of the information spreads via social osmosis, and if you’re more than two degrees out from one of the big EA hubs the process is slow and leaky.   It’s not always obvious from the outside how approachable many people and organizations in EA are, or what problems are waiting for solutions.  But once you have that knowledge, it’s really hard to remember what it was like when you didn’t, which makes it hard to figure out what to do about the problem.

This is my attempt to address that.  Below I have listed info on the major EA organizations, with emphasis on what problems they are interested in, how to learn more about them, and what kind of contact they encourage.   I would be surprised if this was enough to enable someone to go from 0-to-implementation on its own, but my hope is that it will provide some scaffolding that speeds up the process of learning the rest of it.

Institutions: did I get something about you wrong?  Miss you entirely?  Please let me know and I will update; I want this to be as accurate as possible.

General/Misc

  • GiveWell’s most public work focuses on identifying the best charities working on third world poverty, but they have spun off a subgroup called the Open Philanthropy Project.  OPP investigates not just specific interventions but causes, trying to determine which areas have the crucial combination of importance, tractability, and neglectedness.  The good news is that you can read about it in detail on their blog.  The bad news is that you can read about a great many things in a great deal of detail on their blog, so if you’re looking for a particular thing it can be hard to find.  To that end, here are some links to get you started.
  • Centre for Effective Altruism has spun off or collaborated with a number of important projects in EA: Giving What We Can, 80,000 hours, Animal Charity Evaluators, Life You Can Save, Global Philanthropy Project, EA Ventures…  all of which I have included in this document.   Their current plan appears to be that, plus some outreach.
  • EA Ventures is a new agency dedicated to matching up people with EA projects, people with money, and people with relevant skills.  EAV’s official areas of interest are the effective altruism trinity: global poverty, animal suffering, and existential risk, and a lot of the things on the list of community suggestions they solicited fall into those three + meta-EA work.   And yet, the list is almost 10 pages long.  Even within those spaces (one of which the rest of the world already looks at pretty closely), there are a lot of ideas.  I’m also heartened to see “mental welfare” as a category, even if it’s empty.  Next time, EA Ventures, next time.
  • .impact started as a combination of “The Volunteer Committee to Spread and Improve Effective Altruism” and a platform to publicize and network for your own projects.  They have just started branching out into more direct support, with a slush fund to help people put the finishing touches on their projects.  If you want to get involved formally, there are biweekly meetings with agendas and people leaving with action items and everything, but you’re also encouraged to just make a wiki page for your project and announce it on the facebook group.  Presumably because you’re looking for comments or help, but it doesn’t obligate you to the formal group in any way.  They maintain a list of projects, most of which are related to movement building, but Tom Ash has confirmed that’s a matter of circumstance, not policy.  This seems like a great resource if you want to invest some time, but not necessarily a whole job’s worth of time, in EA, or if you’re looking for learning projects.  One thing I particularly appreciate about the list is that it calls out useful practice problems; if you’re going to learn Ruby or Python, it might as well be while patching up effective-altruism.com.
  • Technically out of scope but potentially still useful: If you have an idea for an EA Project and need some advice or particular expertise, check out skillshare, which is a more direct skill matching website.
  • Also just slightly out of scope: 80,000 Hours, an organization dedicated to helping people choose the most effective career for them.  The input they most solicit is career decision case studies, but they also take new ideas through their forum.
  • Animal Charity Evaluators is a good example of a narrow cause focus still having a lot of room for innovation.  Their planned research includes the very specific (“how does viewing a video on animals affect diet in the medium term”), but also the very broad (“How to measure animal welfare?”, which is itself at least five different questions, and “How have other social justice movements done things?  How would those things work for us?”, which is several graduate theses all on its own).  They actively encourage people with ideas in these spheres to contact them.
  • Your local EA org. You can find these on EAhub, or Meetup, or Facebook.  I can only speak authoritatively about Seattle’s, where most of the work is “coordinate and talk at meetings”, and I think we do a great job of letting people do as much of that as they want without pressuring them to do more.  Also, it is a great place to discuss economics with people who aren’t assholes.

Existential Risk

I struggled a bit writing this section.  The whole point of this exercise is helping people figure out where their idea falls in EA and who might want to hear them.  Existential risk is pretty much the definition of unknown unknowns, and while I understand the broad strokes, I’m not at all confident I can place boundaries without cutting off interesting and relevant work.  The best I can do is tell you where to look.

  • Machine Intelligence Research Institute: What happens when we create self-improving artificial intelligence?  How do we make sure a small difference in values doesn’t lead to something undesirable?  It’s a dense subject, but MIRI makes it as accessible as it can, with a full page devoted to how to learn more (in great detail) and how to work with them.  I know at least one person has followed their idea for a journal club, because they did it in Seattle.
  • Future of Humanity Institute (Oxford).  FHI is an academic institute dedicated to investigating big picture questions.  You know how humans have a tendency to just apply new technology and work out the consequences by seeing them?  FHI is attempting to get ahead of that.  They’re concerned about malicious AI, but also about more mundane potential problems like technology-induced systemic unemployment and Big Brother.  You can read about their research here.  They appear to be operating on academic-style collaboration rules, which means you have to talk to the individual person you want to work with.
  • Global Priorities Project is a collaboration between FHI and CEA, and its goal is to bring their academic work to governments and industry.   It focuses specifically on prioritization of competing proposals, including research into how to prioritize when there are so very many unknowns.  You can read about their research here.  They have an active interest in collaboration or helping people (especially government and industry) use their results.
  • Center for Study of Existential Risk (Cambridge).  Did you expect Cambridge to let Oxford have something they didn’t?  CSER is attempting to develop a general structural framework for evaluating and mitigating extreme risk regardless of field.  And malicious AI.  Per Sean Holden’s comments on EA forum, they are not currently looking for volunteers but if you want to be considered in the future you can e-mail admin@cser.org with your availability and the specifics of the skills you are offering, or keep your eyes open for requests on EA Facebook group/LessWrong/EA Forum itself.
  • Future of Life Institute:  Organizations that focus on existential risk tend to be affiliated with big institutions, which tells you something about the problem space.  In contrast, FLI is an all-volunteer org and has open applications for both volunteers and grants on their website (although the deadline has passed for the first round of grants).  Their immediate focus appears to be malicious AI as well, but they’re also interested in bio-, nano-, and nuclear tech, environmental catastrophes, and ???.

Fundraising

This is a super important category that I am so, so glad other people are handling, because asking people for money is not in my core skill set.  Left to my own devices I would do research (you know, like this) and then hope money appears.  In my defense, that’s working very well for GiveWell.  But I’m really glad there are other people working on more direct, asking-for-money type projects.

  • Giving What We Can is dedicated to upping and maintaining the level of commitment of people already into effective altruism, primarily asking people to pledge to give 10% of their income to charity forever.  The GWWC pledge originally specified 3rd world poverty charities only, but now includes whatever the signer believes will be most effective.  If you want to get involved with GWWC, the obvious thing to do is sign the pledge.  If you want to do more than that you can found or contribute to your local GWWC chapter.  They’re also open to volunteers in other areas.  They offer charity recommendations, which are based primarily but not exclusively on GiveWell’s work.
  • Life You Can Save: takes a different tack, offering a pledge for a much smaller amount (1% of income, or personal best) but marketing it to a wider variety of people.  This may be working, since 17x as many people have taken the LYCS pledge as the GWWC pledge (although we don’t know which moves more money without knowing their incomes; see the sketch after this list).  Their primary request is for people to run Giving Games, but they will also at least talk to people who want to volunteer in other ways.
  • Charity Science is an effective altruist non-profit that focuses on raising money for GiveWell-recommended charities by exploring any fundraising methods. There are several opportunities for individuals to raise money with them, via things like birthday fundraisers or the $2.50/day challenge.  This is where I would send people that want to invest their time in EA on a well defined path.  They also research the effectiveness of various methods of fundraising, and solicit volunteer assistance, which is a much more INTJ-friendly project.   You can get more info via their newsletter.  If you have an idea for a new kind of fundraiser these are the people I would approach.  Specifically I would talk to Tom Ash, because he is exceptionally open to talking to people.
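
As a back-of-envelope illustration of why pledge counts alone don’t settle the “which moves more money” question: a minimal sketch using the 17x figure and the nominal 10%/1% rates from above, with hypothetical (invented) values for income and pledger counts, and assuming equal average incomes and full compliance across both pools.

```python
# Hypothetical back-of-envelope: which pledge moves more money?
# AVG_INCOME and GWWC_PLEDGERS are invented for illustration; the 17x
# ratio and the 10%/1% rates come from the text above.
AVG_INCOME = 50_000                  # assumed equal for both pools
GWWC_PLEDGERS = 1_000                # hypothetical base count
LYCS_PLEDGERS = 17 * GWWC_PLEDGERS   # "17x as many people"

gwwc_moved = GWWC_PLEDGERS * AVG_INCOME * 0.10  # 10% pledge
lycs_moved = LYCS_PLEDGERS * AVG_INCOME * 0.01  # 1% pledge

print(f"GWWC moves ${gwwc_moved:,.0f}, LYCS moves ${lycs_moved:,.0f}")
# Under these assumptions LYCS moves 1.7x as much money; different
# income or compliance assumptions could easily flip the ranking.
```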

Thanks to John Salvatier and Jai Dhyani for comments on earlier drafts of this post, Ben Hoffman for the original idea, and Tom Ash for answering questions on .impact and Charity Science.