Map of Open Spaces in Effective Altruism

Effective altruism is extraordinarily good at telling people where to give money, and pretty okay at telling people how to create less animal suffering.  Guidance on how to do anything else is substantially more opaque, because EA discourages a lot of traditional volunteering (at least as a way of improving the world; as a hobby, most people are still okay with it).  That’s a shame, because there’s a lot left to do.

There are an enormous number of unsolved problems in effective altruism, and in philanthropy in general.  And there’s actually a fair amount of support available if you want to research or attempt a solution.  But that support is not very discoverable.  A lot of the information spreads via social osmosis, and if you’re more than two degrees out from one of the big EA hubs, the process is slow and leaky.  It’s not always obvious from the outside how approachable many people and organizations in EA are, or which problems are waiting for solutions.  But once you have that knowledge, it’s really hard to remember what it was like when you didn’t, which makes it hard to figure out what to do about the problem.

This is my attempt to address that.  Below I have listed info on the major EA organizations, with emphasis on what problems they are interested in, how to learn more about them, and what kind of contact they encourage.  I would be surprised if this were enough to take someone from zero to implementation on its own, but my hope is that it provides some scaffolding that speeds up the process of learning the rest.

Institutions: did I get something about you wrong?  Miss you entirely?  Please let me know and I will update; I want this to be as accurate as possible.

Meta

  • GiveWell’s most public work focuses on identifying the best charities working on third world poverty, but they have spun off a subgroup called the Open Philanthropy Project.  OPP investigates not just specific interventions but entire causes, trying to determine which areas have the crucial combination of importance, tractability, and neglectedness.  The good news is that you can read about this in detail on their blog.  The bad news is that you can read about a great many things in a great deal of detail on their blog, so if you’re looking for a particular thing it can be hard to find.  To that end, here are some links to get you started.
  • Centre for Effective Altruism has spun off or collaborated with a number of important projects in EA: Giving What We Can, 80,000 Hours, Animal Charity Evaluators, Life You Can Save, Global Priorities Project, EA Ventures…  all of which I have included in this document.  Their current plan appears to be that, plus some outreach.
  • EA Ventures is a new agency dedicated to matching up people with EA projects, people with money, and people with relevant skills.  EAV’s official areas of interest are the effective altruism trinity of global poverty, animal suffering, and existential risk, and most of the community suggestions they solicited fall into those three plus meta-EA work.  And yet the list is almost 10 pages long.  Even within those spaces (one of which the rest of the world already looks at pretty closely), there are a lot of ideas.  I’m also heartened to see “mental welfare” as a category, even if it’s currently empty.  Next time, EA Ventures, next time.
  • .impact started as a combination of “The Volunteer Committee to Spread and Improve Effective Altruism” and a platform for publicizing and networking your own projects.  They have just started branching out into more direct support, with a slush fund to help people put the finishing touches on their projects.  If you want to get involved formally, there are biweekly meetings with agendas and people leaving with action items and everything, but you’re also encouraged to just make a wiki page for your project and announce it on the Facebook group — presumably because you’re looking for comments or help, but it doesn’t obligate you to the formal group in any way.  They maintain a list of projects, most of which are related to movement building, but Tom Ash has confirmed that’s a matter of circumstance, not policy.  This seems like a great resource if you want to invest some time, but not necessarily a whole job’s worth of time, in EA, or if you’re looking for learning projects.  One thing I particularly appreciate about the list is that it calls out useful practice problems; if you’re going to learn Ruby or Python, it might as well be while patching up one of these projects.
  • Technically out of scope but potentially still useful: If you have an idea for an EA Project and need some advice or particular expertise, check out skillshare, which is a more direct skill matching website.
  • Also just slightly out of scope: 80,000 Hours, an organization dedicated to helping people choose the most effective career for them.  The input they most solicit is career decision case studies, but they also take new ideas through their forum.
  • Animal Charity Evaluators is a good example of a narrow cause focus that still has a lot of room for innovation.  Their planned research includes the very specific (“how does viewing a video on animals affect diet in the medium term?”) but also the very broad (“How do we measure animal welfare?”, which is itself at least five different questions, and “How have other social justice movements done things, and how would those things work for us?”, which is several graduate theses all on its own).  They actively encourage people with ideas in these spheres to contact them.
  • Your local EA org. You can find these on EA Hub, Meetup, or Facebook.  I can only speak authoritatively about Seattle’s, where most of the work is “coordinate and talk at meetings,” and I think we do a great job of letting people do as much of that as they want without pressuring them to do more.  Also, it is a great place to discuss economics with people who aren’t assholes.

Existential Risk

I struggled a bit writing this section.  The whole point of this exercise is helping people figure out where their idea falls in EA and who might want to hear about it.  Existential risk is pretty much the definition of unknown unknowns, and while I understand the broad strokes, I’m not at all confident I can place boundaries without cutting off interesting and relevant work.  The best I can do is tell you where to look.

  • Machine Intelligence Research Institute: What happens when we create self-improving artificial intelligence? How do we make sure a small difference in values doesn’t lead to something undesirable?  It’s a dense subject, but MIRI makes it as accessible as it can, with a full page devoted to how to learn more (in great detail) and how to work with them.  I know at least one person has followed their suggestion to start a journal club, because they did it in Seattle.
  • Future of Humanity Institute (Oxford).  FHI is an academic institute dedicated to investigating big picture questions.  You know how humans have a tendency to just deploy new technology and work out the consequences by observing them?  FHI is attempting to get ahead of that.  They’re concerned about malicious AI, but also about more mundane potential problems like technology-induced systemic unemployment and Big Brother.  You can read about their research here.  They appear to operate on academic-style collaboration rules, which means you have to talk to the individual person you want to work with.
  • Global Priorities Project is a collaboration between FHI and CEA, and its goal is to bring their academic work to governments and industry.   It focuses specifically on prioritization of competing proposals, including research into how to prioritize when there are so very many unknowns.  You can read about their research here.  They have an active interest in collaboration or helping people (especially government and industry) use their results.
  • Center for Study of Existential Risk (Cambridge).  Did you expect Cambridge to let Oxford have something they didn’t?  CSER is attempting to develop a general structural framework for evaluating and mitigating extreme risk, regardless of field.  And malicious AI.  Per Sean Holden’s comments on the EA Forum, they are not currently looking for volunteers, but if you want to be considered in the future you can e-mail them with your availability and the specifics of the skills you are offering, or keep your eyes open for requests on the EA Facebook group, LessWrong, or the EA Forum itself.
  • Future of Life Institute:  Organizations that focus on existential risk tend to be affiliated with big institutions, which tells you something about the problem space.  In contrast, FLI is an all-volunteer org with open applications for both volunteers and grants on their website (although the deadline has passed for the first round of grants).  Their immediate focus also appears to be malicious AI, but they’re interested in bio-, nano-, and nuclear tech, environmental catastrophes, and ??? as well.

Fundraising

This is a super important category that I am so, so glad other people are handling, because asking people for money is not in my core skill set.  Left to my own devices I would do research (you know, like this) and then hope money appears.  In my defense, that’s working very well for GiveWell.  But I’m really glad there are other people working on more direct, asking-for-money type projects.

  • Giving What We Can is dedicated to raising and maintaining the commitment level of people already into effective altruism, primarily by asking people to pledge to give 10% of their income to charity forever.  The GWWC pledge originally specified third world poverty charities only, but now covers whatever the signer believes will be most effective.  If you want to get involved with GWWC, the obvious thing to do is sign the pledge.  If you want to do more than that, you can found or contribute to your local GWWC chapter.  They’re also open to volunteers in other areas.  They offer charity recommendations, which are based primarily but not exclusively on GiveWell’s work.
  • Life You Can Save takes a different tack, offering a pledge for a much smaller amount (1% of income, or a personal best) but marketing it to a wider variety of people.  This may be working, since 17x as many people have taken the LYCS pledge as the GWWC pledge (although we can’t tell which moves more money without knowing their incomes).  Their primary request is for people to run Giving Games, but they will also at least talk to people who want to volunteer in other ways.
  • Charity Science is an effective altruist non-profit that raises money for GiveWell-recommended charities by experimenting with a wide range of fundraising methods. There are several opportunities for individuals to raise money with them, via things like birthday fundraisers or the $2.50/day challenge.  This is where I would send people who want to invest their time in EA on a well-defined path.  They also research the effectiveness of various fundraising methods and solicit volunteer assistance with that, which is a much more INTJ-friendly project.  You can get more info via their newsletter.  If you have an idea for a new kind of fundraiser, these are the people I would approach.  Specifically, I would talk to Tom Ash, because he is exceptionally open to talking to people.

Thanks to John Salvatier and Jai Dhyani for comments on earlier drafts of this post, Ben Hoffman for the original idea, and Tom Ash for answering questions on .impact and Charity Science.

4 thoughts on “Map of Open Spaces in Effective Altruism”

  1. I would add the Global Catastrophic Risk Institute to X risk. It seeks to prioritize interventions across all the risks that could significantly harm civilization in the long term. Disclosure: I am a research associate there.
