What do I do now?

I used to think poverty/global health was the most important cause, because it focused on the worst-off people.  Being around effective altruism exposed me to the argument that existential risk (things that might wipe out the entire planet) and catastrophic risk (things that leave life but destroy civilization) are more important.  I never had a good argument against them, but it was basically impossible to have a good argument against them, and I’m weirdly easy to talk into things.  Late in my tenure at my last (poverty-focused) job, I was no longer certain that poverty was most important, but I was sure that I was in a uniquely good position to work on it, and that finishing the third-best thing was better than half-finishing the first-best.

When I lost my job, I lost that excuse.  I had to decide what was actually most important (modulo what was tractable to me personally).  I was hoping the recent Effective Altruism Global conference would provide clarity on this, but mostly it did not.  I’ve been exposed to the EA arguments a lot; paying more attention and hearing slightly better versions was not going to change anything.  What I need to do is find other sources of information and investigate what they think is most important, so that I’m evaluating genuinely new information.  I don’t think I can be confident in my decision without that.

I’m currently in the market for cause areas to investigate, but more importantly, for potential sources of new cause areas.  What are the equivalents of EAG that would expose me to dramatically different ideas about what is most important?

Potential cause areas, gathered from who knows where:

  • Racism in America
  • Aging
  • Nutrition
  • Make people socially smarter
  • Education
    • Gifted education
    • Low-achiever education
  • Medicine
  • Mental/emotional health
  • Stupid government regulations
    • Housing
    • Medicine
    • Regulatory capture
    • Criminal justice reform
  • X-risk
    • Artificial intelligence
    • Disease
    • Nuclear war
  • Getting us off the planet
  • Universal Basic Income

Added later:

  • Baumol’s cost disease
  • Conventional warfare
  • FDA’s cranial-rectal insertion
  • Sleep deprivation

Potential EA-equivalents, gathered from a week of paying attention and looking for such things:

  • Startup Societies Foundation
  • Long Now Foundation
  • Foresight Institute

What I am looking for now:

  • Suggestions on other cause areas, especially if they come with lots of information on said cause area.
  • Suggestions of other groups to investigate.
  • People to listen to me and help me work out my thoughts on particular cause areas.
  • Tools that will help me think about this more clearly.

UPDATE 8/18

Daniel asks: “It might be useful to turn the question around. Assume your fellow EAs are doing their calculations well, and ask what your comparative advantages are, then look for high-impact ways to apply those.

It’s the same search problem, just starting from the opposite end, where the branching factor is lower.”


This is what I’ve been doing; I feel very strongly that now is top-down time.
Additionally, I don’t trust EA calculations.  There is no way that 3 + meta (the traditional EA cause areas of global poverty, animal welfare, and existential risk, plus meta-charity) is the correct number of causes.

6 thoughts on “What do I do now?”

  1. It might be useful to turn the question around. Assume your fellow EAs are doing their calculations well, and ask what your comparative advantages are, then look for high-impact ways to apply those.

    It’s the same search problem, just starting from the opposite end, where the branching factor is lower.

  2. Topher Hallquist offered a good counterpoint to the argument that EA’s historical selection of cause areas is likely arbitrary/wrong. Obviously we still want to take good opportunities if they’re outside of those areas (and OpenPhil funds such opportunities, most notably in U.S. criminal justice reform), but I think there are decent reasons to suspect that most of the big wins will be in those areas.

  3. I don’t like the current use of the term “cause” in EA, and I don’t like the way the landscape is divided into “3 + meta” causes. I’m pretty convinced that most of the accessible value is in the long-term future, and I think one of the biggest levers on the long-term future (taking into account the fact that it’s relatively neglected) will be artificial intelligence. So my advice would be to apply an AI lens to everything. Want to promote democracy? Think about how AI can improve democracy. Want to help animals? Think about how AI can help animals. Want to focus on aging or nutrition? Think about what AI can do for them. I actually think there’s quite a lot of low-hanging fruit for the imaginative.

  4. If you are interested in newsworthy efforts to tackle the multifaceted harms of age-related damage/dysfunction, then fightaging.org is worth sticking in your reading feed. It’s how I personally found out that there are yearly charitable donation-matching drives for underfunded organisations like SENS.

    Here’s a post with the footnotes of 2016 as a starting point: https://www.fightaging.org/archives/2016/12/a-look-back-at-2016-in-longevity-science/
