I am really, really trying to stay away from takedown pieces, but multiple books on the importance of practice and irrelevance of talent cite Ericsson, Krampe, and Tesch-Romer’s “The Role of Deliberate Practice in the Acquisition of Expert Performance” (PDF) as proof of the importance of deliberate practice (and the irrelevance of innate talent, but that’s a different issue). This study compared the best, excellent, and good violinists, and their methods of study. They claim the most accomplished students had more accumulated hours of practice than the least, and while they all currently practiced the same amount, the most accomplished students spent more of it in deliberate practice, thus proving its importance.
Let me make this quick: the study had an n of 30, spread out over three treatment groups, all of which were recruited from a music school, and the measure of success was not “successful career” but “professor prediction of successful career”. So if the sample size was big enough to prove anything (which it wasn’t) and the sample was representative of the population (which it wasn’t), they still only proved that professors like people who study a lot. We’re not even getting into how they estimated cumulative lifetime hours of practice for people who picked up an instrument at four years old. Having this be a foundational study for multiple books is embarrassing.
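To put a number on the sample-size complaint, here is a rough power calculation (my own sketch, using the standard normal approximation; the effect size of 0.8 is an assumed, generously large value for illustration, not something from the paper):

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def approx_power(d, n_per_group):
    """Approximate power of a two-sided, two-sample test at alpha = 0.05,
    for standardized effect size d and n subjects per group."""
    z_crit = 1.96
    return normal_cdf(d * sqrt(n_per_group / 2) - z_crit)

# Ten violinists per group, assuming a large true effect (d = 0.8):
print(round(approx_power(0.8, 10), 2))  # → 0.43
```

In other words, even if the effect were large and real, a study this size would miss it more often than it found it.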
My most recent encounter with this study was from The Talent Code, which cites many studies showing the best people in a field engage in deliberate practice and zero experiments showing people improved after being taught deliberate practice. I looked on Google Scholar and found mostly papers on how to deliberately practice teaching, not how to teach deliberate practice, and a few that taught using it (without a control group). Deliberate practice looks extremely plausible and I plan on applying more of it myself, but seriously, how has this idea been around for 20 years without anyone doing the most basic experiments on it?
Effective altruism is really extraordinarily good at telling people where to give money, and pretty okay at telling people how to create less animal suffering. Guidance on how to do anything else is substantially more opaque, because EA discourages a lot of traditional volunteering (at least as a way of improving the world; as a hobby most people are still okay with it). That’s a shame, because there’s a lot left to do.
There are an enormous number of unsolved problems in effective altruism, and philanthropy in general. And there’s actually a fair amount of support for you, if you want to research or attempt a solution. But the support is not very discoverable. A lot of the information spreads via social osmosis, and if you’re more than two degrees out from one of the big EA hubs the process is slow and leaky. It’s not always obvious from the outside how approachable many people and organizations in EA are, or what problems are waiting for solutions. But once you have that knowledge, it’s really hard to remember what it was like when you didn’t, which makes it hard to figure out what to do about the problem.
This is my attempt to address that. Below I have listed info on the major EA organizations, with emphasis on what problems they are interested in, how to learn more about them, and what kind of contact they encourage. I would be surprised if this was enough to enable someone to go from 0-to-implementation on its own, but my hope is that it will provide some scaffolding that speeds up the process of learning the rest of it.
Institutions: did I get something about you wrong? Miss you entirely? Please let me know and I will update; I want this to be as accurate as possible.
GiveWell’s most public work focuses on identifying the best charities working on third world poverty, but they have spun off a subgroup called the Open Philanthropy Project. OPP investigates not just specific interventions, but causes, trying to determine which areas have the crucial combination of importance, tractability, and neglectedness. The good news is that you can read about it in detail on their blog. The bad news is that you can read about a great many things in a great deal of detail on their blog, so if you’re looking for a particular thing it can be hard to find. To that end, here are some links to get you started:
Centre for Effective Altruism has spun off or collaborated with a number of important projects in EA: Giving What We Can, 80,000 hours, Animal Charity Evaluators, Life You Can Save, Global Philanthropy Project, EA Ventures… all of which I have included in this document. Their current plan appears to be that, plus some outreach.
EA Ventures is a new agency dedicated to matching up people with EA projects, people with money, and people with relevant skills. EAV’s official areas of interest are the effective altruism trinity: global poverty, animal suffering, and existential risk, and a lot of the things on the list of community suggestions they solicited fall into those three + meta-EA work. And yet, the list is almost 10 pages long. Even within those spaces (one of which the rest of the world already looks at pretty closely), there are a lot of ideas. I’m also heartened to see “mental welfare” as a category, even if it’s empty. Next time, EA Ventures, next time.
.impact started as a combination of “The Volunteer Committee to Spread and Improve Effective Altruism” and a platform to publicize and network for your own projects. They have just started branching out into more direct support with a slush fund to help people put the finishing touches on their projects. If you want to get involved formally, there are biweekly meetings with agendas and people leaving with action items and everything, but you’re also encouraged to just make a wiki page for your project and announce it on the Facebook group. Presumably because you’re looking for comments or help, but it doesn’t obligate you to the formal group in any way. They maintain a list of projects, most of which are related to movement building, but Tom Ash has confirmed that’s a matter of circumstance, not policy. This seems like a great resource if you want to invest some time, but not necessarily a whole job’s worth of time, in EA, or if you’re looking for learning projects. One thing I particularly appreciate about the list is that it calls out useful practice problems; if you’re going to learn Python, it might as well be while patching up effective-altruism.com.
Technically out of scope but potentially still useful: If you have an idea for an EA Project and need some advice or particular expertise, check out skillshare, which is a more direct skill matching website.
Animal Charity Evaluators is a good example of a narrow cause focus still having a lot of room for innovation. Their planned research includes the very specific (“how does viewing a video on animals affect diet in the medium term”), but also the very broad (“How to measure animal welfare?”, which is itself at least five different questions, and “How have other social justice movements done things? How would those things work for us?”, which is several graduate theses all on its own). They actively encourage people with ideas in these spheres to contact them.
Your local EA org. You can find these on EAhub, or Meetup, or Facebook. I can only speak authoritatively about Seattle’s, where most of the work is “coordinate and talk at meetings”, and I think we do a great job of letting people do as much of that as they want without pressuring them to do more. Also, it is a great place to discuss economics with people who aren’t assholes.
I struggled a bit writing this section. The whole point of this exercise is helping people figure out where their idea falls in EA and who might want to hear them. Existential risk is pretty much the definition of unknown unknowns, and while I understand the broad strokes, I’m not at all confident I can place boundaries without cutting off interesting and relevant work. The best I can do is tell you where to look.
Machine Intelligence Research Institute: What happens when we create self-improving artificial intelligence? How do we make sure a small difference in values doesn’t lead to something undesirable? It’s a dense subject, but MIRI makes it as accessible as it can, with a full page devoted to how to learn more (in great detail) and how to work with them. I know at least one person has followed their suggestion to start a journal club, because they did it in Seattle.
Future of Humanity Institute (Oxford). FHI is an academic institute dedicated to investigating big picture questions. You know how humans have a tendency to just deploy new technology and work out the consequences as they happen? FHI is attempting to get ahead of that. They’re concerned about malicious AI, but also more mundane potential problems like technology-induced systemic unemployment and Big Brother. You can read about their research here. They appear to be operating on academic-style collaboration rules, which means you have to talk to the individual person you want to work with.
Global Priorities Project is a collaboration between FHI and CEA, and its goal is to bring their academic work to governments and industry. It focuses specifically on prioritization of competing proposals, including research into how to prioritize when there are so very many unknowns. You can read about their research here. They have an active interest in collaboration or helping people (especially government and industry) use their results.
Center for Study of Existential Risk (Cambridge). Did you expect Cambridge to let Oxford have something they didn’t? CSER is attempting to develop a general structural framework for evaluating and mitigating extreme risk regardless of field. And malicious AI. Per Sean Holden’s comments on EA forum, they are not currently looking for volunteers but if you want to be considered in the future you can e-mail firstname.lastname@example.org with your availability and the specifics of the skills you are offering, or keep your eyes open for requests on EA Facebook group/LessWrong/EA Forum itself.
Future of Life Institute: Organizations that focus on existential risk tend to be affiliated with big institutions, which tells you something about the problem space. In contrast, FLI is an all volunteer org and has open applications for both volunteers and grants on their website (although the deadline has passed for the first round of grants). Their immediate focus appears to also be malicious AI, but they’re also interested in bio-, nano-, and nuclear tech, environmental catastrophes, and ???.
This is a super important category that I am so, so glad other people are handling, because asking people for money is not in my core skill set. Left to my own devices I would do research (you know, like this) and then hope money appears. In my defense, that’s working very well for GiveWell. But I’m really glad there are other people working on more direct, asking-for-money type projects.
Giving What We Can is dedicated to upping and maintaining the commitment of people already into effective altruism, primarily by asking people to pledge 10% of their income to charity forever. The GWWC pledge originally specified third world poverty charities only, but now includes whatever the signer believes will be most effective. If you want to get involved with GWWC, the obvious thing to do is sign the pledge. If you want to do more than that you can found or contribute to your local GWWC chapter. They’re also open to volunteers in other areas. They offer charity recommendations, which are based primarily but not exclusively on GiveWell’s work.
Life You Can Save takes a different tack, offering a pledge for a much smaller amount (1% of income, or personal best) but marketing it to a wider variety of people. This may be working, since 17x as many people have taken the LYCS pledge as the GWWC pledge (although we don’t know which moves more money without knowing their incomes). Their primary request is for people to run Giving Games, but they will also at least talk to people who want to volunteer in other ways.
Charity Science is an effective altruist non-profit that focuses on raising money for GiveWell-recommended charities by exploring any and all fundraising methods. There are several opportunities for individuals to raise money with them, via things like birthday fundraisers or the $2.50/day challenge. This is where I would send people who want to invest their time in EA on a well defined path. They also research the effectiveness of various methods of fundraising, and solicit volunteer assistance, which is a much more INTJ-friendly project. You can get more info via their newsletter. If you have an idea for a new kind of fundraiser these are the people I would approach. Specifically I would talk to Tom Ash, because he is exceptionally open to talking to people.
EA Global Summit announced. Consider applying if you’re very into world-improving even if you’re not into effective altruism specifically; they’re reserving a good chunk of tickets for people who aren’t, in order to increase thought diversity, which I think is an excellent idea.
Atul Gawande on unnecessary medical care. It’s old news that doctors do lots of things that actively make patients worse; what I found most depressing is:
we’ve assumed, he says, that cancers are all like rabbits that you want to catch before they escape the barnyard pen. But some are more like birds—the most aggressive cancers have already taken flight before you can discover them, which is why some people still die from cancer, despite early detection. And lots are more like turtles. They aren’t going anywhere. Removing them won’t make any difference
Part of the solution may not be using the same word for “horrible wasting disease” and “a few cells not operating optimally”, because people will do anything to get rid of cancer regardless of the actual risks.
I had always assumed t-shirt based fundraisers raised absolutely nothing, because t-shirts are so expensive. I knew this because I ran my old dojo’s Zazzle store, and to keep the prices even vaguely plausible I had to keep our profit margin below 10%. Throw in the time to administer them, and they’re basically a wash, right? Turns out, nope: t-shirts are extremely profitable. A $4 shirt to let you signal virtue seems like a reasonable trade for $65 cash.
The good news is that the bullshit floor for t-shirt fundraisers is actually much lower than we thought (although bad organizations can still dilute their effectiveness, and cash is still better). I thought the bad news was we were all overpaying for novelty nerd t-shirts, but it turns out prices on those have come down substantially since I left the market. I guess capitalism worked?
This week’s surprisingly well-sourced weird-ass science fact from cracked.com is “Among depressed people, sweating is heavily predictive of suicide.” According to them, 97% of subjects getting treatment for depression who sweated less in response to loud noises went on to commit suicide, compared with 2% of people with a normal sweat response. Those numbers are astonishing, and a sample size of 800 is big for a psych study. Not to doubt the website that brought us “6 Fictional Alcoholic Beverages That Actually Get You Drunk”, but I wanted to check the tape on this one.
This was actually a metastudy, combining the results of several other studies. Five different studies with a total of n participants are not nearly as good as a single study with n participants: first because there are more opportunities to throw out data, second because the experiments rarely have exactly the same setup. A person who scored high-sweating in one setup might score low-sweating in another.
The total of 783 people includes people with bipolar disorder (126), depression (540), and other, which meant either dysthymia (mild depression), or depression AND a personality or adjustment disorder (118). Those are different things. Bipolar patients are at significantly higher risk of suicide than unipolar patients, because they’re more likely to simultaneously have the desire to end their life and the will to act on it. And Other will include people with borderline personality disorder, which is really its own issue.
Only 36 people total completed suicide. The researchers’ specificity claim covers completed suicides (violent or not) AND violent attempts, which may have occurred before or after the study took place. They excluded non-violent suicide attempts out of the belief that non-violent attempts were more likely to have been cries for help. 9% of completed suicides were non-violent, whereas 55% of attempts were, so that part seems fair. On the other hand, predicting the past is not nearly as impressive or useful as predicting the future.
In the discussion section, the authors acknowledge that many of the subjects were selected from mental hospitals where they had been admitted due to a past or feared suicide attempt. This is what’s known as a biased sample. The data is still suggestive of underlying physiological processes, but the specificity/sensitivity numbers can’t be translated to the general population without further work, and in particular I wouldn’t recommend using this as a diagnostic criterion for whether hospital admittance is a good idea.
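To see why the biased sample matters so much, treat the quoted figures as a sensitivity of 97% and a specificity of 98% and run them through Bayes’ rule (a sketch of my own; the 0.5% general-population rate below is a made-up illustration, not a real estimate):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: P(outcome | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# In the study sample, 36 of 783 subjects completed suicide:
print(round(ppv(0.97, 0.98, 36 / 783), 2))  # → 0.7
# At a hypothetical, much lower general-population rate:
print(round(ppv(0.97, 0.98, 0.005), 2))     # → 0.2
```

The same test goes from mostly-right to mostly-false-alarms purely because the base rate changed, which is why these numbers can’t leave the hospital sample without further work.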
So it looks like Cracked was severely exaggerating the claims of this research, which is too bad. But there does appear to be some effect, which could help us figure out the physiology of depression, which would be extremely useful.
There are a lot of games out there that attempt to be educational. I break them down into the following categories.
Gamification:
You do the exact same work, but receive stickers or points or badges for doing so.
Example: Khan Academy badges, arguably all grades. Extra Credits describes an intricate system here.
I was a grade grubber for years, and I’ll admit I still kind of miss the structure of school. But gamification wears off really quickly, and Alfie Kohn has made a career out of arguing that extrinsic rewards are inherently harmful. The one benefit I see in Extra Credits’s system is that it would reward students for other students’ performance, cutting down on bullying the smart kids. It may also encourage the strongest students to help the weakest ones. Or it might make everyone hate the weak students or help them cheat so they can get a pizza party. Kids will do a lot for a pizza party. And they may start to resent the smart kids for not helping enough. So I guess I’m against this, but that may stem from years in the worst possible educational environment.
In that same video EC suggests the much-less-likely-to-backfire benefits of tailored difficulty curves and immediate feedback. These strike me as much more valuable, but they will mostly fall in another category.
Fluency Builders:
These are games that don’t really teach you anything you could use on a test, and often have fictionalized elements, but do build conceptual fluency, which can make it easier to learn real things later. Pretty much any game set in the real world qualifies for this, but my personal favorite is Oregon Trail for introducing millions of school children to the concept of dysentery.
A lot of the games on Extra Credits’s steam shelf fall into this category. I was initially pretty dismissive of this, but I’ve changed my mind.
There are a lot of reasons that middle class + white children do better at school than poor + minority children, but one of them is the amount and type of knowledge they’re exposed to at home. Poor parents flat out talk to their children less, which gives them less time to transmit knowledge. They’re also less likely to have the kind of knowledge their children will be tested on at school. As Sharon Astyk so heartbreakingly puts it, her foster children needed to be taught how to be read to but had a highly developed internal map of food-containing garbage cans.
There’s no video game for learning to not chew on books. But there are lots of video games with maps. A big part of my 6th grade social studies class was blank map tests, where we would be given a blank map and have to label all the countries. We had a decent teacher, so I suspect this was fluency building and not drilling for drilling’s sake: when we read about Egypt and Greece and Rome, she wanted us to be able to put events in geographical context. I didn’t know where every country was, but I did know, or at least recognize, the names of most countries. This put me strictly ahead of the girl who called Syria “cereal”. When we took tests I only had to put effort into remembering locations, she had to put effort into locations, and names, and possibly what a country was. And it’s really hard to put in that effort when you don’t see a point and this other girl is doing so much better than you without even trying. Carmen San Diego, or any video game with a strong sense of real-world place, could have given her a way to catch up. Even if it wasn’t fun, it wouldn’t have had the same ugh field around it that studying did.
This is related but not identical to what Extra Credits describes as “familiarity builders“, where the goal is basically to make something interesting enough people look up the actual facts on wikipedia later.
Drill and Killers:
These games overlay what they’re trying to teach on top of typical game mechanics. These are more than fluency builders because they use the same skills you’d use on a test or in real life, but they don’t teach you anything new, they just give you practice with what you already know. Examples: Math Blasters, Mario Teaches Typing.
How good these are depends on who you ask. I suspect you need a certain minimal fluency to make them at all fun, which makes the difficulty curve important. And they’re less fun or game-like than the other types on this list. But some things just have to be drilled, and video games are a more fun way to do that than flash cards.
Abstract Skill Builders:
Games that teach useful skills. They would need translation to be useful on a test or in real life, but they do build up some part of the brain.
Example: Logical Journey of The Zoombinis, which teaches pattern matching, logical thinking, and arguably set theory.
On one hand, abstract skills like these are very hard to teach. On the other hand, people are very bad at transferring skills from one domain to another (which is why some people can make change just fine but have trouble with contextless arithmetic, or can do arithmetic but not word problems). On the third hand, people are very bad at transferring skills from one domain to another, so if there’s a tool that helps them learn that, it would be very valuable.
You don’t technically have to learn anything to play these games, but you will be rewarded if you do.
Example: Sine Rider. Technically you can get by with guess and check, but you will win faster if you actually understand trigonometric equations.
A step further than skill builders or drillers, these games have both flavor and mechanics based in the subject you are trying to teach. These games are not necessarily complete substitutes for textbooks because it’s hard for them to be comprehensive, but they do completely teach whatever it is they’re trying to teach.
There’s a fair number of horror movies and games where you can only see the ghost with special glasses or a camera. This game lets you live that with your phone camera. In your house. Good luck.
App to help Alzheimer’s patients remember their loved ones. The app is preloaded with descriptions of specific people- the users’ relationship to them, photos of memories, etc. When that person (with their phone, running their own version of the app) gets close enough, the patient’s app will flash that data. The article focuses on how this could delay memory loss, but I think it will also be helpful in alleviating some of the frustration and humiliation of Alzheimer’s.
Patients’ ratings of drugs are negatively correlated with doctors’ ratings. There are some non-awful ways this could be true*, and we could resolve this conflict with trials directly comparing drugs. In the particular case of depression there’s one study directly comparing MAOIs to SSRIs. It was funded by the patent holder of one of the most popular SSRIs, and it found them to be equally effective, with SSRIs being safer (which depends a lot on how you weight different problems).
This isn’t the manufacturers’ fault. There’s no reason the manufacturer, who has put a lot of money into developing this drug and all the others that never made it to market, should spend money proving you shouldn’t buy their product. The problem is that drug company-funded research is almost all there is. In any sane world a public health agency would have run this trial long ago.
I’m extra frustrated by this because the FDA makes drug companies spend an enormous amount of money to get their drugs approved, and we still don’t get the data we really need. It’s security theater for medicine.
*E.g., a treatment needs to be rated at least so highly by doctors or patients to be prescribed at all; or, for something like depression where patients often need to try multiple drugs before finding one that works, the first-line drug will show more failures even if it’s more effective on average, because people who thrive on it never go on to try the drugs that wouldn’t have worked for them.
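The first of those mechanisms is a selection effect you can demonstrate with a toy simulation (entirely my own construction, not from any of the studies above): draw doctor and patient ratings independently, keep only the drugs that at least one group rates highly enough to keep prescribing, and the surviving ratings come out negatively correlated.

```python
import random

random.seed(1)

def correlation(xs, ys):
    """Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

doc, pat = [], []
for _ in range(100_000):
    d, p = random.random(), random.random()  # independent ratings
    if max(d, p) > 0.8:                      # prescribed only if someone likes it
        doc.append(d)
        pat.append(p)

print(round(correlation(doc, pat), 2))  # negative, despite independent inputs
```

No drug is actually worse for patients when doctors like it; conditioning on “somebody rates it highly” manufactures the negative correlation all on its own.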