In Defense Of The Sunk Cost Fallacy

Dutch disease is the economic concept that if a country is too rich in one thing, especially a natural resource, every other sector of the economy will rot because all available money and talent will flow towards that sector.  Moreover, that sector dominates the exchange rate, making all other exports uncompetitive.*  It comes up in foreign development a lot because charitable aid can cause Dutch disease: by paying what the funders would consider a “fair wage”, charities position themselves as by far the best employers in the area.  The best and the brightest African citizens end up chauffeuring foreigners rather than starting their own businesses, which keeps the society dependent on outside help.  Nothing good comes from having poverty as your chief export.

I posit that a similar process takes place in corporations.  Once they are making too much money off a few major things (Windows, Office, AdWords, SUVs), even an exceptionally profitable project in a small market is too small to notice.  Add in the risk of reputation damage and the fact that all projects have a certain amount of overhead regardless of size, and it makes perfect sense for large companies to discard projects a startup would kill for (RIP Reader).**

That’s a fine policy in moderation, but there are problems with applying it too early.  Namely, you never know what something is going to grow into.  Google search originally arose as a way to calculate impact for academic papers. The market for SUVs (and for that matter, cars) was 0 until someone created it.  If you insist on only going after projects that directly address an existing large market, the best you’ll ever be is a fast follower.***

Simultaneously, going from zero to an enormous, productive project is really, really hard (see: Fire Phone, Google+, Facebook’s not-an-operating-system).  Even if you have an end goal in mind, it often makes sense to start small and iterate.  Little Bets covers this in great detail.  And if you don’t have a signed card from G-d confirming your end goal is correct, progressing in small iterative steps gives you more information and more room to pivot.

More than one keynote at EA Global talked about the importance of picking the most important thing, and of being willing to switch if you find something better.  That’s obviously great in some cases, but I worry that this hyperfocusing will cause the same problems for us that it does at large companies: a lack of room to surprise ourselves.  For example, take the post I did on interpretive labor.  I was really proud of that post.  I worked hard on it.  I had visions of it helping many people in their relationships.  But if you’d asked at the time, I would have predicted that the Most Effective use of my time was learning programming skills to increase my wage or my value in direct work, and that that post was an indulgence.  It never in my wildest dreams occurred to me that it would be read by someone in a far better position than me to do something about existential risk, and that it would be useful to them in connecting two key groups that weren’t currently talking to each other, but apparently it was.  I’m not saying that I definitely saved us from papercliptopia, but it is technically possible that that post (along with millions of other flaps of butterfly wings) will make the marginal difference.  And I would never even have known it did so except that the person in question reached out to me at EA Global.****

Intervention effectiveness may vary by several orders of magnitude, but if the confidence intervals are just as big, it pays to add a little wiggle to your selection.  Moreover, constant project churn has its own cost: it’s better to finish the third-best thing than to have two half-finished attempts at different best things.  And you never know what a third-best project will teach you that will help an upcoming best project; most new technological innovations come from combining things from two different spheres (source), so hyperfocus will eventually cripple you.
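To make the “big confidence intervals” point concrete, here is a minimal simulation sketch (mine, not from the post): if true effectiveness spans several orders of magnitude but your estimates carry noise of comparable size, the project with the best estimate is often not the truly best one.  All numbers below are illustrative assumptions.

```python
# Sketch: when estimates are as noisy as effectiveness is spread out,
# the apparent "best" project is frequently not the true best.
import numpy as np

rng = np.random.default_rng(0)
n_projects, n_trials = 20, 10_000

hits = 0
for _ in range(n_trials):
    # True (log) effectiveness spans several orders of magnitude.
    true_log = rng.normal(0, 2, n_projects)
    # Our estimates add noise of comparable magnitude (assumed, for illustration).
    est_log = true_log + rng.normal(0, 2, n_projects)
    # Does the project with the best estimate actually have the best true value?
    hits += np.argmax(est_log) == np.argmax(true_log)

print(f"Estimated-best project is truly best in {hits / n_trials:.0%} of trials")
```

Under these (made-up) assumptions the estimated-best option is the true best well under half the time, which is the sense in which a little wiggle in your selection is cheap.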

In light of all that, I think we need to stop being quite so hard on the sunk cost fallacy.  No, you should not throw good money after bad, but constantly re-evaluating your choices is costly and (jujitsu flip) will not always be the most efficient use of your resources.  In the absence of a signed piece of paper from G-d, biasing some of your effort towards things you enjoy and have a comparative advantage in may in fact be the optimal strategy.

Using your own efficiency against you

My hesitation is that I don’t know how far you can take this before it stops being effective altruism and starts being “feel smug and virtuous about doing whatever it is you already wanted to do”, a thing we’re already accused of doing.  Could someone please solve this and report back?  Thanks.

* The term comes from the Dutch economic crash following the discovery of natural gas in the Netherlands.  Current thought is that it was not actually Dutch disease, but that renaming the phenomenon after some third-world country currently being devastated by it would be mean.

** Simultaneously, developers have become worse predictors of the market in general.  It used to be that nerds were the early adopters, and if they loved something everyone would be using it within a year (e.g. Gmail, smartphones).  As technology, and particularly mobile, advances, this is no longer true.  Nerds aren’t power users for tablets because we need laptops, but the tablet power user is a powerful and predictive market.  Companies now force devs to experience the world like users (Facebook’s order to use Android) or just outright tell them what to do (Google+).  This makes their ideas inherently less valuable than they were.  I don’t blame companies for shifting to a more user-driven decision-making process, but it does make things less fun.

*** Which, to be fair, is Microsoft’s actual strategy.

**** It’s also possible the post accomplished nothing, or made things worse.  But the ceiling of its effectiveness is higher than I ever imagined, and the uncertainty only makes my point stronger.

IQ Tests and Poverty

Recently I read Poor Economics, which is excellent at doing what it promises: explaining the experimental data we have for what works and does not work in alleviating third world poverty, with some theorizing as to why.  If that sounds interesting to you, I heartily recommend it.  I don’t have much to add to most of it, but one thing that caught my eye was their section on education and IQ tests.

In Africa and India, adults believe that the return to education is S-shaped (meaning each additional unit of education is more valuable than the one before, at least up to a point).  This leads them to concentrate their efforts on the children who are already doing the best.  This happens at multiple levels: poor parents pick one child to receive an education and put the rest to work much earlier; teachers put more of their energy into their best students.  Due to a combination of confirmation bias and active maneuvering, the children of rich parents are much more likely to be picked as The Best, regardless of their actual ability.  Not only does this get them more education, but education is viewed as proof that one is smart, so they’re double winners.  This leaves some very smart children of poor parents operating well below their potential.

One solution to this is IQ tests.  Infosys, an Indian IT contractor, managed to get excellent workers very cheaply by giving IQ tests to adults and hiring those who scored well, regardless of education.  The authors describe experiments in Africa giving IQ tests to young children so that teachers will invest more in the smart but poor children.  This was one of the original uses of the SATs in America: identifying children who were very bright but didn’t have the money or connections to go to Ivy League feeder high schools.

This is more or less the opposite of how critics view standardized testing in the US.  They believe the tests are culturally biased such that a small sliver of Americans will always do better, and that basing resource distribution on those tests disenfranchises the poor and people outside the white suburban subculture.  What’s going on here?

One possible explanation is that one group or the other is wrong, but both sides actually have pretty good evidence.  The IQ tests are obviously being used for the benefit of very smart poor children in the third world.  And even tests without language can’t get around the fact that being poor takes up brainspace, so any test will systematically underestimate poor children.  So let’s assume both groups are right at least some of the time.

Maybe it’s the difference in educational style that matters?  In the third world, teachers are evaluated based on their best student.  In the US, No Child Left Behind codified the existing emphasis on getting everyone to a minimum benchmark.  Kids evaluated as having lower potential than they actually have may receive less education than they should, but they still get some, and in many districts gifted kids get the least resources of any point on the bell curve.

Or it could be because the tests are trying to do very different things.  The African and Indian tests are trying to pick out the extremely intelligent who would otherwise be overlooked.  The modern US tests are trying to evaluate every single student and track them accordingly.  When the SATs were invented they had a job much like the African tests; as more and more people go to college, their job is increasingly to evaluate the middle of the curve.  It may be that these are fundamentally different problems.

This has to say something interesting about the meaning of intelligence or usefulness of education, but I’m not sure what.

Links 5/22/15

Effective Social Justice Interventions: this is a great example of using EA as a technique to address areas the EA-as-philosophy sphere hasn’t touched.

The Last Day of Her Life:  a psychology researcher’s decision to end her life as her Alzheimer’s progresses, and the process of doing so.   Fun fact: state-sanctioned euthanasia requires that you be mentally competent and have less than six months to live.  Alzheimer’s patients are mentally incompetent years before they die of the disease.

The (crime-related) Broken Windows Theory states that low-level visible crime (graffiti, litter) leads to more crime of all varieties. It is most famous for being Rudy Giuliani’s method for reducing crime in New York City.  My understanding was that this had been debunked, and that NYC’s drop in crime was caused mostly by demographic trends.  But some researchers ran fairly rigorous tests of it and it held up.  Caveat: they tested visible crime’s effect on other crimes of similar magnitude, not escalations like theft.

This week’s “beautiful theory killed by an ugly gang of facts” award goes to the meditation chapter of The Willpower Instinct, which promises fantastic benefits from the very beginning.  In fact it says that meditating badly is in some ways better for you than meditating well, because it is the practice of refocusing yourself after you become distracted that is so beneficial.  Unfortunately, none of the cited studies shows exactly that, and what they do show is a small effect on a noisy variable in a small sample.

[I don’t want to be too hard on The Willpower Instinct.  It encourages you to do your own experiments and stick with what works, I found some of it helpful, and it’s good for getting yourself into a willpower mindset.  It’s just scientifically weaker than it would have you believe.]

Sine Rider: if xkcd were a video game.