Epistemic Spot Check: A Guide To Better Movement (Todd Hargrove)

Edit 7/20/17: See comments from the author about this review.  In particular, he believes I overstated his claims, sometimes by a lot.

 

This is part of an ongoing series assessing where the epistemic bar should be for self-help books.

Introduction

Thesis: increasing your physical capabilities is more often a matter of teaching your nervous system than of changing anything about your body directly.  This includes things that really really look like they’re about physical constraints, like strength and flexibility.  You can treat injuries and pain and improve performance by working on the nervous system alone.  More surprisingly, treating these physical issues will have spillover effects, improving your mental and emotional health. A Guide To Better Movement provides both specific exercises for treating those issues and general principles that can be applied to any movement art or therapy.

The first chapter of this book failed spot checking pretty hard.  If I hadn’t had a very strong recommendation from a friend (“I didn’t take pain medication after two shoulder surgeries” strong), I would have tossed it aside.  But I’m glad I kept going, because it turned out to be quite valuable (this is what triggered that meta post on epistemic spot checking).  In accordance with the previous announcement on epistemic spot checking, I’m presenting the checks of chapter one (which failed, badly), and chapter six (which contains the best explanation of pain psychology I’ve ever seen), and a review of model quality.  I’m very eager for feedback on how this works for people.

Chapter 1: Intro (of the book)

Claim: “Although we might imagine we are lengthening muscle by stretching, it is more likely that increased range of motion is caused by changes in the nervous system’s tolerance to stretch, rather than actual length changes in muscles. ” (p. 5). 

Overstated, weak.  (PDF).  The paper only claims this holds for up to 8 weeks of stretching, no further.  Additionally, the paper draws most (all?) of its data from two studies and doesn’t give the sample size of either.

Claim:  “Research shows the forces required to deform mature connective tissue are probably impossible to create with hands, elbows or foam rollers.” (p. 5). 

Misleading. (Abstract).  Where by “research” Hargrove means “mathematical model extrapolated from a single subject”.

Claim:  “in hockey players, strong adductors are far more protective against groin strain than flexible adductors, which offer no benefit” (p. 14).

Misleading. (Abstract) Sample size is small, and the study was of the relative strength of adductor to abductor, not absolute strength.

Claim: “Flexibility in the muscles of the posterior chain correlates with slower running and poor running economy.” (p. 14).

Accurate citation, weak study.  (Abstract) Sample size: 8.  Eight.  And it’s correlational.

[A number of interesting ideas whose citations are in books and thus inaccessible to me]

Claim:  “…most studies looking at measurable differences in posture between individuals find that such differences do not predict differences in chronic pain levels.”  (p. 31). 

Accurate citation.  (Abstract).  It’s a metastudy and I didn’t track down any of the 54 studies included, but the results are definitely quoted accurately.

 

Chapter 6: Pain

Claim: The “neuromatrix” approach to pain refers to the pattern of brain activity that creates pain; pain is an output of brain activity, not an input (p. 93).

True, although the ability to correctly use definitions is not very impressive.

Claim: “If you think a particular stimulus will cause pain, then pain is more likely.  Cancer patients will feel more pain if they believe the pain heralds the return of cancer, rather than being a natural part of the healing process.” (p93).

Correctly cited, small sample size. (Source 1, source 2, TEDx Talk).

Claim: Psychological states associated with mood disorders (depression, anxiety, learned helplessness, etc.) are associated with pain (p. 94).

True (source), although it doesn’t look like the study is trying to establish causality.

Claim: Many pain-free people have the kinds of injuries doctors blame pain on (p. 95).

True, many sources, all with small sample sizes.  (source 1, source 2, source 3, source 4, source 5)

Claim: When you take something for pain, relief kicks in before the chemical has a chance to do any work (p. 98).

True.  His source for this was a little opaque but I’ve seen this fact validated many other places.

Claim: We know you can have pain without a stimulus because you can have arm pain without an arm (p. 102).

True, phantom limb pain is well established.

Claim: Some people feel a heart attack as arm pain because the nerves run very close to each other and the heart basically never hurts, so the brain “corrects” the signal, attributing it to the arm (p. 102).

First part: True.  Explanation: unsupported.  The explanation certainly makes sense, but he provides no citations and I can’t find any other source on it.

Claim: Inflammation lowers the firing threshold of nociceptors (aka sensitization) (p102).

True (source).

Claim: Nociception is processed by the dorsal horn of the spinal cord.  The dorsal horn can also become sensitized, firing with less stimulus than it otherwise would.  Constant activation is one of the things that increases sensitivity, which is one mechanism for chronic pain (p. 103).

True (source).

Claim: people with chronic pain often have poor “body maps”, meaning that their mental model of where they are in space is inaccurate and they have less resolution when assessing where a given sensation is coming from (p107).

Accurate citation (source).  This is a combination of literature review and reporting of novel results.  The novel results had a sample of five.

Claim: The hidden hand in the rubber hand illusion experiences a drop in temperature (p109).

Accurate citation, tiny sample size (source).  This paper, which is cited by the book’s citation, contains six experiments with sample sizes of fifteen or less.  I am torn between dismissing this because cool results with tiny sample sizes are usually bullshit, and accepting it because it is super cool.

Claim: “a hand that has been disowned through use of the rubber hand illusion will suffer more inflammation in response to a physical insult than a normal hand.” (p. 109).

Almost accurate citation (source).  The study was about histamine injection, not injury per se.   Insult technically covers both, but I would have preferred a more precise phrasing.  Also, sample size 34.

Claim: People with chronic back pain have trouble perceiving the outline of their back (p. 109). 

Accurate citation, sample size six (pdf).

Claim:  “Watching the movements in a mirror makes the movements less painful [for people with lower back pain].” (p. 111).

Accurate citation, small sample size (source).

Model Quality

Reminder: the model is that pain and exhaustion are a product of your brain processing a variety of information.  The prediction is that improving the quality of processing via the principles explained in the book can reduce pain and increase your physical capabilities.

Simplicity: Good.  This is not actually a simple model; it requires a ton of explanation for a layman.  But most of its assumptions come from neurology as a whole; the leap from “more or less accepted facts about neurology” to this model is quite small.

Explanation Quality: Fantastic.  I’ve done some reading on pain psychology, much of which is consistent with Guide…, but Guide… has by far the best explanation I’ve read.

Explicit Predictions: Good, kept from greatness only by the fact that brains and bodies are both very complicated and there’s only so much even a very good model can do.

Useful Predictions: Okay. The testable prediction for the home reader is that following the exercises in the back of the book, or going to a Feldenkrais class, will treat chronic pain and increase flexibility and strength.  Since the book itself admits that a lot of things offer short-term relief but don’t address the real problem, helping immediately doesn’t prove very much.

Acknowledging Limitations: Low. (Note: author disputes this, and it’s entirely possible he did and I forgot).  GTBM doesn’t have the grandiose vision of some cure-all books, and repeatedly reminds you that your brain being involved doesn’t mean your brain is in control.  But there’s no sentence along the lines of “if this doesn’t work there’s a mechanical problem and you should see a doctor.”

Measurability: Low.  This book expects you to put in a lot of time before seeing results, and does not make a specific prediction about what form they will take.  Worse, I don’t think you can skip straight to the exercises.  If I hadn’t read the entire preceding book I wouldn’t have approached them in the correct spirit of attention and curiosity.

Hmmm, if I’d assigned a gestalt rating it would have been higher than what I now think is merited based on the subscores.  I deliberately wrote this mostly before trying the exercises, so I can’t give an effectiveness score.  If you do decide to try it, please let me know how it goes so I can further calibrate my reviews to actual effectiveness.

 

You might like this book if…

…you suffer from chronic pain or musculoskeletal issues, or find the mind-body connection fascinating.

This post supported by Patreon.

Review: The Dueling Neurosurgeons (Sam Kean)

If you like this blog, you might like…

I originally intended The Tale of the Dueling Neurosurgeons for epistemic spot checking, but it didn’t end up feeling necessary.  I know just enough neurobiology and psychology to recognize some of its statements as true without looking them up, and more were consistent enough with what I knew and with what good science and good science writing look like that interrogating the book didn’t seem worth the trouble.  I jumped straight to learning from it, and do not regret this choice.  The first thing I actually looked up came 20% of the way into the book, when the author claimed the facial injuries of WWI soldiers inspired the look of the Splicers from BioShock.*

[*This is true. He used the generic word mutant, not the game-specific term Splicer, but I count that under “acceptable simplifications for the masses”.  Also, he is quicker to point out that he is simplifying than any book I can remember.]

At this point it may be obvious why I think fans of this blog will really enjoy this book, beyond the fact that I enjoyed it.  It has a me-like mix of history (historical color, “how we learned this fact”, and “here’s this obviously stupid alternate explanation and why it looked just as plausible if not more so at the time”*), actual science at just the right level of depth, and fun asides like “a lot of the data we’ve been talking about in this chapter on phantom limbs comes from the Civil War.  Would you like to know why there were so many lost limbs in the Civil War?  You would?  Well here’s two pages on the physics of rifles and bullets.”**

[*For example, the idea that the brain was at all differentiated was initially dismissed as phrenology 2.0.

**I’m just going to assume you want the answer: before casings were invented, rifles had a trade-off between accuracy and ease of use.  Bullets that precisely fit the barrel are very hard to load; bullets smaller than the barrel can’t be aimed with any accuracy.  Some guy resolved this by creating bullets that expanded when shot.  But that required a softer metal, so when the bullet hit it splattered.  This does more damage and is much harder to remove.]

I am more and more convinced that at least through high school, teaching science independent of history of science is actively damaging, because it teaches scientific facts, and treating things as known facts damages the scientific mindset.  “Here is the Correct Thing please regurgitate it” is the opposite of science.  What I would really love to see in science classes is essentially historical reenactments.  For very young kids, give them the facts as we knew them in 18XX, a few competing explanations, and experiments with which to judge them (biased towards practical ones you know will give them informative results), but let them come to their own conclusions.  As they get older, abandon them earlier and earlier in the process; first let them create their own experiments, then their own hypotheses, and eventually their own topics.  Before you know it they’re in grad school.

The Dueling Neurosurgeons would be a terrible textbook for the lab portion of that class because school districts are really touchy about inducing brain damage.  But scientists had a lot of difficulty getting good data on the brain for the exact same reason, and Dueling Neurosurgeons is an excellent representation of that difficulty.  How do we learn when the subject is immensely complex and experiments are straightjacketed?  I also really enjoyed the exploration of  the entanglement between what we know and how we know it.  I walked away from high school science feeling those were separable, but they’re not.

You might like this book if you:

  • like the style of this blog. In particular, entertaining asides that are related to the story but not the point. (These are mostly in footnotes so if you don’t like them you can ignore them).
  • are interested in neurology or neuropsychology at a layman’s level.
  • share my fascination with history of science.
  • appreciate authors who go out of their way to call out simplifications, without drowning the text in technicalities.

You probably won’t like this book if you:

  • need to learn something specific in a hurry.
  • are squeamish about graphic descriptions of traumatic brain damage.
  • are actually hoping to see neurosurgeons duel.  That takes up like half a chapter, and by the standards of scientists arguing it’s not very impressive.

The tail end of the book is either less interesting or more familiar to me, so if you find your interest flagging it’s safe to let go.

This post supported by Patreon.

Dreamland: bad organic chemistry edition

I am in the middle of a post on Dreamland (Sam Quinones) and how it is so wrong, but honestly I don’t think I can wait that long so here’s an easily encapsulated teaser.

On page 39 Quinones says “Most drugs are easily reduced to water-soluble glucose…Alone in nature, the morphine molecule rebelled.”  I am reasonably certain that is horseshit.  Glucose contains three kinds of atoms: carbon, oxygen, and hydrogen.  The big three of organic chemicals.  Your body is incapable of atomic fusion, so the atoms it starts with are the atoms it ends up with; it can only rearrange them into different molecules.  Morphine is carbon, oxygen, hydrogen, and nitrogen, and that nitrogen has to go somewhere, so I guess technically you can’t reform it into just sugar.  But lots of other medications have non-big-3 atoms too (although, full disclosure, when I spot checked there was a lot less variety than I expected).

This valorization of morphine as the indigestible molecule is equally bizarre.  Morphine has a half-life of 2-3 hours (meaning that if you have N morphine in your body to start with, 2-3 hours later you will have N/2).  In fact that’s one of the things that makes it so addictive: you get a large spike, tied tightly to the act of ingestion, and then it goes away quickly, without giving your body time to adjust.  Persistence is the opposite of morphine’s problem.
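To put numbers on that, here is a minimal back-of-the-envelope sketch, assuming a simple single-compartment exponential-decay model and a flat 3-hour half-life (both simplifications; real pharmacokinetics are messier):

```python
# Toy exponential-decay model: fraction of a dose remaining after t hours,
# assuming a constant 3-hour half-life (an illustrative simplification).

def fraction_remaining(hours, half_life_hours=3.0):
    """Fraction of the original dose still present after `hours`."""
    return 0.5 ** (hours / half_life_hours)

for t in (3, 6, 12, 24):
    print(f"after {t:>2} h: {fraction_remaining(t):.1%} of the dose remains")
# e.g. roughly 6% of the dose is left after 12 hours, and well under 1% after a day
```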

This is so unbelievably wrong I would normally assume the author meant something entirely different and I was misreading.  I’d love to check this, but the book cites no sources, and the online bibliography doesn’t discuss this particular factoid.  I am also angry at the book for being terrible in general, so it gets no charity here.

Talking about controversial things (discussion version)

There is a particular failure pattern I’ve seen in many different areas.  Society as a whole holds view A on subject X.  A small sub-group holds opposing view B.  Members of the sub-group have generally put more thought into subject X and they have definitely spent more time arguing about it than the average person on the street.  Many A-believers have never heard of view B or the arguments for it before.

A relative stranger shows up at a gathering of the sub-group and begins advocating view A, or just questioning view B.  The sub-group assumes this is a standard person who has never heard their arguments and launches into the standard spiel.  The B-believers don’t listen, the stranger gets frustrated and leaves the sub-group, since no one is going to listen to their ideas.

One possibility is that the stranger is an average member of society who genuinely believes you’ve gone your entire life without hearing the common belief, and that if they just say it slowly and loudly enough you’ll come around.*  Another possibility is that they understand view B very well and have some well-considered objections to it that happen to sound like view A (or don’t sound that similar, but the B-believers aren’t bothering to listen closely enough to find out).  They feel blown off and disrespected and leave.

In the former scenario, the worst case is that you lose someone you could have recruited.  Oh well.  In the latter, you lose valuable information about where you might be wrong.  If you always react to challenges this way you become everything you hate.

For example: pop evolutionary psychology is awful and people are right to ignore it.  I spent years studying animal behavior and it gave me insights that fall under the broad category of evopsych, except that they are correct.  It is extremely annoying to have those dismissed with “no, but see, society influences human behavior.”

Note that B doesn’t have to be right for this scenario to play out.  Your average creationist or anti-vaxxer has thought more about the topic and spent more time arguing it than almost anyone.  If an ignorant observer watched a debate and chose a winner based on fluidity and citations they would probably choose the anti-vaxxer.  They are still wrong.

Or take effective altruism.  I don’t mind losing people who think measuring human suffering with numbers is inherently wrong.  But if we ignore that entire sphere we won’t hear the people who find the specific way we are talking dehumanizing, and have suggestions on how to fix that while still using numbers.  A recent facebook post made me realize that the clinical tone of most EA discussions plus a willingness to entertain all questions (even if the conclusion is abhorrent) is going to make it really, really hard for anyone with first hand experience of problems to participate.  First hand experience means Feelings means the clinical tone requires a ton of emotional energy even if they’re 100% on board intellectually.  This is going to cut us off from a lot of information.

There’s some low hanging fruit to improve this (let people talk before telling them they are wrong), but the next level requires listening to a lot of people be un-insightfully wrong, which no one is good at and EAs in particular have a low tolerance for.

Sydney and I are spitballing ideas to work on this locally.  I think it’s an important problem at the movement-level, but do not have time to take it on as a project.**  If you have thoughts please share.

*Some examples: “If you ate less and exercised more you’d lose weight.”  “If open offices bother you why don’t you use headphones?”, “but vaccines save lives.”, “God will save you…”/”God isn’t real”, depending on exactly where you are.

**Unexpected benefit of doing direct work: 0 pangs about turning down other projects.  I can’t do everything and this is not my comparative advantage.

Unquantified Self

Recently I did a CFAR workshop.  No one has settled on a good description of CFAR, but I think a good one would be “getting the different parts of your brain to coordinate with each other.”  The further I get from CFAR the more positively I view the experience, which suggests that I did the same thing with EA Global, which suggests I overestimated CFAR’s primary flaw (not being EA Global), which makes me view it even more positively.

CFAR suggests you go into the workshop with a problem to solve.  Fortunately but perhaps inconveniently, I went through a personal growth spurt right before CFAR.  It’s not that I was out of problems to solve, but the repercussions of the previous solutions had not yet worked their way through the system so it was hard to see what the next round would be.   Then I solved food.  For those of you who are just tuning in, I have/had lifelong medical issues that made food physically difficult, which made it psychologically difficult, which made the physical issues worse.  Clearing out all the anxiety around food in a weekend is not a small thing.  But to really achieve its full power I have to follow it up with things like “how do you choose food based on things other than terror?” and “stoves: how do they work?” So that’s a bunch more work.*

I left CFAR with some new things and some refinement on some old things.  I didn’t want to lose what I’d gotten at the workshop so I tried to do follow ups but I felt… full.  Defensive.  Like it was attempting to take up space in my brain and if it succeeded I would lose a lot of work.

My way of solving problems, which is either what CFAR teaches too or what I extracted from whatever CFAR actually does, is to understand them super well.  Eventually a solution just falls out and application is trivial.**  Some of this comes from focused thought, but there’s a lot of opportunistic noticing.  I store bits of information until suddenly they coalesce into a pattern.  As anyone who’s read Getting Things Done will tell you, storing information in your brain is expensive.  So I decided I needed a way to store all this opportunistic data, plus things from the conscious experiments I was running, to keep it all straight.

This is hard to do.  Take the comparatively simple “go to gym every day”.  There are 400 billion apps that will track this for me and I have never stuck with one of them, because they are boring and seeing the numbers go up doesn’t motivate me for more than a week.  More generally, I’ve never been able to get into quantified self because if I know what data to measure the problem is already solved.  I don’t really care how many calories I burned.  I do care what mental blocks inhibited me from going (bed so comfy, outside is cold, feeling like I stayed in bed too long and now I have to do Real Work) and how I maneuvered things so it didn’t take willpower to fight those (“remember how you feel much more productive after the gym and have an awesome job that doesn’t care when you work?”).  There is no app for that.

Then there are more difficult problems like “New information indicates I handled something 9 months ago really poorly, but I’m not sure what I’d do differently then with only the information I had at the time, without causing other problems.”  Or “My friend triggered an intense premortem that made me realize I’m ignoring information on project X just like I did with project Y last year, but I don’t know what that information is.”  I still don’t know what I’m going to do about the former, for the latter I tracked “things that feel like they’re hitting the same part of my brain” until a pattern emerged.  Tracking patterns for “things you are actively trying not to think about” is not cheap.

So I needed a system that could hold this information for me, that would show me information I didn’t realize was connected as I recorded it.  Without being cluttered.  The closest analogy I could come up with was an old-timey naturalist.  They had a bunch of set things they knew they were looking for (what eats this flower), but also wanted to record cool things and then be able to connect them to other cool things later (why are all these finches different yet similar?).  I don’t know how old-timey naturalists did that with pen and paper, because that did not work for me at all.  I tried Workflowy and a Google doc but just sat there frozen, unable to figure out how to sort the information.

My CFAR partner  Tilia Bell had a really good idea, which was to use a private wordpress blog.  I could give an entry as many tags as I wanted, and read tags when they felt relevant.  Or just the success tag, because winning feels nice.  This was a huge improvement, but wordpress is kind of clunky and annoying.  In particular, the tagging system does not flow at all.

I talked about it with Brian, who suggested a one-person Slack.  I could use channels for defined projects and tags for observations I wanted to connect later.  To be fair, this idea is three hours old.  On the other hand, in 20 minutes of applying it I figured out what piece of information I was ignoring in that problem my brain didn’t want to look at.  I’m not saying it’s the sole cause, I’ve gathered a lot of information this past week.  But since “connecting things I already noticed” is pretty much its point, it seems promising.

*My nutritionist is finding me much easier to work with now.

**I’m exaggerating some but it’s more true than it has any right to be.

How Does Amazon Convince Anyone To Work For Them?

Amazon is in that club of employers (Google, Twitter, Facebook, Microsoft, etc.) where working there functions as a stamp of quality.  Their employees are frequently cold-called by recruiters working for other members of the club, middle-tier companies, and start-ups that cannot get enough people through their personal networks.  Amazon pays very well relative to most jobs, even many programming jobs, but it does not pay as well as other members of the club.  The salary is just a little less than you’d make elsewhere, but equity and bonuses are backloaded such that many people are driven out before they receive the bulk of them.  The health insurance isn’t as good.  I realize paying for your own lunch is normal, but Amazon makes employees pay for a lot of things other companies offer for free, like ergonomic keyboards.  And then there’s the work environment.

How does Amazon maintain a talent pool equivalent to the other prestige club members while paying less?

This is anecdotal, but my friends at Amazon are much more likely to have come from unprestigious companies or schools than my friends at other club companies.  Working at Amazon doesn’t make them smarter, but it does provide widely accepted proof of their intelligence that they didn’t have before, which they can leverage into cushier jobs later.  In some ways Amazon’s reputation for chewing people up and spitting them out is a feature here, because leaving after 18 months raises 0 questions among other employers.

So my hypothesis is Amazon invests more in finding and vetting smart people who aren’t currently holding Official Smart Person Cards, and that part of employees’ compensation is getting that card.  In this way it’s like the US Armed Forces, which are grueling and don’t pay well but people tend to leave them with many more options than they started with.

I’m unconvinced this is a winning strategy.  Operational turnover is expensive, and bad working conditions decrease people’s productivity even when they’re well compensated.  But it does at least explain why it hasn’t collapsed already.

In Defense Of The Sunk Cost Fallacy

Dutch disease is the economic concept that if a country is too rich in one thing, especially a natural resource, every other sector of the economy will rot because all available money and talent will flow towards that sector.  Moreover, that sector dominates the exchange rate, making all other exports uncompetitive.*  It comes up in foreign development a lot because charitable aid can cause Dutch disease: by paying what the funders would consider a “fair wage”, charities position themselves as by far the best employers in the area.  The best and the brightest African citizens end up chauffeuring foreigners rather than starting their own businesses, which keeps the society dependent on outside help.  Nothing good comes from having poverty as your chief export.

I posit that a similar process takes place in corporations.  Once they are making too much money off a few major things (Windows, Office, AdWords, SUVs), even an exceptionally profitable project in a small market is too small to notice.  Add in the risk of reputation damage and the fact that all projects have a certain amount of overhead regardless of size, and it makes perfect sense for large companies to discard projects a start up would kill for (RIP Reader).**

That’s a fine policy in moderation, but there are problems with applying it too early.  Namely, you never know what something is going to grow into.  Google search originally arose as a way to calculate impact for academic papers. The market for SUVs (and for that matter, cars) was 0 until someone created it.  If you insist on only going after projects that directly address an existing large market, the best you’ll ever be is a fast follower.***

Simultaneously, going from zero to an enormous, productive project is really, really hard (see: Fire Phone, Google+, Facebook’s not-an-operating-system).  Even if you have an end goal in mind, it often makes sense to start small and iterate.  Little Bets covers this in great detail.  And if you don’t have a signed card from G-d confirming your end goal is correct, progressing in small iterative steps gives you more information and more room to pivot.

More than one keynote at EA Global talked about the importance of picking the most important thing, and of being willing to switch if you find something better.  That’s obviously great in some cases, but I worry that this hyperfocusing will cause the same problems for us that it does at large companies: a lack of room to surprise ourselves.  For example, take the post I did on interpretive labor.  I was really proud of that post.  I worked hard on it.  I had visions of it helping many people in their relationships.  But if you’d asked at the time, I would have predicted that the Most Effective use of my time was learning programming skills to increase my wage or increase my value in direct work, and that that post was an indulgence.   It never in my wildest dreams occurred to me it would be read by someone in a far better position than me to do something about existential risk and be useful to them in connecting two key groups that weren’t currently talking to each other, but apparently it did.  I’m not saying that I definitely saved us from papercliptopia, but it is technically possible that that post (along with millions of other flaps of butterfly wings) will make the marginal difference.  And I would never have even known it did so except that the person in question reached out to me at EA Global.****

Intervention effectiveness may vary by several orders of magnitude, but if the confidence intervals are just as big it pays to add a little wiggle to your selection.  Moreover, constant project churn has its own cost: it’s better to finish the third-best thing than to have two half-finished attempts at different best things.  And you never know what a third-best project will teach you that will help an upcoming best project: most new technological innovations come from combining things from two different spheres (source), so hyperfocus will eventually cripple you.
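To illustrate the first point with a toy example of my own (made-up numbers, not anything from the keynotes): suppose three hypothetical interventions have point estimates spanning two orders of magnitude, but each estimate is uncertain by a comparable number of orders of magnitude. A quick simulation sketch:

```python
# Toy Monte Carlo sketch (illustrative numbers only): how often is the intervention
# with the highest point estimate actually the best, when estimates are noisy by
# roughly as many orders of magnitude as the gaps between them?
import random

random.seed(0)

point_estimates = {"A": 100.0, "B": 10.0, "C": 1.0}  # hypothetical impact estimates
noise_sd_orders = 1.5  # assumed estimation error, in orders of magnitude

trials = 10_000
wins_for_A = 0
for _ in range(trials):
    # "True" impact = point estimate distorted by lognormal noise.
    true_impact = {name: est * 10 ** random.gauss(0, noise_sd_orders)
                   for name, est in point_estimates.items()}
    if max(true_impact, key=true_impact.get) == "A":
        wins_for_A += 1

print(f"A (best point estimate) is truly best in {wins_for_A / trials:.0%} of simulated worlds")
```

Unless the noise is much smaller than the gaps, the front-runner loses a substantial fraction of these simulated worlds, which is exactly the situation where a little wiggle is cheap.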

In light of all that, I think we need to stop being quite so hard on the sunk cost fallacy.  No, you should not throw good money after bad, but constantly re-evaluating your choices is costly and (jujitsu flip) will not always be the most efficient use of your resources.  In the absence of a signed piece of paper from G-d, biasing some of your effort towards things you enjoy and have comparative advantage in may in fact be the optimal strategy.

Using your own efficiency against you

My hesitation is that I don’t know how far you can take this before it stops being effective altruism and starts being “feel smug and virtuous about doing whatever it is you already wanted to do”- a thing we’re already accused of doing.  Could someone please solve this and report back?  Thanks.

* The term comes from the Dutch economic crash following the discovery of natural gas in The Netherlands.  Current thought is that was not actually Dutch disease, but that renaming the phenomenon after some third world country currently being devastated by it would be mean.

**Simultaneously, developers have become worse predictors of the market in general. Used to be that nerds were the early adopters and if they loved it everyone would be using it in a year (e.g. gmail, smart phones).  As technology and particularly mobile advances, this is no longer true.  Nerds aren’t power users for tablets because we need laptops, but tablet power users are a powerful and predictive market.  Companies now force devs to experience the world like users (Facebook’s order to use Android) or just outright tell them what to do (Google+).  This makes their ideas inherently less valuable than they were.  I don’t blame companies for shifting to a more user-driven decision making process, but it does make things less fun.

***Which, to be fair, is Microsoft’s actual strategy.

****It’s also possible it accomplished nothing, or made things worse.  But the ceiling of effectiveness is higher than I ever imagined and the uncertainty only makes my point stronger.