How Accurate Do Citations Need to be?

As part of an investigation into how much capacity for thought work humans actually have in a day, I read Ericsson, Krampe, and Tesch-Römer’s 1993 paper, The Role of Deliberate Practice in the Acquisition of Expert Performance (PDF). This paper is important because if you ask people how much thought work can be done in a day, if they have an answer and a citation at all, it will be “4 hours a day” and “Cal Newport’s Deep Work”. Newport in turn cites the Ericsson paper. I checked Ericsson et al’s sources, but have hit something of a conundrum.

One specific claim in the paper, the first one relevant to my question, is:

When individuals, especially children, start practicing in a given domain, the amount of practice is an hour or less per day

The source for this is the final chapter of Developing Talent in Young People, by Benjamin S. Bloom. That chapter states “…[D]uring the week the [piano] teacher expected the child to practice about an hour a day.” with descriptions of practice but no quantification given for swimming and math (p515).

I don’t think Ericsson et al’s summary is accurate. “Teachers in one specific domain expect one hour of practice a day” is not the same as “In any domain, all individuals do one hour or less.” They differ in the generality of the statement, and one is about expectations, the other about achievement.

How much should I penalize the paper for that inaccurate summary, especially given that I don’t think their statement is actually false (who practices a new hobby more than an hour a day?), just that it failed to validate itself within the narrow confines of peer review? Do I conclude Ericsson, Krampe, and Tesch-Römer are inattentive, or that they had a thing they wanted to say and looked for the nearest source to justify it in the way required by peer-reviewed papers?

This is harder for me because while it’s the first citation in the paper that I checked, it was actually the last I looked up, because everything else was online and this required interlibrary loan. I already had my opinion and was doing this out of thoroughness. I’m deliberately not sharing that opinion here, because I want others to consider the quote in isolation.

7 thoughts on “How Accurate Do Citations Need to be?”

  1. I doubt that this level of sloppiness/dishonesty is rare, but I think it’s very bad, in part because it can produce complete bullshit within a few iterations. (Author 5 sloppily paraphrases author 4’s overgeneralisation of author 3’s slight misunderstanding of…)

  2. “How much should I penalize the paper” seems like the wrong frame.

    Ultimately it’s all just people saying things, under a variety of genre constraints. If an acquaintance told you, “when individuals, especially children, start practicing in a given domain, the amount of practice is an hour or less per day,” and when asked why they thought this, brought up the piano anecdote, I think you’d know how to think about that – that they have a tendency to exaggerate how much they know about things, such that you can’t take what they say at face value, but there’s usually some evidence for a related but more modest claim.

    1. I think the right stance in response to something like this is to assume that all of the authors’ summaries of what evidence shows are tendentious. Direct concrete descriptions of things they observed (or of chains of evidence about things) are less likely to be lies, and analytic arguments you can still evaluate directly. These are uncorrelated enough that it might be worth reading the rest of the paper anyway, depending on how clear it is.

  3. I hate this and I’ve run into similar problems before. I left a comment on the Slate Star Codex post “Rule Thinkers In, Not Out” (https://slatestarcodex.com/2019/02/26/rule-genius-in-not-out/) describing my version of it, and it seems even more relevant here, so I’ll just copy it:
    —-
    This reminds me of a problem I’ve run into a few times, where someone’s presenting themselves as an expert, and they tell me:

    1) Some stuff that sounds reasonable that matches with common wisdom I’ve heard before
    2) Some stuff that I can’t really evaluate
    3) One thing that I know for a fact is false

    And I’ve wondered about how to update based on this.

    In one case, it was the cookbook Nourishing Traditions, in which the author claims to have done a bunch of research on historic food, and thus come up with advice and recipes for how to eat more healthily.

    1) Avoid refined sugar? Sure, makes sense.
    2) Avoid raw foods and ferment food as much as possible? Can’t really evaluate, seems plausible.
    3) Tuna is a low mercury fish, because mercury sinks to the bottom of the ocean and tunas live high in the water column? Absolutely wrong and not how mercury works.

    The book is trying to push weird novel ideas, and the mercury advice was one small note in a side column, so maybe I should just discard it and look at the main food advice. On the other hand, she’s supposedly getting the main food advice from all this research she did, and it makes me reluctant to trust the quality of her research if she got wrong this one thing I can actually evaluate.

    The other is a GP who recommended me, among other things, a fad pseudoscience diet. ¯\_(ツ)_/¯
    —-
    And, yeah, I’m still frustrated by these kinds of things and don’t have an answer.

    1. She’s lying when she says she’s getting the main food advice from all this research she did. The actual research to evaluate is Weston Price’s Nutrition and Physical Degeneration, and Nourishing Traditions does a good job collecting practical advice about how to implement a bunch of practices common to time-tested preindustrial foodways, but the “science” sidebars are about as reliable as the comments on quantum vibrations were at the Vipassana retreat I went to, i.e. not at all. (But meditation works, and so does sourdough!)

    2. There’s a different mixture of shoddiness and knowledge going on with people like Dave Asprey, where instead of trying to sell an existing time-tested body of knowledge by layering a bunch of cherry-picked citations and spurious reasoning, he seems to have actually good taste about new stuff, middling ability to explain it, and then markets his proprietary stuff as though it was more special than it is.

    3. Another example: I wrote There is a war in part because while David Graeber’s book on the history of debt and money makes some great theoretical points, I’m uncomfortable recommending it given his track record on getting facts right or citations backing up his factual claims, so I wanted a purely conceptual treatment to point to for some of it.