Criticism as Entertainment

Media Reviews

There is a popular genre of video that consists of shitting on other people’s work without any generative content. Let me provide some examples.

First, Cinema Sins. This is the first video I found when looking for a movie I’d seen whose Cinema Sins video I hadn’t (i.e., it’s not random, but it wasn’t selected for being particularly good or bad either).

The first ten sins are:

  1. Use of a consistent brand for props in the movie they’d have to have anyway, unobtrusively enough that I never noticed until Cinema Sins pointed it out.
  2. A character being mildly unreasonable to provoke exposition.
  3. The logo.
  4. Exposition that wasn’t perfectly justified in-story.
  5. A convenience about what was shown on screen.
  6. A font choice (from an entity that in-universe would plausibly make bad font choices).
  7. An omission that will nag at you if you think about it long enough or expect the MCU to be a completely different thing, with some information about why it happened.
  8. In-character choices that would be concerning in the real world and I would argue are treated appropriately by the movie, although reasonable people could disagree
  9. An error by a character that was extremely obviously intentional on the part of the filmmakers. There is no reasonable disagreement on this point.
  10. An error perfectly in keeping with what we know about the character.

Of those, three or four could even plausibly be called sins of the movie- and if those bother you, maybe the MCU is not for you. The rest are deliberate choices by filmmakers to have characters do imperfect things. Everyone gets to draw their own line on characters being dumb- mine is after this movie but before 90s sitcoms running on miscommunication- but that’s irrelevant to this post, because Cinema Sins is not helping you determine where a particular movie sits relative to your line. Every video makes the movie sound exactly as bad as the last, regardless of the quality of the underlying movie. It’s as if they analyze the dialogue sentence by sentence and look for anything that could be criticized about it.

Pitch Meeting is roughly as useful, but instead of reacting to sentences, it’s reading the plot summary in a sarcastic tone of voice.

Pitch Meeting at least brings up actual problems with Game of Thrones season 8. But I dare you to tell whether early Game of Thrones was better or worse than season 8 based on the Pitch Meetings.

I keep harping on “you can’t judge movie quality by the review”, but I don’t actually think that’s the biggest problem. Or rather, it’s a subset of the problem, which is that you don’t learn anything from the review: not whether the reviewer considered the movie “good”, and not what could be changed to make it better. Contrast with Zero Punctuation, a video game review series notorious for being criticism-as-entertainment, which nonetheless occasionally likes things, and at least once per episode displays a deep understanding of a game’s problems and what might be done to fix them.

Why Are You Talking About This?

It’s really, really easy to make something look bad, and the short-term rewards for doing so are high. You never risk looking stupid or having to issue a correction. It’s easier to make criticism funny. You get to feel superior. Not to mention the sheer joy of punishing bad things. But it’s corrosive. I’ve already covered (harped on) how useless shitting-on videos are for learning or improvement, but it goes deeper than that. Going in with those intentions changes how you watch the movie. It makes flaws more salient and good parts less so. You become literally less able to enjoy or learn from the original work.

Maybe this isn’t universal, but for me there is definitely a trade-off between “grokking the author’s concepts” and “interrogating the author’s concepts and evidence”. Grokking is a good word here: it mostly means understanding, but includes playing with the idea and applying it to what I know. That’s very difficult to do while simultaneously looking for flaws.

Should it be, though? Generating testable hypotheses should lead to greater understanding, and to more or less trust depending on the correctness of the book. So at least one of my investigation and grokking procedures must be wrong.


What We Know vs. How We Know It

Two weeks ago I said:

The other concept I’m playing with is that “what we know” is inextricable from “how we know it”. This is dangerously close to logical positivism, which, to the extent I understand it, I disagree with. And yet it’s really improved my thinking when doing historical research.

I have some more clarity on what I meant now. Let’s say you’re considering my ex-roommate, person P, as a roommate, and ask me for information. I have a couple of options.

Scenario 1: I turn over chat logs and video recordings of my interactions with P.

E.g., recordings of P playing music loudly and chat logs showing I’d asked them to stop.

Trust required: that the evidence is representative and not an elaborate deep fake.

Scenario 2: I report representative examples of my interactions with P.

E.g., “On these dates P played music really loudly even when I asked them to stop.”

Trust required: that from scenario 1, plus that I’m not making up the examples.

Scenario 3: I report summaries of patterns with P.

E.g., “P often played loud music, even when I asked them to stop.”

Trust required: that from scenario 2, plus my ability to accurately infer and report patterns from data.

Scenario 4: I report what a third party told me.

E.g., “Mark told me they played loud music a lot.”

Trust required: that from scenario 3, plus my ability to evaluate other people’s evidence.

Scenario 5: I give a flat “yes good” or “no bad” answer.

E.g., “P was a bad roommate.”

Trust required: that from scenario 3 and perhaps 4, plus that I have the same heuristics for roommate goodness that you do.


The earlier the scenario, the more you can draw your own conclusions and the less trust you need to have in me. Maybe you don’t care about loud music, and a flat yes/no would drive you away from a roommate that would be fine for you. Maybe I thought I was clear about asking for music to stop but my chat logs reveal I was merely hinting, and you are confident you’ll be able to ask more directly. The more specifics I give you, the better an assessment you’ll be able to make.

Here’s what this looks like applied to recent reading:

Scenario 5: Rome fell in the 500s AD.

Even if I trust your judgement, I have no idea why you think this or what it means to you.

Scenario 4: In Rome: The Book, Bob Loblaw says Rome fell in the 500s AD.

At least I can look up why Bob thinks this.

Scenario 3: Pottery says Rome fell between 300 and 500 AD.

Useful to experts who already know the power of pottery, but leaves newbies lost.

Scenario 2: Here are 20 dig sites in England. Those dated before 323 AD (via METHOD) contain pottery made in Greece (which we can identify by METHOD); those dated after 500 AD show cruder pottery made locally.

Great. Now my questions are “Can pottery evidence give that much precision?” and “Are you interpreting it correctly?”

Scenario 1: Please enjoy this pile of 3 million pottery shards.

Too far, too far.


In this particular example (from The Fall of Rome), scenarios 2-3 were the sweet spot. They allowed me to learn as much as possible with a minimum of trust. But there’s definitely room in life for 4; you can’t prove everything in every paper, and sometimes it’s more efficient to offload the verification.

I don’t view 5 as acceptable for anything claiming to be evidence-based- or at least, the only basis it can claim is “try this and see if it helps you” (which is a perfectly fine basis if trying is cheap).


ESC Process Notes: Detail-Focused Books

When I started doing epistemic spot checks, I would pick focal claims and work to verify them. That meant finding other sources and skimming them as quickly as possible to get their judgement on the particular claim. This was not great for my overall learning- but worse, it’s not even really good for claim evaluation: it flattens complexity and focuses me on claims with obvious binary answers that can be evaluated without context. It also privileges the hypothesis by focusing on “is this claim right?” rather than “what is the truth?”.

So I moved towards reading all of my sources deeply, even if my selection was inspired by a particular book’s particular claim. But this has its own problems.

In both The Oxford Handbook of Children and Childhood Education in the Ancient World and Children and Childhood in Roman Italy, my notes sometimes degenerate into “and then a bunch of specifics”. “Specifics” might mean a bunch of individual art pieces, or a list of books that subtly changed a field’s framing.  This happens because I’m not sure what’s important and get overwhelmed.

Knowledge of importance comes from having a model I’m trying to test. The model can be external to the focal book (from me or from another book), or come from the book itself. E.g., I didn’t have a particular frame on the evolution of states before starting Against the Grain, but James C. Scott is very clear on what he believes, so I can assess how relevant the various facts he presents are to evaluating his claims.

[I’m not perfect at this- e.g., in The Unbound Prometheus, the author claims that Europeans were more rational than Asians, and that their lower birth rate was evidence of this. I went along with that at the time because of the frame I was in, but looking back, I think that even assuming Europe did have a lower birth rate, it wouldn’t have proved Europeans were more rational or scientifically minded. This is a post in itself.]

If I’d come into The Oxford Handbook of Children and Childhood Education in the Ancient World or Children and Childhood in Roman Italy with a hypothesis to test, it would have been obvious what information was relevant and what wasn’t. But I didn’t, so it wasn’t, and that was very tiring.

The obvious answer is “just write down everything”, and I think that would work with certain books. In particular, it would work with books that could be rewritten in Workflowy: those with crisp points that can be encapsulated in a sentence or two and stored linearly or hierarchically. But both books did a particular thing that necessitated copying entire paragraphs, because I couldn’t break them down into individual points.

Here’s an example from Oxford Handbook…

“Pietas was the term that encompassed the dutiful respect shown by the Romans towards their gods, the state, and members of their family (Cicero Nat. Deor. 1.116; Rep. 6.16; Off. 2.46; Saller 1991: 146–51; 1998). This was a concept that children would have been socialized to understand and respect from a young age. Between parent and child pietas functioned as a form of reciprocal dutiful affection (Saller 1994: 102–53; Bradley 2000: 297–8; Evans Grubbs 2011), and this combination of “duty” and “affection” helps us to understand how the Roman elite viewed and expressed their relationship with their children.”

And from Children and Childhood…

“No doubt families often welcomed new babies and cherished their children, but Roman society was still struggling to establish itself even in the second century and many military, political, and economic problems preoccupied the thoughts and activities of adult Romans”

I summarized that second one as “Families were distracted by war and such up through 0000 BC”, which loses a lot of nuance. It’s not impossible to break these paragraphs down into constituent thoughts, but it’s ugly and messy and would involve a lot of repetition. The first mixes up what pietas is with how, and to whom, it was expressed. The second combines a claim about the state of Rome with claims about that state’s effects.

This reveals that calling the two books “lists of facts” was incomplete. Lists of facts would be easier to take notes on. These authors clearly have some concepts they are trying to convey, but because the concepts aren’t cleanly encapsulated in the authors’ own minds, it’s hard for me to encapsulate them. It’s like trying to lay the threads of a Gordian knot out in an organized fashion.

So we have two problems: books which have laid out all their facts in a row but not connected them, and books which have entwined their facts too tightly for them to be disentangled. These feel very similar to me, but when I write it out, the descriptions sure sound like two completely different problems.

Let me know how much sense this makes; I can’t tell if I’ve written something terribly unpolished-but-deep or screamingly shallow.

ESC Process Notes: Claim Evaluation vs. Syntheses

Forgive me if some of this is repetitive; I can’t remember what I’ve written in which draft and what’s actually been published, much less tell what’s actually novel. Eventually there will be a polished master post describing my overall note-taking method and leaving out most of how it was developed, but it also feels useful to discuss the journey.

When I started taking notes in Roam (a Workflowy/wiki hybrid), I would:

  1. Create a page for the book (called a Source page), with some information like author and subject (example)
  2. Record every claim the book made on that Source page
  3. Tag each claim so it got its own page
  4. When I investigated a claim, gather evidence from various sources and list it on the claim page, grouped by source

This didn’t make sense though: why did some sources get their own page and some only a bullet point on a claim’s page? Why did some claims get their own page and some not? What happened if a piece of evidence was useful for multiple claims?

Around this time I coincidentally had a call with Roam CEO Conor White-Sullivan to demo a bug I thought I had found. There was no bug- I had misremembered the intended behavior- but it meant he saw my system and couldn’t hide his flinch. Not only was giving each claim its own page wrecking performance, there was no need for it: Roam has block references, so you can point to individual bullet points, not just pages.

When Conor said this, something clicked. I had already identified one of the problems with epistemic spot checks as being too binary- too focused on evaluating a particular claim or book rather than on building knowledge. My original way of taking notes was a continuation of that. What I should have been doing was gathering multiple sources, taking notes on them on equal footing, and then combining them into an actual belief using references to the claims’ bullet points. I call that a Synthesis (example). Once I had an actual belief, I could assess the focal claim in context and give it a credence (a slider from 1-10), which could be used to inform my overall assessment of the book.

Sometimes there isn’t enough information to create a Synthesis, so something is left as a Question instead (example).
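
To make the shape of this concrete, here is a minimal sketch in Python. Every name in it is my own hypothetical shorthand- Roam doesn’t expose an API like this- but it models the key move: claims stay as ordinary bullets (“blocks”) on their Source page, and a Synthesis references those blocks instead of copying them.

```python
# Hypothetical sketch, not Roam's real data model: Roam stores notes as
# nested bullets ("blocks") with stable ids that other pages can reference.
from dataclasses import dataclass, field

@dataclass
class Block:
    uid: str   # stands in for a Roam block reference like ((abc123))
    text: str

@dataclass
class SourcePage:
    title: str
    author: str
    claims: list = field(default_factory=list)  # claim blocks live on the source page

@dataclass
class Synthesis:
    belief: str
    credence: int                                # the 1-10 slider
    support: list = field(default_factory=list)  # uids of claim blocks, not copies of them

grain = SourcePage("Against the Grain", "James C. Scott", [
    Block("b1", "X was permanently settled in year Y"),
    Block("b2", "X didn't develop statehood hallmarks until year Z"),
])

settlement_vs_states = Synthesis(
    belief="Settlement and states arose very far apart",
    credence=8,            # invented value, purely for illustration
    support=["b1", "b2"],  # the same block could also support other syntheses
)
```

Under this shape, the earlier question answers itself: a piece of evidence that’s useful for multiple claims is a single block referenced from several syntheses, rather than text duplicated onto each claim’s page.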

Once I’d proceduralized this a bit, it felt so natural and informative that I assumed everyone else would find it similarly so. Finally you didn’t have to take my word for what was important- you could see all the evidence I’d gathered and then click through to see the context on anything you thought deserved a closer look. Surely everyone would be overjoyed that I was providing this.

The feedback was overwhelming: this was much worse, no one wanted to read my Roam DB, and I should keep presenting evidence linearly.

I refuse to accept that my old way is the best way of presenting evidence and conclusions about a book or claim- it’s too linear and contextless. I do accept that “here’s my Roam, have fun” is worse. Part of my current project is to identify a third way that shares the information I want to share in a form that is actually readable.

How’s that Epistemic Spot Check Project Coming?


Quick context: epistemic spot checks started as a process in which I did quick investigations of a few of a book’s early claims to see if it was trustworthy before continuing to read it, in order to avoid wasting time on books that would teach me wrong things. Epistemic spot checks worked well enough for catching obvious flaws (*cou*Carol Dweck*ugh*), but have a number of problems. They emphasize a trust/don’t-trust binary over model building, and provability over importance. They don’t handle “severely flawed but deeply insightful” well at all. So I started trying to create something better.

Below are some scattered ideas I’m playing with that relate to this project. They’re by no means fully baked, but it seemed like it might be helpful to share them. This kind of assumes you’ve been following my journey with epistemic spot checks at least a little. If you haven’t, that’s fine; a more polished version of these ideas will come out eventually.


A Parable in Three Books

I’m currently attempting to write up an investigation of Children and Childhood in Roman Italy (Beryl Rawson) (affiliate link) (Roam notes). This is very slow going, because CaCiRI doesn’t seem to have a thesis. At least, I haven’t found one, and I’ve read almost half of the content. It’s just a bunch of facts. Often not even syntheses, just “here is one particular statue and some things about it.” I recognize that this is important work, even the kind of work I’d use to verify another book’s claims. But as a focal source, it’s deadly boring to take notes on and very hard to write anything interesting about. What am I supposed to say? “Yes, that 11-year-old did do well (without winning) in a poetry competition and it was mentioned on his funeral altar, good job reporting that.” I want to label this sin “weed-based publishing” (as in, “lost in the weeds”, although the fact that I have to explain that is a terrible sign for it as a name).

One particularly bad sign for Children and Childhood in Roman Italy was that I found myself copying multiple sentences at once into my notes. Direct quoting can sometimes mean “there are only so many ways to arrange these words and the author did a perfectly good job, so why bother”, but when it’s frequent, and long, it often means “I can’t summarize or distill what the author is saying”, which can mean the author is being vague, eliding over important points, or letting implications do work that should be made explicit. This was easier to notice when I was taking notes in Roam (a Workflowy/wiki hybrid), because Roam pushes me to make my bullet points as self-contained as possible (so that when you refer to them in isolation nothing is lost), so it became obvious and unpleasant when I couldn’t split a paragraph into self-contained assertions. Obviously real life is context-dependent and you shouldn’t try to make things more self-contained than they are, but I’m comfortable saying frequent long quotes are a bad sign about a book.

On the other side you have The Unbound Prometheus (David S. Landes) (affiliate link) (Roam notes), which made several big, interesting, important, systemic claims (e.g., “Britain had a legal system more favorable to industrialization than continental Europe’s”, “Europe had a more favorable climate for science than Islamic regions”), none of which it provided support for (in the sections I read- a friend tells me he gets more specific later). I tried to investigate these myself and ended up even more confused- scholars can’t even agree on whether Britain’s patent protections were strong or weak. I want to label this sin “making me make your case for you”.

A Goldilocks book is The Fate of Rome (Kyle Harper) (affiliate link) (Roam notes). Fate of Rome’s thesis is that the peak of the Roman Empire corresponds with unusually favorable weather conditions in the Mediterranean. It backs this up with claims about climate archaeology, e.g., ice core data (claims 1, 2). This prompted natural and rewarding follow-up questions like “what is ice core data capable of proving?” and “what does it actually show?”. My note-taking system in Roam was superb at enabling investigations of questions like these (my answer).

Judged on the claims it generates, Against the Grain (James Scott) (affiliate link) (Roam notes) is even better. It has both interesting high-level models (“settlement and states are different things that came very far apart”, “states are entangled with grains in particular”) and very specific claims to back them up (“X was permanently settled in year Y but didn’t develop statehood hallmarks A, B, and C until year Z”). It is very easy to see how that claim supports that model, and the claim is about as easy to investigate as it can be. It is still quite possible that the claim is wrong or more controversial than the author is admitting, but it’s something I’ll be able to determine in a reasonable amount of time. Contrast this with Unbound Prometheus, where I still worry there’s a trove of data somewhere that answers all of my questions conclusively and I just failed to find it.

[Against the Grain was started as part of the Forecasting project, which is currently being reworked. I can’t research its claims because that would ruin our ability to use it for the next round, should we choose to do so, so evaluation is on hold.]

If you asked me to rate these books purely on ease-of-reading, the ordering (starting with the easiest) would be:


  • Against the Grain
  • The Fate of Rome
  • Children and Childhood in Roman Italy
  • The Unbound Prometheus


Which is also very nearly the reverse of the order in which they were published (Against the Grain came out six weeks before Fate of Rome; the others are separated by decades). It’s possible that the two modern books were no better epistemically but felt so because they were easier to read. It’s also possible it’s a coincidence, or that epistemics have genuinely gotten better in the last 50 years.


Model Based Reading

As is kind of implied in the parable above, one shift in epistemic spot checks is a new emphasis on extracting and evaluating the author’s models, which includes an emphasis on finding load-bearing facts. I feel dumb for not emphasizing this sooner, but better late than never. I think the real trick here is not recognizing that a book’s models matter, but creating techniques for actually extracting and evaluating them.


How Do We Know This?

The other concept I’m playing with is that “what we know” is inextricable from “how we know it”. This is dangerously close to logical positivism, which, to the extent I understand it, I disagree with. And yet it’s really improved my thinking when doing historical research.

This is a pretty strong reversal for me. I remember strongly wanting to just be told what we knew in my science classes in college, not the experiments that revealed it. I’m now pretty sure that’s scientism, not science.


How’s it Going with Roam?

When I first started taking notes with Roam (note spelling), I was pretty high on it. Two months later, I’m predictably loving it less than I did (it no longer drives me to do real-life chores), but I still find it indispensable. The big discovery is that the delight it brings me is somewhat book-dependent- it’s great for Against the Grain or The Fate of Rome, but didn’t help nearly so much with Children and Childhood in Roman Italy, because that book was mostly very on-the-ground facts that didn’t benefit from my verification system, plus long paragraphs that couldn’t be disentangled.

I was running into a ton of problems with Roam’s search not handling non-sequential words, but they seem to have fixed that. Search is still not ideal, but it’s at least usable.

Roam is pretty slow. It’s currently a race between their performance improvements and my increasing hoard of #Claims.

Epistemic Spot Check: Fatigue and the Central Governor Module

Epistemic spot checks used to be a series in which I read papers/books and investigated their claims with an eye towards assessing the work’s credibility. I became unhappy with the limitations of this process and am working on creating something better. This post covers both the results of applying the in-development process to a particular work, and observations on the process itself. As is my new custom, the discussion of the paper will be mostly my conclusions. The actual research is available in my Roam database (a Workflowy/wiki hybrid), which I will link to as appropriate.

This post started off as an epistemic spot check of Fatigue is a brain-derived emotion that regulates the exercise behavior to ensure the protection of whole body homeostasis, a scientific article by Timothy David Noakes. I don’t trust myself to summarize it fairly (we’ll get to that in a minute), so here is the abstract:

An influential book written by A. Mosso in the late nineteenth century proposed that fatigue that “at first sight might appear an imperfection of our body, is on the contrary one of its most marvelous perfections. The fatigue increasing more rapidly than the amount of work done saves us from the injury which lesser sensibility would involve for the organism” so that “muscular fatigue also is at bottom an exhaustion of the nervous system.” It has taken more than a century to confirm Mosso’s idea that both the brain and the muscles alter their function during exercise and that fatigue is predominantly an emotion, part of a complex regulation, the goal of which is to protect the body from harm. Mosso’s ideas were supplanted in the English literature by those of A. V. Hill who believed that fatigue was the result of biochemical changes in the exercising limb muscles – “peripheral fatigue” – to which the central nervous system makes no contribution. The past decade has witnessed the growing realization that this brainless model cannot explain exercise performance. This article traces the evolution of our modern understanding of how the CNS regulates exercise specifically to insure that each exercise bout terminates whilst homeostasis is retained in all bodily systems. The brain uses the symptoms of fatigue as key regulators to insure that the exercise is completed before harm develops. These sensations of fatigue are unique to each individual and are illusionary since their generation is largely independent of the real biological state of the athlete at the time they develop. The model predicts that attempts to understand fatigue and to explain superior human athletic performance purely on the basis of the body’s known physiological and metabolic responses to exercise must fail since subconscious and conscious mental decisions made by winners and losers, in both training and competition, are the ultimate determinants of both fatigue and athletic performance.

The easily defensible version of this claim is that fatigue is a feeling in the brain. The most out-there version is that humans are capable of unlimited physical feats, held back only by their own minds, and that the results of sporting events are determined beforehand through psychic dominance competitions. That sounds like I’m being unfair, so let me quote the relevant portion:

[A]thletes who finish behind the winner may make the conscious decision not to win, perhaps even before the race begins. Their deceptive symptoms of “fatigue” may then be used to justify that decision. So the winner is the athlete for whom defeat is the least acceptable rationalization.

(He doesn’t mention psychic dominance competitions explicitly, but it’s the only way I see to get exactly one person deciding to win each race).

This paper generated a lot of ESC-able claims, which you can see here. These were unusually crisp claims that he provided citations for: absolutely the easiest thing to ESC (having your own citations agree with your summary of them is not sufficient to prove correctness, but a lack of agreement takes a lot of works out). But I found myself unenthused about doing so. I eventually realized that I wanted to read a competing explanation instead. Luckily Noakes provided a citation to one, and it was even more antagonistic to him than he claimed.

VO2,max: what do we know, and what do we still need to know?, by Benjamin D. Levine takes several direct shots at Noakes, including:

For the purposes of framing the debate, Dr Noakes frequently likes to place investigators into two camps: those who believe the brain plays a role in exercise performance, and those who do not (Noakes et al. 2004b). However this straw man is specious. No one disputes that ‘the brain’ is required to recruit motor units – for example, spinal cord-injured patients can’t run. There is no doubt that motivation is necessary to achieve VO2,max. A subject can elect to simply stop exercising on the treadmill while walking slowly because they don’t want to continue; no mystical ‘central governor’ is required to hypothesize or predict a VO2 below maximal achievable oxygen transport in this case.

Which I would summarize as “of course fatigue is a brain-mediated feeling: you feel it.” 

I stopped reading at this point, because I could no longer tell what the difference between the hypotheses was. What are the actual differences in predictions between “your muscles are physically unable to contract” and “your brain tells you your muscles are unable to contract”? After thinking about it for a while, I came up with a few:

  1. The former suggests that there’s no intermediate between “safely working” and “incapacitation”.
  2. The latter suggests that you can get physical gains through mental changes alone.
  3. And that this might lead to tissue damage as you push yourself beyond safe limits.

Without looking at any evidence, #1 seems unlikely to be true. Things rarely work that way in general, much less in bodies.

The strongest evidence for #2 and #3 isn’t addressed by either paper: cases where mental changes have caused or allowed people to inflict serious injury or even death on themselves.

  1. Hysterical strength (aka mom lifts car off baby)
  2. Involuntary muscle spasms (from e.g., seizures or old-school ECT)
  3. Stiff-man syndrome.

So I checked these out.

Hysterical strength has not been studied much, probably because IRBs are touchy about trapping babies under cars (with an option on “I was unable to find the medical term for it”). There are enough anecdotes that it seems likely to exist, although it may not be common. And it can cause muscle tears, according to several sourceless citations. This is suggestive, but if I were on Levine’s team I’d definitely find it insufficient.

Most injuries from seizures are from falling or hitting something, but it appears possible for injuries to result from overactive muscles themselves. This is complicated by the fact that anti-convulsant medications can cause bone thinning, and by the fact that some unknown percentage of all people are walking around with fractures they don’t know about.

Unmodified electroconvulsive therapy had a small but persistent risk of bone fractures, muscle tears, and joint dislocations. Newer forms of ECT use muscle relaxants specifically to prevent this.

Stiff-man syndrome: Wikipedia says that 10% of stiff-man syndrome patients die from acidosis or autonomic dysfunction. Acidosis would be really exciting- evidence that overexertion of muscles can actually kill you. Unfortunately, when I tried to track down the citation, it went nowhere (with one paper inaccessible). Additionally, one can come up with explanations for the acidosis other than muscle exertion. So that’s not compelling.

Overall it does seem clear that (some) people’s muscles are strong enough to break their bones, but are stopped from doing so under normal circumstances. You could call this vindication for Noakes’s Central Governor Model, but I’m hesitant. It doesn’t prove you can safely get gains by changing your mindset alone. It doesn’t prove all races are determined by psychic dominance fights. Yes, Noakes was speculating when he postulated that, but without it his theory is something like “you notice when your muscles reach their limits”. How far you can safely push what feel like physical limits seems like a question that will vary a lot by individual, and one that neither paper tried to answer.

Overall, Fatigue is a brain-derived emotion that regulates the exercise behavior to ensure the protection of whole body homeostasis neither passed nor failed epistemic spot checks as originally conceived, because I didn’t check its specific claims. Instead I thought through its implications and investigated those, which supported the weak but not the strong form of Noakes’s argument.

In terms of process, the key here was recognizing the feeling that investigating forward (evaluating the implications of Noakes’s arguments) was more important than investigating backwards (the evidence Noakes provided for his hypothesis). I don’t have a good explanation for why that felt right this time, but I want to track it.

Epistemic Spot Check: Unconditional Parenting

Epistemic spot checks started as a process in which I investigate a few of a book’s claims to see if it is trustworthy before continuing to read it. This had a number of problems, such as emphasizing a trust/don’t-trust binary over model building, and emphasizing provability over importance. I’m in the middle of revamping ESCs into something better. This post is both a ~ESC of a particular book and a reflection on the process of doing ESCs- what I have improved and what I should improve further.

As is my new custom, I took my notes in Roam, a Workflowy/wiki hybrid. Roam is so magic that my raw notes are better formatted there than I could ever hope to make them in a linear document like this, so I’m just going to share my conclusions here; if you’re interested in the process, follow the links to Roam. Notes are formatted as follows:

  • The target source gets its own page
  • On this page I list some details about the book and claims it makes. If the claim is citing another source, I may include a link to the source.
  • If I investigate a claim or have an opinion so strong it doesn’t seem worth verifying (“Parenting is hard”), I’ll mark it with a credence slider. The meaning of each credence will eventually be explained here, although I’m still working out the system.
    • Then I’ll hand-type a number for the credence in a bullet point, because sliders are changeable even by people who otherwise have only read privileges.
  • You can see my notes on the source for a claim by clicking on the source in the claim.
  • You may see a number to the side of a claim. That means it’s been cited by another page- likely a Synthesis page, where I have drawn a conclusion from a variety of sources.

This post’s topic is Unconditional Parenting (Alfie Kohn) (affiliate link), which has the thesis that even positive reinforcement is treating your kid like a dog and hinders their emotional and moral development.

Unconditional Parenting failed its spot check pretty hard. Of the three citations I actually researched (as opposed to agreeing with without investigation, such as “parenting is hard”), two barely mentioned the thing they were cited for, and only as an evidence-free aside, and one reported exactly what UP claimed but was too small and subdivided to prove anything.

Nonetheless, I thought UP might have good ideas, so I kept reading. One of the things epistemic spot checks were designed to detect was “science-washing”- the process of taking the thing you already believe and hunting for things to cite that could plausibly support it, to make your process look more rigorous. And they do pretty well at that. The problem is that science-washing doesn’t prove an idea is wrong, merely that it hasn’t presented a particular form of proof. It could still be true or useful- in fact, when I dug into a series of self-help books, rigor didn’t seem to have any correlation with how useful they were. And with something like child-rearing, where I dismiss almost all studies as “too small, too limited”, saying everything needs rigorous peer-reviewed backing is the same as refusing to learn. So I continued with Unconditional Parenting to absorb its models, with the understanding that I would be evaluating those models for myself.

Unconditional Parenting is a principle-based book, and its principles are:

  • It is not enough for you to love your children; they must feel loved unconditionally. 
  • Any punishment or conditionality of rewards endangers that feeling of being loved unconditionally.
  • Children should be respected as autonomous beings.
  • Obedience is often a sign of insecurity.
  • The way kids learn to make good decisions is by making decisions, not by following directions.

These seem like plausible principles to me, especially the first and last ones. They are, however, costly principles to implement. And I’m not even talking about things where you absolutely have to override your kids’ autonomy, like vaccines. I’m talking about when your two children’s autonomies lead them in opposite directions at the beach, or when you will lose your job if you don’t keep them on a certain schedule in the morning and their intrinsic desire is to watch water drip from the faucet for 10 minutes.

What I would really have liked is for this book to spend less time on its principles and bullshit scientific citations, and more time going through concrete real-world examples where multiple principles compete. Kohn explicitly declines to do this, saying specifics are too hard and scripts embody the rigid, unresponsive parenting he’s railing against, but I think that’s a cop-out. Teaching principles in isolation is easy and pointless: the meaningful part is what you do when they’re difficult and in conflict with other things you value.

So overall, Unconditional Parenting:

  • Should be evaluated as one dude’s opinion, not the outcome of a scientific process
  • Is a useful set of opinions that I find plausible and intend to apply with modifications to my potential kids.
  • Failed to do the hard work of demonstrating implementation of its principles.
  • Is a very light read once you ignore all the science-washing.


As always, tremendous thanks to my Patreon patrons for their support.