Criticism as Entertainment

Media Reviews

There is a popular genre of video that consists of shitting on other people’s work without any generative content. Let me provide some examples.

First, Cinema Sins. This is the first video I found when looking for a movie I’d seen whose Cinema Sins treatment I hadn’t (i.e., it’s not random, but it wasn’t selected for being particularly good or bad).

The first ten sins are:

  1. Use of a consistent brand for props in the movie they’d have to have anyway, unobtrusively enough that I never noticed until Cinema Sins pointed it out.
  2. A character being mildly unreasonable to provoke exposition.
  3. The logo.
  4. Exposition that wasn’t perfectly justified in-story.
  5. A convenience about what was shown on screen.
  6. A font choice (from an entity that in-universe would plausibly make bad font choices).
  7. An omission that will nag at you if you think about it long enough or expect the MCU to be a completely different thing, with some information about why it happened.
  8. In-character choices that would be concerning in the real world and that I would argue are treated appropriately by the movie, although reasonable people could disagree.
  9. An error by a character that was extremely obviously intentional on the part of the filmmakers. There is no reasonable disagreement on this point.
  10. An error perfectly in keeping with what we know about the character.

Of those, three to four could even plausibly be called sins of the movie- and if those bother you, maybe the MCU is not for you. The rest are deliberate choices by filmmakers to have characters do imperfect things. Everyone gets to draw their own line on characters being dumb- mine is after this movie but before 90s sitcoms running on miscommunication- but that’s irrelevant to this post because Cinema Sins is not helping you determine where a particular movie is relative to your line. Every video makes the movie sound exactly as bad as the last, regardless of the quality of the underlying movie. It’s like they analyze the dialogue sentence by sentence and look to see if there’s anything that could be criticized about it.

Pitch Meeting is roughly as useful, but instead of reacting to sentences, it’s reading the plot summary in a sarcastic tone of voice.

Pitch Meeting is at least bringing up actual problems with Game of Thrones season 8. But I dare you to tell if early Game of Thrones was better or worse than season 8, based on the Pitch Meeting.

I keep harping on “You can’t judge movie quality by the review”, but I don’t actually think that’s the biggest problem. Or rather, it’s a subset of the problem, which is that you don’t learn anything from the review: not whether the reviewer considered the movie “good” or not, and not what could be changed to make it better. Contrast with Zero Punctuation, a video game review series notorious for being criticism-as-entertainment, which nonetheless occasionally likes things, and at least once per episode displays a deep understanding of the problems of a game and what might be done to fix them.

Why Are You Talking About This?

It’s really, really easy to make something look bad, and the short-term rewards for doing so are high. You never risk looking stupid or having to issue a correction. It’s easier to make criticism funny. You get to feel superior. Not to mention the sheer joy of punishing bad things. But it’s corrosive. I’ve already covered (harped on) how useless shitting-on videos are for learning or improvement, but it goes deeper than that. Going in with those intentions changes how you watch the movie. It makes flaws more salient and good parts less so. You become literally less able to enjoy or learn from the original work.

Maybe this isn’t universal, but for me there is definitely a trade-off between “grokking the author’s concepts” and “interrogating the author’s concepts and evidence”. Grokking is a good word here: it mostly means understanding, but includes playing with the idea and applying it to what I know. That’s very difficult to do while simultaneously looking for flaws.

Should it be, though? Generating testable hypotheses should lead to greater understanding, and to more or less trust depending on the correctness of the book. So at least one of my investigation or grokking procedures is wrong.

What we Know vs. How we Know it?

Two weeks ago I said:

The other concept I’m playing with is that “what we know” is inextricable from “how we know it”. This is dangerously close to logical positivism, which I disagree with my limited understanding of. And yet it’s really improved my thinking when doing historical research.

I have some more clarity on what I meant now. Let’s say you’re considering my ex-roommate, person P, as a roommate, and ask me for information. I have a couple of options.

Scenario 1: I turn over chat logs and video recordings of my interactions with P.

E.g., recordings of P playing music loudly and chat logs showing I’d asked them to stop.

Trust required: that the evidence is representative and not an elaborate deep fake.

Scenario 2: I report representative examples of my interactions with P.

E.g., “On these dates P played music really loudly even when I asked them to stop.”

Trust required: that from scenario 1, plus that I’m not making up the examples.

Scenario 3: I report summaries of patterns with P

E.g., “P often played loud music, even when I asked them to stop”

Trust required: that from scenario 2, plus my ability to accurately infer and report patterns from data.

Scenario 4: I report what a third party told me

E.g. “Mark told me they played loud music a lot”

Trust required: that from scenario 3, plus my ability to evaluate other people’s evidence

Scenario 5: I give a flat “yes good” or “no bad” answer.

E.g., “P was a bad roommate.”

Trust required: that from scenario 3 and perhaps 4, plus that I have the same heuristics for roommate goodness that you do.

The earlier the scenario, the more you can draw your own conclusions and the less trust you need to have in me. Maybe you don’t care about loud music, and a flat yes/no would drive you away from a roommate that would be fine for you. Maybe I thought I was clear about asking for music to stop but my chat logs reveal I was merely hinting, and you are confident you’ll be able to ask more directly. The more specifics I give you, the better an assessment you’ll be able to make.

Here’s what this looks like applied to recent reading:

Scenario 5: Rome fell in the 500s AD.

Even if I trust your judgement, I have no idea why you think this or what it means to you.

Scenario 4: In Rome: The Book, Bob Loblaw says Rome fell in the 500s AD.

At least I can look up why Bob thinks this.

Scenario 3: Pottery says Rome fell between 300 and 500 AD.

Useful to experts who already know the power of pottery, but leaves newbies lost.

Scenario 2: Here are 20 dig sites in England. Those dated before 323 AD (via METHOD) contain pottery made in Greece (which we can identify by METHOD); those dated after 500 AD show cruder pottery made locally.

Great. Now my questions are “Can pottery evidence give that much precision?” and “Are you interpreting it correctly?”

Scenario 1: Please enjoy this pile of 3 million pottery shards.

Too far, too far.

In this particular example (from The Fall of Rome), scenarios 2-3 were the sweet spot. They allowed me to learn as much as possible with a minimum of trust. But there’s definitely room in life for scenario 4; you can’t prove everything in every paper and sometimes it’s more efficient to offload it.

I don’t view scenario 5 as acceptable for anything that’s trying to claim to be evidence-based, or at least on any basis besides “try this and see if it helps you” (which is a perfectly fine basis if trying is cheap).

ESC Process Notes: Detail-Focused Books

When I started doing epistemic spot checks, I would pick focal claims and work to verify them. That meant finding other sources and skimming them as quickly as possible to get their judgement on the particular claim. This was not great for my overall learning, but it’s not even really good for claim evaluation: it flattens complexity and focuses me on claims with obvious binary answers that can be evaluated without context. It also privileges the hypothesis by focusing on “is this claim right?” rather than “what is the truth?”.

So I moved towards reading all of my sources deeply, even if my selection was inspired by a particular book’s particular claim. But this has its own problems.

In both The Oxford Handbook of Children and Childhood Education in the Ancient World and Children and Childhood in Roman Italy, my notes sometimes degenerate into “and then a bunch of specifics”. “Specifics” might mean a bunch of individual art pieces, or a list of books that subtly changed a field’s framing.  This happens because I’m not sure what’s important and get overwhelmed.

Knowledge of importance comes from having a model I’m trying to test. The model can be external to the focal book (from me or from another book), or come from the book itself. E.g., I didn’t have a particular frame on the evolution of states before starting Against the Grain, but James C. Scott is very clear on what he believes, so I can assess how relevant various facts he presents are to evaluating that claim.

[I’m not perfect at this- e.g., in The Unbound Prometheus, the author claims that Europeans were more rational than Asians, and that their lower birth rate was evidence of this. I went along with that at the time because of the frame I was in, but looking back, I think that even assuming Europe did have a lower birth rate, it wouldn’t have proved Europeans were more rational or scientifically minded. This is a post in itself.]

If I’d come into The Oxford Handbook of Children and Childhood Education in the Ancient World or Children and Childhood in Roman Italy with a hypothesis to test, it would have been obvious what information was relevant and what wasn’t. But I didn’t, so it wasn’t, and that was very tiring.

The obvious answer is “just write down everything”, and I think that would work with certain books. In particular, it would work with books that could be rewritten in Workflowy: those with crisp points that can be encapsulated in a sentence or two and stored linearly or hierarchically. There’s a particular thing both books did that necessitated copying entire paragraphs because I couldn’t break it down into individual points.

Here’s an example from Oxford Handbook…

“Pietas was the term that encompassed the dutiful respect shown by the Romans towards their gods, the state, and members of their family (Cicero Nat. Deor. 1.116; Rep. 6.16; Off. 2.46; Saller 1991: 146–51; 1998). This was a concept that children would have been socialized to understand and respect from a young age. Between parent and child pietas functioned as a form of reciprocal dutiful affection (Saller 1994: 102–53; Bradley 2000: 297–8; Evans Grubbs 2011), and this combination of “duty” and “affection” helps us to understand how the Roman elite viewed and expressed their relationship with their children.”

And from Children and Childhood…

“No doubt families often welcomed new babies and cherished their children, but Roman society was still struggling to establish itself even in the second century and many military, political, and economic problems preoccupied the thoughts and activities of adult Romans”

I summarized that second one as “Families were distracted by war and such up through 0000 BC”, which is losing a lot of nuance. It’s not impossible to break these paragraphs down into constituent thoughts, but it’s ugly and messy and would involve a lot of repetition. The first mixes up what pietas is with how, and to whom, it was expressed. The second combines a claim about the state of Rome with that state’s effects.

This reveals that calling the two books “lists of facts” was incomplete. Lists of facts would be easier to take notes on. These authors clearly have some concepts they are trying to convey, but because the concepts are not cleanly encapsulated in the authors’ own minds, it’s hard for me to encapsulate them. It’s like trying to lay the threads of a Gordian knot out in an organized fashion.

So we have two problems: books which have laid out all their facts in a row but not connected them, and books which have entwined their facts too tightly for them to be disentangled. These feel very similar to me, but when I write it out, the descriptions sure sound like two completely different problems.

Let me know how much sense this makes; I can’t tell if I’ve written something terribly unpolished-but-deep or screamingly shallow.

ESC Process Notes: Claim Evaluation vs. Syntheses

Forgive me if some of this is repetitive; I can’t remember what I’ve written in which draft and what’s actually been published, much less tell what’s actually novel. Eventually there will be a polished master post describing my overall note-taking method and leaving out most of how it was developed, but it also feels useful to discuss the journey.

When I started taking notes in Roam (a workflowy/wiki hybrid), I would:

  1. Create a page for the book (called a Source page), with some information like author and subject (example)
  2. Record every claim the book made on that Source page
  3. Tag each claim so it got its own page
  4. When I investigated a claim, gather evidence from various sources and list it on the claim page, grouped by source

This didn’t make sense though: why did some sources get their own page and some a bullet point on a claims page? Why did some claims get their own page and some not? What happened if a piece of evidence was useful in multiple claims?

Around this time I coincidentally had a call with Roam CEO Conor White-Sullivan to demo a bug I thought I had found. There was no bug (I had misremembered the intended behavior), but it meant that he saw my system and couldn’t hide his flinch. Giving each claim its own page was wrecking performance, and there was no need for it: Roam has block references, so you can point to bullet points, not just pages.

When Conor said this, something clicked. I had already identified one of the problems with epistemic spot checks as being too binary, too focused on evaluating a particular claim or book rather than building knowledge. The original way of note taking was a continuation of that. What I should be doing was gathering multiple sources, taking notes on them on equal footing, and then combining them into an actual belief using references to the claims’ bullet points. I call that a Synthesis (example). Once I had an actual belief, I could assess the focal claim in context and give it a credence (a slider from 1-10), which could be used to inform my overall assessment of the book.

Sometimes there isn’t enough information to create a Synthesis, so something is left as a Question instead (example).

Once I’d proceduralized this a bit, it felt so natural and informative that I assumed everyone else would find it similarly so. Finally you didn’t have to take my word for what was important- you could see all the evidence I’d gathered and then click through to see the context on anything you thought deserved a closer look. Surely everyone will be overjoyed that I am providing this.

Feedback was overwhelming that this was much worse, no one wanted to read my Roam DB, and I should keep presenting evidence linearly.

I refuse to accept that my old way is the best way of presenting evidence and conclusions about a book or a claim. It’s too linear and contextless. I do accept that “here’s my Roam have fun” is worse. Part of my current project is to identify a third way that shares the information I want to in a way that is actually readable.

How’s that Epistemic Spot Check Project Coming?

Quick context: Epistemic spot checks started as a process in which I did quick investigations of a few of a book’s early claims to see if it was trustworthy before continuing to read it, in order to avoid wasting time on books that would teach me wrong things. Epistemic spot checks worked well enough for catching obvious flaws (*cou*Carol Dweck*ugh*), but have a number of problems. They emphasize a trust/don’t trust binary over model building, and provability over importance. They don’t handle “severely flawed but deeply insightful” well at all. So I started trying to create something better.

Below are some scattered ideas I’m playing with that relate to this project. They’re by no means fully baked, but it seemed like it might be helpful to share them. This kind of assumes you’ve been following my journey with epistemic spot checks at least a little. If you haven’t, that’s fine; a more polished version of these ideas will come out eventually.

A Parable in Three Books

I’m currently attempting to write up an investigation of Children and Childhood in Roman Italy (Beryl Rawson) (affiliate link) (Roam notes). This is very slow going, because CaCiRI doesn’t seem to have a thesis. At least, I haven’t found one, and I’ve read almost half of the content. It’s just a bunch of facts. Often not even syntheses, just “Here is one particular statue and some things about it.” I recognize that this is important work, even the kind of work I’d use to verify another book’s claims. But as a focal source, it’s deadly boring to take notes on and very hard to write anything interesting about. What am I supposed to say? “Yes, that 11 year old did do well (without winning) in a poetry competition and it was mentioned on his funeral altar, good job reporting that.” I want to label this sin “weed based publishing” (as in, “lost in the weeds”, although the fact that I have to explain that is a terrible sign for it as a name).

One particularly bad sign for Children and Childhood in Roman Italy was that I found myself copying multiple sentences at once into my notes. Direct quoting can sometimes mean “there are only so many ways to arrange these words and the author did a perfectly good job, so why bother”, but when it’s frequent, and long, it often means “I can’t summarize or distill what the author is saying”, which can mean the author is being vague, eliding over important points, or letting implications do work that should be made explicit. This was easier to notice when I was taking notes in Roam (a workflowy/wiki hybrid) because Roam pushes me to make my bullet points as self-contained as possible (so that when you refer to them in isolation nothing is lost), so it became obvious and unpleasant when I couldn’t split a paragraph into self-contained assertions. Obviously real life is context-dependent and you shouldn’t try to make things more self-contained than they are, but I’m comfortable saying frequent long quotes are a bad sign about a book.

On the other side you have The Unbound Prometheus (David S. Landes) (affiliate link) (Roam notes), which made several big, interesting, important, systemic claims (e.g., “Britain had a legal system more favorable to industrialization than continental Europe’s”, “Europe had a more favorable climate for science than Islamic regions”), none of which it provided support for (in the sections I read- a friend tells me he gets more specific later). I tried to investigate these myself and ended up even more confused- scholars can’t even agree on whether Britain’s patent protections were strong or weak. I want to label this sin “making me make your case for you”.

A Goldilocks book is The Fate of Rome (Kyle Harper) (affiliate link) (Roam notes). Fate of Rome’s thesis is that the peak of the Roman empire corresponds with unusually favorable weather conditions in the Mediterranean. It backs this up with claims about climate archeology, e.g., ice core data (claim 1, 2). This prompted natural and rewarding follow-up questions like “What is ice core data capable of proving?” and “What does it actually show?”. My note-taking system in Roam was superb at enabling investigations of questions like these (my answer).

Based on claims creation, Against the Grain (James Scott) (affiliate link) (Roam notes) is even better. It has both interesting high-level models (“settlement and states are different things that arose far apart in time”, “states are entangled with grains in particular”) and very specific claims to back them up (“X was permanently settled in year Y but didn’t develop statehood hallmarks A, B, and C until year Z”). It is very easy to see how that claim supports that model, and the claim is about as easy to investigate as it can be. It is still quite possible that the claim is wrong or more controversial than the author is admitting, but it’s something I’ll be able to determine in a reasonable amount of time. As opposed to Unbound Prometheus, where I still worry there’s a trove of data somewhere that answers all of the questions conclusively and I just failed to find it.

[Against the Grain was started as part of the Forecasting project, which is currently being reworked. I can’t research its claims because that would ruin our ability to use it for the next round, should we choose to do so, so evaluation is on hold.]

If you asked me to rate these books purely on ease-of-reading, the ordering (starting with the easiest) would be:

  • Against the Grain
  • The Fate of Rome
  • Children and Childhood in Roman Italy
  • The Unbound Prometheus

Which is also very nearly the order they were published in (Against the Grain came out six weeks before Fate of Rome; the others are separated by decades). It’s possible that the two modern books were no better epistemically but felt so because they were easier to read. It’s also possible it’s a coincidence, or that epistemics have gotten better in the last 50 years.

Model Based Reading

As is kind of implied in the parable above, one shift in Epistemic Spot Checks is a new emphasis on extracting and evaluating the author’s models, which includes an emphasis on finding load-bearing facts. I feel dumb for not emphasizing this sooner, but better late than never. I think the real trick here is not recognizing that a book’s models matter, but creating techniques for extracting and evaluating them.

How do we Know This?

The other concept I’m playing with is that “what we know” is inextricable from “how we know it”. This is dangerously close to logical positivism, which I disagree with my limited understanding of. And yet it’s really improved my thinking when doing historical research.

This is a pretty strong reversal for me. I remember strongly wanting to just be told what we knew in my science classes in college, not the experiments that revealed it. I’m now pretty sure that’s scientism, not science.

How’s it Going with Roam?

When I first started taking notes with Roam (note spelling), I was pretty high on it. Two months later, I’m predictably loving it less than I did (it no longer drives me to do real-life chores), but still find it indispensable. The big discovery is that the delight it brings me is somewhat book-dependent- it’s great for Against the Grain or The Fate of Rome, but didn’t help nearly so much with Children and Childhood in Roman Italy, because that book was mostly very on-the-ground facts that didn’t benefit from my verification system, plus long paragraphs that couldn’t be disentangled.

I was running into a ton of problems with Roam’s search not handling non-sequential words, but they seem to have fixed that. Search is still not ideal, but it’s at least usable.

Roam is pretty slow. It’s currently a race between their performance improvements and my increasing hoard of #Claims.

Epistemic Spot Check: Fatigue and the Central Governor Module

Epistemic spot checks used to be a series in which I read papers/books and investigated their claims with an eye towards assessing the work’s credibility. I became unhappy with the limitations of this process and am working on creating something better. This post is about both the results of applying the in-development process to a particular work, and observations on the process. As is my new custom, this discussion of the paper will be mostly my conclusions. The actual research is available in my Roam database (a workflowy/wiki hybrid), which I will link to as appropriate.

This post started off as an epistemic spot check of Fatigue is a brain-derived emotion that regulates the exercise behavior to ensure the protection of whole body homeostasis, a scientific article by Timothy David Noakes. I don’t trust myself to summarize it fairly (we’ll get to that in a minute), so here is the abstract:

An influential book written by A. Mosso in the late nineteenth century proposed that fatigue that “at first sight might appear an imperfection of our body, is on the contrary one of its most marvelous perfections. The fatigue increasing more rapidly than the amount of work done saves us from the injury which lesser sensibility would involve for the organism” so that “muscular fatigue also is at bottom an exhaustion of the nervous system.” It has taken more than a century to confirm Mosso’s idea that both the brain and the muscles alter their function during exercise and that fatigue is predominantly an emotion, part of a complex regulation, the goal of which is to protect the body from harm. Mosso’s ideas were supplanted in the English literature by those of A. V. Hill who believed that fatigue was the result of biochemical changes in the exercising limb muscles – “peripheral fatigue” – to which the central nervous system makes no contribution. The past decade has witnessed the growing realization that this brainless model cannot explain exercise performance. This article traces the evolution of our modern understanding of how the CNS regulates exercise specifically to insure that each exercise bout terminates whilst homeostasis is retained in all bodily systems. The brain uses the symptoms of fatigue as key regulators to insure that the exercise is completed before harm develops. These sensations of fatigue are unique to each individual and are illusionary since their generation is largely independent of the real biological state of the athlete at the time they develop. The model predicts that attempts to understand fatigue and to explain superior human athletic performance purely on the basis of the body’s known physiological and metabolic responses to exercise must fail since subconscious and conscious mental decisions made by winners and losers, in both training and competition, are the ultimate determinants of both fatigue and athletic performance.

The easily defensible version of this claim is that fatigue is a feeling in the brain. The most out-there version of the claim is that humans are capable of unlimited physical feats, held back only by their own minds, and the results of sporting events are determined beforehand through psychic dominance competitions. That sounds like I’m being unfair, so let me quote the relevant portion:

[A]thletes who finish behind the winner may make the conscious decision not to win, perhaps even before the race begins. Their deceptive symptoms of “fatigue” may then be used to justify that decision. So the winner is the athlete for whom defeat is the least acceptable rationalization

(He doesn’t mention psychic dominance competitions explicitly, but it’s the only way I see to get exactly one person deciding to win each race).

This paper generated a lot of ESC-able claims, which you can see here. These were unusually crisp claims that he provided citations for: absolutely the easiest thing to ESC (having your own citations agree with your summary of them is not sufficient to prove correctness, but lack of it takes a lot of works out of the running). But I found myself unenthused about doing so. I eventually realized that I wanted to read a competing explanation instead. Luckily Noakes provided a citation to one, and it was even more antagonistic to him than he claimed.

VO2,max: what do we know, and what do we still need to know?, by Benjamin D. Levine takes several direct shots at Noakes, including:

For the purposes of framing the debate, Dr Noakes frequently likes to place investigators into two camps: those who believe the brain plays a role in exercise performance, and those who do not (Noakes et al. 2004b). However this straw man is specious. No one disputes that ‘the brain’ is required to recruit motor units – for example, spinal cord-injured patients can’t run. There is no doubt that motivation is necessary to achieve VO2,max. A subject can elect to simply stop exercising on the treadmill while walking slowly because they don’t want to continue; no mystical ‘central governor’ is required to hypothesize or predict a VO2 below maximal achievable oxygen transport in this case.

Which I would summarize as “of course fatigue is a brain-mediated feeling: you feel it.” 

I stopped reading at this point, because I could no longer tell what the difference between the hypotheses was. What are the actual differences in predictions between “your muscles are physically unable to contract” and “your brain tells you your muscles are unable to contract”? After thinking about it for a while, I came up with a few:

  1. The former suggests that there’s no intermediate between “safely working” and “incapacitation”.
  2. The latter suggests that you can get physical gains through mental changes alone.
  3. And that this might lead to tissue damage as you push yourself beyond safe limits.

Without looking at any evidence, #1 seems unlikely to be true. Things rarely work that way in general, much less in bodies.

The strongest pieces of evidence for #2 and #3 aren’t addressed by either paper: cases where mental changes have caused or allowed people to inflict serious injuries or even death on themselves.

  1. Hysterical strength (aka mom lifts car off baby)
  2. Involuntary muscle spasms (from e.g., seizures or old-school ECT)
  3. Stiff-man syndrome.

So I checked these out.

Hysterical strength has not been studied much, probably because IRBs are touchy about trapping babies under cars (with an option on “I was unable to find the medical term for it”). There are enough anecdotes that it seems likely to exist, although it may not be common. And it can cause muscle tears, according to several sourceless citations. This is suggestive, but if I were on Levine’s team I’d definitely find it insufficient.

Most injuries from seizures are from falling or hitting something, but it appears possible for injuries to result from overactive muscles themselves. This is complicated by the fact that anti-convulsant medications can cause bone thinning, and by the fact that some unknown percentage of all people are walking around with fractures they don’t know about.

Unmodified electro-convulsive therapy had a small but persistent risk of bone fractures, muscle tears, and joint dislocation. Newer forms of ECT use muscle relaxants specifically to prevent this.

Stiff-man Syndrome: Wikipedia says that 10% of stiff-man syndrome patients die from acidosis or autonomic dysfunction. Acidosis would be really exciting- evidence that overexertion of muscles will actually kill you. Unfortunately, when I tried to track down the citation, it went nowhere (with one paper inaccessible). Additionally, one can come up with explanations for the acidosis other than muscle exertion. So that’s not compelling.

Overall it does seem clear that (some) people’s muscles are strong enough to break their bones, but are stopped from doing so under normal circumstances. You could call this vindication for Noakes’s Central Governor Model, but I’m hesitant. It doesn’t prove you can safely get gains by changing your mindset alone. It doesn’t prove all races are determined by psychic dominance fights. Yes, Noakes was speculating when he postulated that, but without it his theory is something like “you notice when your muscles reach their limits”. How far you can safely push what feel like physical limits seems like a question that will vary a lot by individual, and one that neither paper tried to answer.

Overall, Fatigue is a brain-derived emotion that regulates the exercise behavior to ensure the protection of whole body homeostasis neither passed nor failed epistemic spot checks as originally conceived, because I didn’t check its specific claims. Instead I thought through its implications and investigated those, which supported the weak but not the strong form of Noakes’s argument.

In terms of process, the key here was feeling, and recognizing the feeling, that investigating forward (evaluating the implications of Noakes’s arguments) was more important than investigating backwards (checking the evidence Noakes provided for his hypothesis). I don’t have a good explanation for why that felt right this time, but I want to track it.

Epistemic Spot Check: Unconditional Parenting

Epistemic spot checks started as a process in which I investigate a few of a book’s claims to see if it is trustworthy before continuing to read it. This had a number of problems, such as emphasizing a trust/don’t trust binary over model building, and emphasizing provability over importance. I’m in the middle of revamping ESCs to become something better. This post is both a ~ESC of a particular book and a reflection on the process of doing ESCs, what I have improved, and what I should improve.

As is my new custom, I took my notes in Roam, a workflowy/wiki hybrid. Roam is so magic that my raw notes are better formatted there than I could ever hope to make them in a linear document like this, so I’m just going to share my conclusions here, and if you’re interested in the process, follow the links to Roam. Notes are formatted as follows:

  • The target source gets its own page.
  • On this page I list some details about the book and claims it makes. If the claim is citing another source, I may include a link to the source.
  • If I investigate a claim or have an opinion so strong it doesn’t seem worth verifying (“Parenting is hard”), I’ll mark it with a credence slider. The meaning of each credence will eventually be explained here, although I’m still working out the system.
    • Then I’ll hand-type a number for the credence in a bullet point, because sliders are changeable even by people who otherwise have only read privileges.
  • You can see my notes on the source for a claim by clicking on the source in the claim.
  • You may see a number to the side of a claim. That means it’s been cited by another page, likely a Synthesis page where I have drawn a conclusion from a variety of sources.

This post’s topic is Unconditional Parenting (Alfie Kohn) (affiliate link), which has the thesis that even positive reinforcement is treating your kid like a dog and hinders their emotional and moral development.

Unconditional Parenting failed its spot check pretty hard. Of three citations I actually researched (as opposed to agreed with without investigation, such as “Parenting is hard”), two barely mentioned the thing they were cited for as an evidence-free aside, and one reported exactly what UP claimed but was too small and subdivided to prove anything. 

Nonetheless, I thought UP might have good ideas, so I kept reading it. One of the things Epistemic Spot Checks were designed to detect was “science washing”- the process of taking the thing you already believe and hunting for things to cite that could plausibly support it, to make your process look more rigorous. And they do pretty well at that. The problem is that science washing doesn’t prove an idea is wrong, merely that it hasn’t presented a particular form of proof. It could still be true or useful- in fact, when I dug into a series of self-help books, rigor didn’t seem to have any correlation with how useful they were. And with something like child-rearing, where I dismiss almost all studies as “too small, too limited”, saying everything needs rigorous peer-reviewed backing is the same as refusing to learn. So I continued with Unconditional Parenting to absorb its models, with the understanding that I would be evaluating those models for myself.

Unconditional Parenting is a principle based book, and its principles are:

  • It is not enough for you to love your children; they must feel loved unconditionally. 
  • Any punishment or conditionality of rewards endangers that feeling of being loved unconditionally.
  • Children should be respected as autonomous beings.
  • Obedience is often a sign of insecurity.
  • The way kids learn to make good decisions is by making decisions, not by following directions.

These seem like plausible principles to me, especially the first and last ones. They are, however, costly principles to implement. And I’m not even talking about things where you absolutely have to override their autonomy like vaccines. I’m talking about when your two children’s autonomies lead them in opposite directions at the beach, or you will lose your job if you don’t keep them on a certain schedule in the morning and their intrinsic desire is to watch the water drip from the faucet for 10 minutes. 

What I would really have liked is for this book to spend less time on its principles and bullshit scientific citations, and more time going through concrete real-world examples where multiple principles are competing. Kohn explicitly declines to do this, saying specifics are too hard and scripts embody the rigid, unresponsive parenting he’s railing against, but I think that’s a cop-out. Teaching principles in isolation is easy and pointless: the meaningful part is what you do when they’re difficult and in conflict with other things you value.

So overall, Unconditional Parenting:

  • Should be evaluated as one dude’s opinion, not the outcome of a scientific process
  • Is a useful set of opinions that I find plausible and intend to apply with modifications to my potential kids.
  • Failed to do the hard work of demonstrating implementation of its principles.
  • Is a very light read once you ignore all the science-washing.

As always, tremendous thanks to my Patreon patrons for their support.

What Comes After Epistemic Spot Checks?

When I read a non-fiction book, I want to know if it’s correct before I commit anything it says to memory. But if I already knew the truth status of all of its claims, I wouldn’t need to read it. Epistemic Spot Checks are an attempt to square that circle by sampling a book’s claims and determining their truth value, with the assumption that the sample is representative of the whole.

Some claims are easier to check than others. On one end are simple facts, e.g., “Emperor Valerian spent his final years as a Persian prisoner”. This was easy and quick to verify: googling “emperor valerian” was quite sufficient. “Roman ship sizes weren’t exceeded until the 15th century” looks similar, but it wasn’t. If you google the claim itself, it will look confirmed (evidence: me and 3 other people in the forecasting tournament did this). At the last second while researching this, I decided to check the size of Chinese ships, which surpassed Roman ships sooner than Europe did, by about a century.

On first blush this looks like a success story, but it’s not. I was only able to catch the mistake because I had a bunch of background knowledge about the state of the world. If I didn’t already know mid-millennium China was better than Europe at almost everything (and I remember a time when I didn’t), I could easily have drawn the wrong conclusion about that claim. And following a procedure that would catch issues like this every time would take much more time than ESCs currently get.

Then there are terminally vague questions, like “Did early modern Europe have more emphasis on rationality and less superstition than other parts of the world?” (as claimed by The Unbound Prometheus). It would be optimistic to say that question requires several books to answer, but even if that were true, each of them would need at least an ESC itself to see if it’s trustworthy, which might involve checking other claims requiring several books to verify… pretty soon it’s a master’s thesis.

But I can’t get a master’s degree in everything interesting or relevant to my life. And that brings up another point: credentialism. A lot of ESC revolves around “Have people who have officially been Deemed Credible sanctioned this fact?” rather than “Have I seen evidence that I, personally, judge to be compelling?” 

The Fate of Rome (Kyle Harper) and The Fall of Rome (Bryan Ward-Perkins) are both about the collapse of the western Roman empire. They both did almost flawlessly on their respective epistemic spot checks. And yet, they attribute the fall of Rome to very different causes, and devote almost no time to the other’s explanation. If you drew a Venn diagram of the data they discuss, the circles would be almost but not quite entirely distinct. The word “plague” appears 651 times in Fate and 6 times in Fall, which introduces the topic mostly to dismiss the idea that it was causally related to the fall- which is how Fate treats all those border adjustments happening from 300 AD on. Fate is very into discussing climate, but Fall uses that space to talk about pottery.

This is why I called the process epistemic spot checking, not truth-checking. Determining if a book is true requires not only determining if each individual claim is true, but also what other explanations exist and what has been left out. Depending on the specifics, ESCs as I do them now are perhaps half the work of reading the subsection of the book I verify. Really thoroughly checking a single paragraph in a published paper took me 10-20 hours. And even if I succeed at the ESC, all I have is a thumbs up/thumbs down on the book.

Around the same time I was doing ESCs on The Fall of Rome and The Fate of Rome (the ESCs were published far apart to get maximum publicity for the Amplification experiment, but I read and performed the ESCs very close together), I was commissioned to do a shallow review on the question of “How to get consistency in predictions or evaluations of questions?” I got excellent feedback from the person who commissioned it, but I felt like it said a lot more about the state of a field of literature than the state of the world, because I had to take authors’ words for their findings. It had all the problems ESCs were designed to prevent.

I’m in the early stages of trying to form something better: something that combines the depth of epistemic spot checks with the breadth of field reviews. It’s designed to bootstrap from knowing nothing in a field to being grounded and informed and having a firm base on which to evaluate new information. This is hard and I expect it to take a while.

Epistemic Spot Checks: The Fall of Rome

Introduction

Epistemic spot checks are a series in which I select claims from the first few chapters of a book and investigate them for accuracy, to determine if a book is worth my time. This month’s subject is The Fall of Rome, by Bryan Ward-Perkins, which advocates for the view that Rome fell, and it was probably a military problem.

Like August’s The Fate of Rome, this spot check was done as part of a collaboration with Parallel Forecasting and Foretold, which means that instead of resolving a claim as true or false, I give a confidence distribution of what I think I would answer if I spent 10 hours on the question (in reality I spent 10-45 minutes per question). Sometimes the claim is a question with a numerical answer, sometimes it is just a statement and I state how likely I think the statement is to be true.

This spot check is subject to the same constraints as The Fate of Rome, including:

  1. Some of my answers include research from the forecasters, not just my own.
  2. Due to our procedure for choosing questions, I didn’t investigate all the claims I would have liked to.

Claims

Claim made by the text:  “[Emperor Valerian] spent the final years of his life as a captive at the Persian Court”
Question I answered: what is the chance that is true?
My answer: I estimate a chance of (99 – 3*lognormal(0,1)) that Emperor Valerian was captured by the Persians and spent multiple years as a prisoner before dying in captivity.

You don’t even have to click on the Wikipedia page to confirm this is the common story: it’s in the google preview for “emperor valerian”. So the only question is the chance that all of history got this wrong. Wikipedia lists five primary sources, of which I verified three. https://www.ancient-origins.net/history/what-really-happened-valerian-was-roman-emperor-humiliated-and-skinned-hands-enemy-008598 raises questions about how badly Valerian was treated, but not about whether he was captive.

My only qualm is the chance that this could be a lie perpetuated at the time. Maybe Valerian died and the Persians used a double, maybe something weirder happened. System 2 says the chance of this is < 10% but gut says < 15%.
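For concreteness, here’s one way to sample the (99 – 3*lognormal(0,1)) credence above (a minimal sketch; I’m reading the (0, 1) as the log-mean and log-sd, which is the standard parameterization):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample the stated credence, 99 - 3 * lognormal(0, 1), in percentage points.
credence = 99 - 3 * rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

# The median is 99 - 3 = 96%; the lognormal's long right tail becomes a long
# left tail of doubt, so the central 90% runs from roughly 83% to 98%.
print(np.percentile(credence, [5, 50, 95]))
```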

Claim made by the text: “What had totally disappeared, however, were the good-quality, low-value items, made in bulk, and available so widely in the Roman period”
Question I answered: What is the chance mass-produced, low-value items available so widely in the Roman period, disappear in Britain by 600 AD?
My answer: I estimate a chance of (64 to 93, normal distribution) that mass-produced, low-value items were available in Britain during Roman rule and not after 600 AD.

This was one of the hardest claims to investigate, because it represents original research by Ward-Perkins. I had basically given up on answering this without a pottery PhD until google suggestions gave me the perfect article.

This is actually a compound claim by Ward-Perkins: 

  1. Roman coinage and mass-produced, low-cost, high-quality pottery disappeared from Britain and then the rest of post-Roman Europe.
  2. The state of pottery and coinage is a good proxy for the state of goods and trades as a whole, because they preserve so amazingly well and are relatively easy to date.

Data points:

    • Focuses on how amphorae were never really abundant in Britain
    • Chart stops at 400 AD
    • Graph showing large drops in amphorae distribution by 410 AD

If we believe Ward-Perkins and Brewminate, I estimate the chance that pottery massively declined at 95-99%, times 80-95% that other goods declined with it. There remain the chances that the historical record is massively misleading (very unlikely with pots, although I don’t know how likely it is to have missed sites entirely), and that W-P et al are misinterpreting the record. I would be very surprised if so many sites had been missed as to invalidate this data; call it 5-15%. Gut feeling, 5-20% chance the W-P crowd are exaggerating the data, but given the absence of challenges, not higher than that, and not a significant chance they’re just making shit up.

(95 to 99)*(85 to 95) * (80 to 95) = 64 to 93%
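In code, that combination looks like the following (a minimal sketch; the three ranges match the line above, with my best guess at which doubt each one encodes, and treating them as independent uniform intervals is my assumption, not the post’s):

```python
import numpy as np

rng = np.random.default_rng(0)

# The three ranges from the line above, as (low, high) probabilities.
factors = [
    (0.95, 0.99),  # pottery really did massively decline
    (0.85, 0.95),  # not so many sites missed as to invalidate the data
    (0.80, 0.95),  # W-P et al aren't exaggerating the record
]

# Endpoint products bound the combined estimate.
low = np.prod([lo for lo, hi in factors])   # ~0.646
high = np.prod([hi for lo, hi in factors])  # ~0.893

# A Monte Carlo shows where the bulk of the combined probability lands
# if the doubts are independent (central 90% is roughly 0.69 to 0.84).
samples = np.prod([rng.uniform(lo, hi, 100_000) for lo, hi in factors], axis=0)
print(low, high, np.percentile(samples, [5, 95]))
```

The endpoint products come out around 65% and 89%, the same ballpark as the quoted 64 to 93%.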

Claim made by the text: The Romans had mass literacy, which declined during the Dark Ages.
Question I answered: “[% population able to read at American 1st grade level during Imperial Rome] – [% population able to do same in the same geographic area in 1000 AD] = N%. What is N?”
My answer: I estimate that there is a 95% chance that [Roman literacy] – [Dark Ages literacy] = (0 to 60, normal distribution).

Data Points:

The highest estimate of literacy in the Roman Empire I found is 30%. Call it twice that for the ability to read at a 1st-grade level in cities. So the range is 5%-60%.

The absolute lowest the European 1000AD literacy rate could be is 0; the highest estimate is 5% (and that was in the 1300s, which were probably more literate).  From the absence of graffiti I infer that even minimal literacy achievement dropped a great deal. 

Maximum = 60% - 0% = 60%
Minimum = 5% - 5% = 0%
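The same bounding arithmetic as a quick sketch (nothing here beyond the guesses above):

```python
# Plausible ranges for 1st-grade-level literacy, from the estimates above.
roman = (0.05, 0.60)     # Roman Empire
medieval = (0.00, 0.05)  # same area, ~1000 AD

drop_min = roman[0] - medieval[1]  # best case for the Dark Ages: 5% - 5% = 0%
drop_max = roman[1] - medieval[0]  # worst case: 60% - 0% = 60%
print(f"literacy drop: {drop_min:.0%} to {drop_max:.0%}")
```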

Claim made by the text: “What some people describe as “the invasion of Rome by Germanic barbarians”, Walter Goffart describes as “the Romans incorporating the Germanic tribes into their citizenry and setting them up as rulers who reported to the empire.” and “Rome did fall, but only because it had voluntarily delegated its own power, not because it had been successfully invaded”.”
Question I answered: What is my confidence that this accurately represents historian Walter Goffart’s views?
My answer: I estimate that after 10 hours of research, I would be 68-92% confident this describes Goffart’s views accurately.

Data points:

  • https://blog.oup.com/2005/12/the_fall_of_rom/
    • Peter Heather: The most influential statement of this, perhaps, is Walter Goffart’s brilliant aphorism that the fall of the Western Empire was just ‘an imaginative experiment that got a little out of hand’. Goffart means that changes in Roman policy towards the barbarians led to the emergence of the successor states, dependant on barbarian military power and incorporating Roman institutions, and that the process which brought this out was not a particularly violent one.
  • https://www.goodreads.com/book/show/1680215.Barbarians_and_Romans_A_D_418_584?from_search=true 
    • “Despite intermittent turbulence and destruction, much of the Roman West came under barbarian control in an orderly fashion.”
  • https://press.princeton.edu/titles/1036.html
    • “Despite intermittent turbulence and destruction, much of the Roman West came under barbarian control in an orderly fashion. Goths, Burgundians, and other aliens were accommodated within the provinces without disrupting the settled population or overturning the patterns of landownership. Walter Goffart examines these arrangements and shows that they were based on the procedures of Roman taxation, rather than on those of military billeting (the so-called hospitalitas system), as has long been thought. Resident proprietors could be left in undisturbed possession of their lands because the proceeds of taxation, rather than land itself, were awarded to the barbarian troops and their leaders.”
  • https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1478-0542.2008.00523.x
    • “the barbarians and Rome, instead of being in constant conflict with each other, occupied a joined space, a single world in which both were entitled to share. What we call the barbarian invasions was primarily a drawing of foreigners into Roman service, a process sponsored, encouraged, and rewarded by Rome. Simultaneously, the Romans energetically upheld their supremacy. Many barbarian peoples were suppressed and vanished; the survivors were persuaded and learned to shoulder Roman tasks. Rome was never discredited or repudiated. The future endorsed and carried forward what the Empire stood for in religion, law, administration, literacy, and language.”
  • https://books.google.com/books/about/Rome_s_Fall_and_After.html?id=55pDIwvWnpoC “Rome’s Fall and After” indicates Goffart does believe Rome fell, but suggests its main problem was Constantinople, not interactions with barbarians at all. (So top percentage correct = 90%.)

This seems pretty conclusive that Goffart thought the barbarians were accommodated rather than conquering the area (so my minimum estimate that the summary is correct must be greater than 50%). However, it’s not clear how much power he thought they took, or whether Rome fell at all. This could be a poor restatement, or it could be that if I read Goffart’s actual work and not just book-jacket blurbs I’d agree.

Question I answered: Chance Elizabeth would recommend this book as a reliable source on the topic to an interested friend, if they asked tomorrow (8/31/19)?
My answer: There is a (91-99%, normal distribution) chance I would recommend this to a friend.

99% is in range, because I definitely think it’s worth reading if they’re interested in the topic. I think I’d recommend it before Fate of Rome, because it establishes more concretely that Rome fell.

Is there a chance I wouldn’t recommend it?

  • They could have already read it
  • They could be more interested in disease and climate change (in which case I’d recommend Fate)
  • I could forget about it
  • I could not want to take responsibility for their reading.
  • I could be unconfident that Fall was better than what they’d find by chance.
    • This feels like the biggest one.
    • But the question doesn’t say “best book”, it just says “reliable source”
    • Only real qualm on that is normal history book qualms

So the minimum is 91%

Bonus Claims

These are the claims I didn’t check, but other people made predictions on how I would guess. Note that at this point the predictions haven’t been very accurate- whether they’re net positive depends on how you weight the questions. And Foretold is beta software that hasn’t prioritized export yet, so I’m using *shudder* screenshots. But for the sake of completeness:

Claim made by the text: Roman pottery pre-400 AD was high quality and uniform.
Predicted answer: 29.9% to 63.5% chance this claim is correct

Claim made by the text: “In Britain new coins ceased to reach the island, except in tiny quantities, at the beginning of the fifth century”
Predicted answer: 31.6% to 94% chance this claim is correct

Claim made by the text: [average German soldiers’ height] – [average Roman soldiers’ height] = N feet. What is N?
Predicted answer: -0.107 to 0.61 ft.

Claim made by the text: The Romans chose to cede local control of Gaul to the Germanic tribes in the 400s, as opposed to losing them in a military conquest.
Predicted answer: 28.5% to 85.6% chance this claim is correct

Claim made by the text: The Germanic tribes who took over local control of Gaul in the 400s reported to the Emperor.
Predicted answer: 4.77% to 50.9% chance this claim is correct

Conclusion

The Fall of Rome did very well on spot-checking- no outright disagreements at all, just some uncertainties. 

On the other hand, The Fall of Rome barely mentions disease and doesn’t mention climate change at all, which the previous book I checked, The Fate of Rome, claimed were the main causes of the fall. The Fate of Rome did almost as well in epistemic spot checking as Fall, yet they can’t both be correct. What’s going on? I’m going to address that in a separate post, because I want to be able to link to it without forcing people to read this entire spot check.

In terms of readability, Fall starts slowly but the second half is by far the most interested I have ever been in pottery or archeology.

[Many thanks to my Patreon patrons and Parallel Forecast for financial support for this post]

Does combining epistemic spot checks and prediction markets sound super fun to you? Good news: We’re launching round three of the experiment today, with prizes of up to $65/question. The focal book will be The Unbound Prometheus, by David S. Landes, on the Industrial Revolution. The market opens today and will remain open until 10/27 (inclusive).

Epistemic Spot Check: The Fate of Rome (Kyle Harper)

Introduction

Epistemic spot checks are a series in which I select claims from the first few chapters of a book and investigate them for accuracy, to determine if a book is worth my time. This month’s subject is The Fate of Rome, by Kyle Harper, which advocates for the view that Rome was done in by climate change and infectious diseases (which were exacerbated by climate change).

This check is a little different than the others, because it arose from a collaboration with some folks in the forecasting space. Instead of just reading and evaluating claims myself, I took claims from the book and made them into questions on a prediction market, for which several people made predictions of what my answer would be before I gave it. In some but not all cases I read their justifications (although not numeric estimates) before making my final judgement.

I expect we’ll publish a post-mortem on that entire process at some point, but for now I just want to publish the actual spot check. Because of the forecasting crossover, this spot check will differ from those that came before in the following ways:

  1. Claims are formatted as questions answerable with a probability. If a claim lacks a question mark, the implicit question is “what is the probability this is true?”.
  2. Questions have a range of specificity, to allow us to test what kind of ambiguities we can get away with (answer: less than I used).
  3. Some of my answers include research from the forecasters, not just my own.
  4. Due to timing issues, I finished the book and a second book on the topic before I did the research for the spot check.
  5. Due to our procedure for choosing questions, I didn’t investigate all the claims I would have liked to.

Claims

Original Claim: “Very little of Roman wealth was due to new technological discoveries, as opposed to diffusion of existing tech to new places, capital accumulation, and trade.”
Question: What percentage of Rome’s gains came from technological gains, as opposed to diffusion of technical advantages, capital accumulation, and trade?

1%-30% log distribution
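To make “1%-30% log distribution” concrete, here’s one way to back out lognormal parameters from that range (a minimal sketch; treating the range as the central 90% of a lognormal is my assumption, not the post’s):

```python
import numpy as np

low, high = 0.01, 0.30  # the stated 1%-30% range
z90 = 1.645             # z-score that brackets the central 90% of a normal

# A lognormal whose central 90% runs from `low` to `high`.
mu = (np.log(low) + np.log(high)) / 2             # log-space midpoint
sigma = (np.log(high) - np.log(low)) / (2 * z90)  # log-space spread

rng = np.random.default_rng(0)
samples = rng.lognormal(mu, sigma, 100_000)
print(np.percentile(samples, [5, 50, 95]))  # ~[0.01, 0.055, 0.30]
```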

Data:

  • The Fall of Rome talks extensively about how trade degraded when the Romans left and how that lowered the standard of living.
  • https://brilliantmaps.com/roman-empire-gdp/ shows huge differences in GDP by region, implying there was a big opportunity to grow GDP through trade and diffusion of existing tech. That means potential growth just from catch up growth was > 50%.
  • Wikipedia doesn’t even show growth in GDP per capita (with extremely wide error bars) from 14AD to 150AD.
  • Rome did have construction and military tech (https://en.wikipedia.org/wiki/Roman_technology)
  • It also seems likely that expansion created a kind of Dutch disease, in which capable, ambitious people were drawn to fighting and/or politics, and not discovering new tech.
  • One potential place where Roman technology could have contributed greatly to the economy was lowering disease via sanitation infrastructure. According to Fate of Rome and my own research, this didn’t happen; sanitation was not end-to-end, and therefore you had all the problems inherent in city living.

Original Claim: “The blunt force of infectious disease was, by far, the overwhelming determinant of a mortality regime that weighed heavily on Roman demography”
Question: Even during the Republic and successful periods of the empire, disease burden was very high in cities.

60%-90% normal distribution

The wide spread and lack of inclusion of 100% in the confidence interval stem from the lack of precision in the question. What distinguishes “high” from “very high”, and are we counting diseases of malnutrition or just infectious ones? I expected to knock this one out in two minutes, but ended up feeling the current estimates of disease mortality lack the necessary precision to answer it.

Data:

Original Claim: “The main source of population growth in the Roman Empire was not a decline in mortality but, rather, elevated levels of fertility”
Question: When Imperial Rome’s population was growing, it was due to a decline in death rates, rather than elevated fertility.

80-100%, c – log distribution

“Elizabeth, that rephrase doesn’t look much like that original claim” you might be saying quietly to yourself. You are correct- I misread the claim in the book, at least twice, and didn’t catch it until this write-up. This isn’t as bad as it seems. The claims are not quite opposite, because my rephrase was trying to explain variation in growth within Rome, and the book was trying to explain absolute levels, or possibly the difference relative to today.

Back when he was doing biology, Richard Dawkins had a great answer to the common question “how much is X due to genetics, as opposed to environment?”. He said asking that is like asking how much of a rectangle’s area is due to its length, as opposed to its width. It’s a nonsensical question. But you can assign proportionate responsibility for the change in area between two rectangles.

Fate‘s original claim was much like asking how much of a trait is due to genetics. This is bad and it should feel bad, but it’s a very common mistake, and I give Fate a lot of credit for providing the underlying facts such that I could translate it into the “what causes differences between things” question without even noticing.

Since weak framing wasn’t a systemic problem in the book and it presented the underlying facts well enough for me to form my own, correct, model, I’m not docking Fate very harshly on this one.

Original Claim: “The size of Roman merchant ships was not exceeded until the 15th century, and the grain ships were not surpassed until the 19th.”
Question: “The size of Roman merchant ships was not exceeded until the 15th century, and the grain ships were not surpassed until the 19th.”

0-10% log distribution.

This is true within the Mediterranean, but if you check Chinese ships it’s obvious it’s off by at least 100 years, possibly more.

Original Claim: too diffuse to quote.
Question: The Roman Empire suffered greatly from intense epidemics, more so than did the Republic or 700-1000 AD Europe.

90-100% c – log distribution

https://en.wikipedia.org/wiki/List_of_epidemics shows a pretty clear presence of epidemics in the relevant period and absence in the others.

Original Claim: too diffuse to quote.
Question: Starvation was not a big concern in Imperial Rome’s prime.

80-100% c – log distribution

https://en.wikipedia.org/wiki/List_of_famines shows Roman famine in 441 BC (the Republic) and isolated famines from 370 on, but pretty much validates that during the prime empire, mass starvation was not a threat.

Conclusion:

My fact checking found two flaws:

  1. An inaccuracy in when ships that exceeded the size of Roman trade ships were built, and/or forgetting China was a thing. The inaccuracy does not invalidate the author’s point, which is that the Romans had better shipping technology than the cultures that followed them.
  2. Bad but extremely common framing for the relative effects of disease mortality vs. birth rates.

These are well within tolerances for things a book might get wrong. I’m happy I read this book, and would read another by the same author (with perhaps more care when it refers to happenings outside of Europe), but they are not jumping to the top of my list.

Is The Fate of Rome correct in its thesis that Rome was brought down by climate change and disease? I don’t know. It certainly seems plausible, but the book is clearly advocating for a position rather than trying to present all the relevant facts. There are obvious political implications to Fate even if it doesn’t spell them out, so I would want to read at least one of the 80 million other books on the Fall of Rome before I developed an opinion. I’m told some people think it had to do with something military, which Fate barely deigns to mention. In the future I hope to be a good enough prediction-maker to put a range on this anyway, however wide it must be, but for now I’m succumbing to the siren song of “but you could just get more data”.

[Many thanks to my Patreon patrons and Parallel Forecast for financial support for this post]

PS. This book is the first step of an ongoing experiment with epistemic spot checks and prediction markets. If you would like to participate in or support these experiments, please e-mail me at elizabeth-at-this-domain-name. The next round is planned to start Saturday August 24th.