I am really enjoying this project and plan on continuing. I have started a subproject of reading self-help-ish books that claim scientific validity, to determine what the correct standard of science is for them. Some books have clearly failed this standard (looking at you, Upward Spiral). Some have had very accurate citations and strong evidence for the problem they are solving, but their prescriptions have not actually been tested (Exercise for Mood and Anxiety). Others have wrong or weak evidence, and yet the prescriptions have been very helpful to me or people I know. It would be a great loss to throw those out, but we can’t read every self-help book just to see what works.
My solution is to create a new axis for epistemic spot checks: model quality. A high quality model has the following characteristics:
- As simple as possible, but no simpler.
- Explained well in the book (you should be able to teach it afterwards).
- Makes explicit predictions that are testable on a reasonable timescale. Ideally, the predictions are novel or counterintuitive.
- Recognizes that the technique may not work and explicitly talks about when that might happen, how to recognize it, and what to do.
How do these strike people? Are there other criteria you would want information on? Other thoughts?
Comment: For me, the key is whether or not the author has explicitly identified the proxy measure that will act as the reinforcement, and discusses getting over the key perceptual threshold where you can see the reward for yourself rather than checking in with an external source. Almost no one meets this standard, but the authors who come closer (Shinzen Young, Olivia Cabane, and Philip Tetlock, for example) are much, much better IMO.