I am really enjoying this project and plan on continuing. I have started a subproject of reading self-help-ish books that claim scientific validity, to determine what the correct standard of science is for them. Some books have clearly failed this (looking at you, Upward Spiral). Some have had very accurate citations and strong evidence for the problem they are solving, but their prescriptions have not actually been tested (Exercise for Mood and Anxiety). Others have wrong or weak evidence, and yet the prescriptions have been very helpful to me or people I know. It would be a great loss to throw those out, but we can’t read every self-help book just to see what works.
My solution is to create a new axis for epistemic spot checks: model quality. A high quality model has the following characteristics:
- As simple as possible, but no simpler.
- Explained well in the book (you should be able to teach it afterwards).
- Makes explicit predictions that are testable on a reasonable timescale. Ideally the predictions are novel or counterintuitive.
- Recognizes that the technique may not work, and explicitly discusses when that might happen, how to recognize it, and what to do instead.
How do these criteria strike people? Are there others you would want information on? Other thoughts?