Predictions As A Substitute For Reviews

Update: You can see how this worked out after 6 months here. Short version: it did not escape the weekly review trap.


There is a lot of praise among my friends for weekly reviews. There is noticeably less doing of weekly reviews, even among the people doing the praising. It’s a very hard habit to keep up. We could spend a lot of time diving into why that is and how to fix it in a systematic way… or I could tell you how I use PredictionBook to get many of the promised benefits of weekly reviews without any willpower.

In a nutshell, when I find myself making a choice about how to spend time or money that’s dependent on some expectation, I write out the expectation as a prediction (with % likelihood) in PredictionBook, which then automatically prompts me to evaluate the prediction at a date I set. This has so many benefits I’m struggling to figure out where to start. If I had to sum them up in a phrase, it would be “more contact with reality”. But to expand on that…

What contact with reality may look like


Making the prediction forces me to assess my anticipated outcomes and do an expected value calculation. This in turn forces me to be explicit about what I value, and how I will know if I got it. It also makes it explicit when I’m doing something because I expect the median or modal outcome to be good, vs. when I’m doing it for the long tail of unlikely but super good outcomes. It also gives me a chance to say “huh, that EV is not competitive” and do something else.
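The expected-value calculation above can be sketched in a few lines. This is purely illustrative — the helper function and all the numbers are made up for the example, not anything PredictionBook computes for you:

```python
# Hypothetical expected-value check for a decision backed by a prediction.
# The function and all numbers are illustrative, not from PredictionBook.

def expected_value(p_success, value_if_success, value_if_failure, cost):
    """EV of taking an action, given a predicted probability of success."""
    return p_success * value_if_success + (1 - p_success) * value_if_failure - cost

# "Reading this book leads to behavior change" at 20%, worth (say) 50 units
# if it works, 0 if not, costing 10 units of time and money:
ev = expected_value(0.20, 50, 0, 10)
print(ev)  # 0.0 -> right at the margin; any lower probability means skip it
```

Writing the probability down is what makes this arithmetic possible at all — and it makes the long-tail case explicit, since a small `p_success` with a huge `value_if_success` is a visibly different bet than a likely modest win.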

Evaluating the prompt gives me more information on how my plans are working out. Andy Matuschak talks about the dilemma of knowledge work, where you can’t make a plan for success but leaving things open ended often leads nowhere. His solution to this is to give himself unstructured time, but then look back and see if it worked out the way he wanted, and if it didn’t, do something different next time. Predictions provide a really natural, lightweight prompt for this reflection.

Sometimes I withdraw a question for being poorly phrased- often when my answer makes the decision sound “bad” but I feel like it was actually “good”, or vice versa. This is an easy way to notice I was wrong about what I valued and look for what is actually driving my decisions.

Then there are the framing benefits. It’s easier to view something as an experiment if you’re explicitly writing down “5% chance of success”, which makes failure feel less bad. I didn’t fail to make something work, I executed the correct algorithm and it failed to produce results this particular time.

Making an explicit prediction and writing it down closes open loops in my head, freeing RAM for other things.

It’s helpful to notice when the habit doesn’t trigger. For example, I have a meeting planned today (when I’m writing this) that I just “didn’t feel inspired” to make a prediction for. When I looked closer I realized that’s because I was kind of eh on this meeting but felt obliged to take it, and was avoiding that fact. I don’t know how this is going to work out because I’m choosing to work on this blog post before tackling that, but I still credit the prediction habit with noticing, which is the first step towards better choices.

Also it seems like this might make my predictions more accurate, which has all kinds of applications. But to be honest that’s not what’s reinforcing this.


So we’re on the same page, here are some sample predictions, and the decisions that rested on them:

  • Reading Pavlov and his School will lead to behavior change in the area of learning or sleep (20%)

    • Should I buy a physical copy of Pavlov and his School and spend the time to read it?

  • Talking with my friend Jane on 8/1 will be energizing (95%)

    • Should I take a call with Jane?

  • Bob the recruiter will describe a job that I am at all capable of and interested in doing (3%)

    • Should I take a call with Bob?

  • I will judge the seminar on 7/28 to have been worth the interruption of flow (40%)

    • Should I spend money and time on this seminar?

  • Sam will finish task Y by 7/31 (20%)

    • No immediate decision riding on this one, but it sure seems useful to calibrate on how well I can predict a project partner’s productivity.

  • I will play with an Oculus Quest at least an hour a week (98%)

    • Should I buy a Quest?

  • California will catch fire to the point I need to keep my windows closed (95%)

    • Should I buy an air conditioner?

  • Company Z will offer me at least an hour of work (15%)

    • No immediate decision

  • Supplement S will improve my sleep to the point I don’t need a nap (3%)

    • Should I buy and use supplement S?
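On the accuracy point: once predictions like these resolve, calibration can be checked with something as simple as a Brier score. A minimal sketch, assuming a hand-built list of (probability, outcome) pairs rather than PredictionBook's actual export format:

```python
# Hypothetical resolved predictions: (stated probability, did it happen?).
# Not PredictionBook's real data format; the pairs are just an illustration.
resolved = [
    (0.95, True),   # "Talking with Jane will be energizing"
    (0.20, False),  # "Reading the book will lead to behavior change"
    (0.03, False),  # "The recruiter will describe an interesting job"
    (0.98, True),   # "I will play with the Quest an hour a week"
]

# Brier score: mean squared error between stated probability and outcome.
# 0.0 is perfect; always guessing 50% scores 0.25.
brier = sum((p - (1.0 if hit else 0.0)) ** 2 for p, hit in resolved) / len(resolved)
print(brier)  # about 0.011 here; lower is better calibrated
```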


If I were starting from scratch, here are the instructions I would want to receive (many of which I did receive, from Raemon):

  1. Create an account on PredictionBook.

  2. Go to Settings to correct your time zone and set your prediction default to “visible to creator” rather than “visible to public” (unless you’d like them to be public by default).

  3. Create a handful of predictions that will resolve over the next week (PredictionBook will e-mail you when they should be resolved).

    1. The goal is twofold:

      1. Quickly get feedback about how this feels for you.

      2. Get yourself in the habit of making and resolving predictions.

    2. When making predictions, try to home in on things that are decision-relevant to you: things you would do differently depending on whether the prediction comes true.

  4. Create a link to PredictionBook in your browser so you can access it easily.

  5. As you take more actions based on predictions (implicit or explicit), notice that you are doing so, and register the predictions in PredictionBook.

  6. Resolve predictions as you are prompted to do so.

  7. After a week or two, check in with yourself about how you feel about the project. For calibration: I found creating the initial predictions kind of a pain, but by a week in the project was naturally rewarding and required no willpower on my part.

This should take < 30 minutes, and it should take < 45 minutes total to get a good sense of whether this is working for you.

Why Does This Work?

I’ve been thinking a lot about flow and distraction recently. One thing I’ve noticed is that it takes an awfully long time, measured in hours, to get into the mindset for certain tasks. Those tasks then feel amazing, unless I’m pulled out of them prematurely, which hurts (and makes it take even longer to get into them the next time). There are a lot of implications of this that I’ll hopefully get to in some other post; the relevance here is that I think weekly reviews might be one of those tasks that requires a lot of time to get into the required headspace and hurts to leave prematurely. This makes them costly- much more costly than I would have estimated before I learned to bill tasks for their prep time. It’s also something of an all-or-nothing task- doing 50% of your weekly review does not get you nearly 50% of the benefits of a full weekly review.

But predictions, both making and evaluating, integrate into my life pretty naturally most of the time, and scale gracefully. Writing them up can take as little as 15 seconds, and when it takes longer, it’s because I’ve discovered something important I need to work out. Doing this fits in naturally with the process of making plans, so I don’t need to spend time getting into the right head space. Evaluating them is usually trivial- and if it’s not, it’s highlighting a problem with my models.

Tips and Tricks

“I will enjoy X” is very rarely the right prediction for me to make, because enjoyment is a tricky thing for me. I don’t like admitting I didn’t enjoy something, and sometimes I will burn a lot of energy trying to make something enjoyable, which can be a bad decision even if it works. My equivalent is “X will be energizing.”

When trying to create predictions, think about what you’ll be spending money and time on in the next week. How do you think those things are going to go? What outcomes would make you change your decision?


My original inspiration for this experiment, Raemon, hasn’t gotten the benefits I describe. He’s more focused on improving his prediction calibration and accuracy, and so far hasn’t made as many decision-relevant predictions. I’ve encouraged him to try it my way and hopefully he’ll comment after he’s spent a few weeks on that.

PredictionBook is pleasantly lightweight but it’s already a little cluttered with predictions. If I really do solidify this habit I may need to find or create a better system.

I’ve been doing this for about a month. People have kept up weekly reviews for longer before letting them fall by the wayside. I’m writing this up now because the act of solidifying a habit makes me worse at writing about it. I’ve set a reminder for 12/1 to write an update.

Thanks to Raemon for pushing me to a better version of this post and for initiating the experiment in the first place.

This is pretty inside baseball and I suspect boring to most readers. You have my blessing to skip it.

Two weeks ago I published Knowledge Bootstrapping v0.1, an algorithm for turning questions into answers. Since then I’ve gotten a moderate amount of feedback, a major theme of which is lack of clarity in certain sections. That’s entirely fair, but not something I immediately know how to fix. In an attempt to make some headway on the problem, I recorded in detail the steps I took in a recent attempt at using the method.

This isn’t really a central case of use- the question was one of business norms, not The Truth, and the research I did primed me to think of the solution myself more than it provided the answer. But non-central examples are sometimes more useful, and the cost of sharing is low, so here we go.



  1. 12:30 Working on a project with someone, get to the point where we think “we should have a white paper”

  2. Problem: I’ve never written a white paper, and he has but isn’t really clear on what the goals or audience are. I decide this is worth an hour to get right.

  3. 12:45 Follow my own advice to come up with questions to investigate.

  4. Step 1 is to make a mind map of questions. Discover Whimsical’s free plan only allows four creations, ever, even if you delete old work.

    1. Go on FB to complain to friend who recommended it to me

    2. 12:57. Try to create in Roam instead
      Start page currently looks like:

  5. 12:58 Google “white papers 101”

  6. 12:58: first hit . Seems…uncomprehensive? Not high level enough? Extremely vague to the point of uselessness?

  7. 12:59 fight with NYTimes paywall . Article itself is useless but links to some promising pages.

    1. 1:01 . It’s basically a link to

      1. 1:02 no longer has the PDF referenced above

    2. 1:02

      1. Not useless. Begin notes on the top level Roam page, because I expect to only extract a few things from it.

      2. White papers can be aimed at a variety of audiences. Know which one you want and tailor it to them.

        1. 1:06 He doesn’t say it explicitly, but presumably a white paper is aimed at people you want things from. I think about who [partner] and I want things from

          1. This is kind of a dumb revelation. If white papers weren’t aimed at who we wanted things from, we should write a different kind of thing aimed at people we wanted things from.

          2. Rest of page is kind of vapid but I’m feeling pretty inspired.







  • 1:10 Remember to tag parent Roam page as a question. I would show you what the Roam page looks like now, but apparently I forgot to take a picture.

  • Take that prompt and think a bit, till I come up with an algorithm I’m happy with


Knowledge Bootstrapping Steps v0.1


I wanted to have this published weeks ago, but the process of writing it highlighted some gaps, and the experiments to fill in those gaps highlighted more gaps, and… it’s become clear there’s always going to be one more experiment to run, and if I’m going to publish at all, it will be with gaps.  Furthermore, publishing may elicit feedback that alleviates the need to run some of the experiments, or suggests better ones. So with shaking fingers I am going to publish an incomplete instruction set, and hope it does more good than harm.

The goal of this system is to turn questions into answers. It does this by collecting questions and assembling data that relate to them until you can come to a confident answer, or at least an answer with a confidence interval.  This system has worked amazingly well for me, but I am one person. I am very sure that 

  1. It will need to be tweaked for other people 
  2. There are important steps I’m doing without noticing and thus didn’t know to write down. 
  3. This isn’t its final form for me either.

One of my goals in publishing is to get data from other people on how it doesn’t work for them, so I can universalize it. To that end:

  1. You can comment here
  2. You can email me at
  3. You can talk to me for an hour
  4. I am offering a limited amount of coaching on using the method. If you’re interested in this, please reach out at that same e-mail address.

The following system was designed to be run on Roam, which when I started was free and freely available. There is now a subscription fee, and off and on a waitlist. For me it’s worth it (well, will be, when they get around to charging legacy accounts), but I don’t know where the line of “worth it” falls for other people. I’ve got a list of untested alternatives at the bottom of this post, but the instructions will assume you’re working in Roam.


The Basics

The basic gameplay loop of this system is:

  1. Create a question
  2. Read sources to gather evidence bearing on question, and save it in notes
  3. Assemble evidence new and old into an answer (synthesis).

That looks pretty simple, but raises a number of questions:

  1. How do you create a good question?
  2. How do you choose what sources to read?
  3. How do you know what to write down?
  4. How do you synthesize the information into an answer?

I’ve linked to the questions where I have stabs at an answer; the remainder is still an intuition-based art to me.


The Details

Those steps are also pretty vague. Here’s the detailed version:

  1. What do you want to know? Try to be as specific as possible, but no simpler (i.e. leave room to be surprised). Create a Question page for this question (example, template, justification). 
  2. Break that question into smaller sub-questions, and continue until it feels indivisible at your current level of knowledge. Create links on the main question to all of your sub questions (example).
  3. Answer any questions you are confident you already know the answer to
  4. Pick the question that feels most live/interesting to you
  5. Write down everything you already know about the question, including things that don’t seem relevant but keep popping into your brain.
  6. Find sources for it (“How do I do that, Elizabeth?” “Still figuring out how to make that knowledge explicit, reader, but take a look here”).
  7. Pick the most promising looking source
    1. Create a Source page for it (template, example). Fill in metadata, including (in order of declining importance):
      1. URL: If you need to look something up about the work, you want to make sure you’re referring to the same version every time.
      2. Pagination: the type of pagination system you’re using (Kindle location and PDF page number are my most popular) and its total count in the version of the work you’re using. This is very useful when there are multiple versions or editions of a work and you want to make sure your references line up across them.
      3. Preparatory brain dump: write down everything you already know about this work, or the topic in general (if you didn’t get it out of your head already). 
        1. Things this might include:
          1. Feelings on the author
          2. Why I chose to read this particular work
          3. Questions I have about the book already.
        2. This is important to me for reducing drag while reading: Everything I write in this spot would otherwise be taking up mental RAM.
      4. Subject: Tags for the subjects of the work (e.g., history, biology, economics, the Great Depression, covid). You can have as many tags as you want, and more is generally better, limited by your patience.
      5. Author: This seems really important for collating work from specific authors, but to be honest it’s never come up for me.
      6. Table of contents (long form only): Writing this down (or more likely, copy/pasting it and then correcting the formatting) is helpful to me in orienting to the topics and structure of the book. Theoretically I could achieve the same thing by reading the table of contents and paying appropriate attention, but in practice I don’t.
      7. Year published
    2. The last section on the form is Notes. 
      1. It will look something like this:
        1. Notes
        2. Introduction Pre-read
        3. Introduction
        4. Introduction summary
        5. Chapter 1 Pre-read
        6. Chapter 1
        7. Chapter 1 summary
      2. A “pre-read” is reading a few paragraphs from the beginning or ending of a chapter, and summarizing what they tell you about the chapter (its topic, approach, criticisms the author felt the need to preemptively defend against…). This will make it easier to determine which of the chapter’s claims are important.
      3. The actual chapter content looks like the following:
        1. Chapter 1
          1. Claim: Columbus set sail from Europe in 1492
            1. Pg 1
          2. Claim: Columbus’s expedition was financed by Queen Isabel of Spain
            1. Pg 2
        2. Not sure what to write down? You probably need to refine your question.
        3. Write down something relevant to one of your questions? Create a block reference to it under the Question’s evidence bullet, or copy/paste it if you’re not using Roam
      4. The chapter summary is what you think it is. I typically have to refer back to my notes to create the summary, but sometimes realize in the process of summarizing that I’ve left out something key and need to add it back in.
      5. Rinse and repeat.
      6. Did you record a claim that seems particularly interesting, important to the author’s thesis, or suspicious? Time to recurse. Create a Question for which that claim could be evidence (e.g. “Who financed Columbus’s expedition?”). Repeat this process with that question, from the beginning.
  8. Assemble claims from multiple sources under questions until you feel confident in an answer. 
  9. Change title of page to Synthesis: My Conclusion (example).
  10. Work your way up until you’re at your top question.
  11. You should now have answers to all your sub questions, allowing you to answer it, or failing that be confident in your uncertainty.
    1. If a work is only ever going to be referenced by one question, you don’t need to create a separate Source page for it, just give it a bullet under evidence (example).



The following are oft-mentioned competitors to Roam, although to the best of my knowledge none of them have the block embedding that I consider crucial:

  1. Google Docs
  2. Obsidian (locally stored)
  3. Workflowy (lacks bidirectional links and cross-list embedding)
  4. Notion (I’ve used this for large projects and hate it)


EDIT 2020-07-20: Added some small changes based on feedback, and remembered to give credit to the Long Term Future Fund for funding this research.

Breaking Questions Down

Previously I talked about discovering that my basic unit of inquiry should be questions, not books. But what I didn’t talk about was how to generate those questions, and how to separate good questions from bad. That’s because I don’t know yet; my own process is mysterious and implicit to me. But I can give a few examples.

For any given question, your goal is to disambiguate it into smaller questions that, if an oracle gave you the answers to all of them, would allow you to answer the original question. Best case scenario, you repeat this process and hit bedrock, an empirical question for which you can find accurate data. You feed that answer into the parent question, and eventually it bubbles up to answering your original question.
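The recursive structure described above — break a question down until you hit bedrock, answer the bedrock questions, and let answers bubble back up — can be sketched as a toy tree. The class, field names, and example answers here are all invented for illustration; real synthesis is judgment, not code:

```python
# A toy sketch of the recursive question decomposition described above.
# Class, fields, and example answers are invented for illustration only.

class Question:
    def __init__(self, text, subs=None, answer=None):
        self.text = text
        self.subs = subs or []   # sub-questions this question breaks into
        self.answer = answer     # set directly only at "bedrock" questions

    def resolve(self):
        """Answer bedrock questions directly; otherwise bubble up sub-answers."""
        if self.answer is not None:
            return self.answer
        sub_answers = [q.resolve() for q in self.subs]
        # Real synthesis is judgment; here we just collect the pieces.
        self.answer = dict(zip((q.text for q in self.subs), sub_answers))
        return self.answer

q = Question("Should I get a dental cleaning while covid is present?", subs=[
    Question("How likely is infection per visit?", answer="low, with precautions"),
    Question("What's the cost of delaying a cleaning?", answer="small for 6 months"),
])
print(q.resolve())
```

When a bedrock answer is only a range rather than a point estimate, that uncertainty propagates upward through every parent, exactly as the next paragraph describes.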

That does not always happen. Sometimes the question is one of values, not facts. Sometimes sufficient accurate information is not available, and you’re forced to use a range- an uncertainty that will bubble up through parent answers. But just having the questions will clarify your thoughts and allow you to move more of your attention to the most important things.

Here are a few examples.  First, a reconstructed mind map of my process that led to several covid+economics posts. In the interests of being as informative as possible, this one is kind of stylized and uses developments I didn’t have at the time I actually did the research.

[Mind map: vague covid panic]

If you’re curious about the results of this, the regular recession post is here and the oil crisis post is here.

Second, a map I created but have not yet researched, on the cost/benefit profile of a dental cleaning while covid is present.

[Mind map: risk model of dental cleanings in particular]

Aside: Do people prefer the horizontal or vertical displays? Vertical would be my preference, but Whimsical does weird things with spacing so the tree ends up with a huge width either way.

Honestly this post isn’t really done; I have a lot more to figure out when it comes to how to create good questions. But I wanted to have something out before I published v0.1 of my Grand List of Steps, so here we are.

Many thanks to Rosie Campbell for inspiration and discussion on this idea.

How to Find Sources in an Unreliable World

I spent a long time stalling on this post because I was framing the problem as “how to choose a book (or paper. Whatever)?”. The point of my project is to be able to get to correct models even from bad starting places, and part of the reason for that goal is that assessing a work often requires the same skills/knowledge you were hoping to get from said work. You can’t identify a good book in a field until you’ve read several. But improving your starting place does save time, so I should talk about how to choose a starting place.

One difficulty is that this process is heavily adversarial. A lot of people want you to believe a particular thing, and a larger set don’t care what you believe as long as you find your truth via their amazon affiliate link (full disclosure: I use amazon affiliate links on this blog). The latter group fills me with anger and sadness; at least the people trying to convert you believe in something (maybe even the thing they’re trying to convince you of). The link farmers are just polluting the commons.

With those difficulties in mind, here are some heuristics for finding good starting places.

  • Search “best book TOPIC” on google
    • Most of what you find will be useless listicles. If you want to save time, ignore everything on a dedicated recommendation site that isn’t five books.
    • If you want to evaluate a list, look for a list author with deep models on both the problem they are trying to address, and why each book in particular helps educate on that problem.  Examples:
    • A bad list will typically have a topic rather than a question it is trying to answer, and will talk about why the books it recommends are generically good, rather than how they address a particular issue. Quoting consumer reviews is an extremely bad sign; I’ve never seen it done by anything other than a content farm.
  • Search for your topic on Google Scholar
    • Look at highly cited papers. Even if they’re wrong, they’re probably important for understanding what else you read.
    • Look at what they cite or are cited by
    • Especially keep an eye out for review articles
  • Search for web forums on your topic (easy mode: just check reddit). Sometimes these will have intro guides with recommendations, sometimes they will have where-to-start posts, and sometimes you can ask them directly for recommendations. Examples:
  • Search Amazon for books on your topic. Check related books as well.
  • Ask your followers on social media. Better, announce what you are going to read and wait for people to tell you why you are wrong (appreciate it, Ian). Admittedly there’s a lot of prep work that goes into having friends/a following that makes this work, but it has a lot of other benefits so if it sounds fun to you I do recommend it. Example:
  • Ask an expert. If you already know an expert, great. If you don’t, this won’t necessarily save you any time, because you have to search for and assess the quality of the expert.
  • Follow interesting people on social media and squirrel away their recommendations as they make them, whether they’re relevant to your current projects or not.

Emotional Blocks as Obstacles to Learning

My goal was to come up with a system for reading a book. I eventually identified that as the wrong goal, but came up with a pretty great system for the much better goal of “how do I answer a question?” But developing that was not the hardest or most time-consuming part of my research over the last 3 months (plus additional time working on covid). I feel weird talking about it, but the truth is, a lot of that time was spent overcoming emotional issues around learning.

For example, I think I’ve discussed before (but could not find a link on) how I kind of have two modes when reading: too credulous, looking for reasons a work could be true, and too antagonistic, looking for reasons to not only disagree, but dismiss entirely. 

I introspected on this, and eventually figured out that at a deep level I felt I needed to believe books, that I was being bad if I disagreed with them. So of course I developed tools to prove my disagreements, which led to the bifurcation- either I was giving in to the original impulse or its counter, without the option of responsiveness.

This same block on challenging authority was behind my urge to start from a book rather than a question. I not only believed I needed to trust in an authority (as deemed by the publishing-industrial-complex) to give me answers, I needed to let them set the questions. 

A natural question here is “why are you so sure this emotional work helped this specific task?”  My evidence is how the needs-to-be-retitled-epistemic-spot-check project has evolved- I started out having books thrown at me and reading them with the goal of forming a yes-no verdict. I’ve now progressed to starting with a question (such as “What can the 1973  Oil Crisis tell us about supply shocks?”) that serves a specific purpose and finding work that advances it. In that post I derived a model from disparate pieces of information I sought out to answer specific questions. No books, no teachers, just me. I also have pretty extensive notes of the work I was doing and how it tracked to specific improvements, although they’re intensely personal and I will not be sharing them.

It’s not totally solved yet. I really wanted to read a book on the oil crisis, for exactly the reasons books are a bad solution to the problem. I wanted someone to give me the answer. But I can at least see the desire for what it is, recognize that it’s not a desire to learn, and react appropriately.

Another natural question is “why does this happen?” There are two answers to that- why I specifically formed that belief, and why those circumstances had the power to make me form that belief. I have some guesses for the former; my failed-state middle school where I was dependent on the goodwill of teachers for my physical safety is a top contender, although the “best” schools arguably use a more subtle stick to inculcate this attitude even more strongly.

For the latter, I have a very rough theory, dependent on the types of knowledge I described here. Very crudely, I believe trauma instills scientific-type knowledge that is factually false but locally adaptive. False beliefs need more protection to be maintained than true beliefs, so the belief both calcifies, making it unresponsive to new information, and lays a bunch of emotional landmines around itself to punish you for getting too close to it. This cascades into punishing you for learning at all, because you might learn something that corrects your false-but-useful model.

How did I escape these traps? I have some guesses for that, but I can only confidently identify the things I did immediately before breakthroughs. I’ve been building these skills for 10 years, so there’s a lot of background knowledge and skill feeding into that success I don’t have conscious access to. I think instructions covering only the immediate predecessors of breakthroughs could range from useless to outright harmful. So my next project is figuring out more about this process, and hopefully finding generalizable techniques for improvement.

Part of finding techniques that work on people other than me is talking to other people. If you’re interested in ways of contributing to or just using the knowledge, here are some options:

  1. Already done this work? Tell me more. You can comment here, e-mail elizabeth@, or use this anonymous form.
  2. Want to hear ideas and try them out yourself, ideally reporting back to me? Sign up for this google group.
  3. Want to devote several months of your life to working with me on this intensely? A number of things would have to go right for that work out, but if they all did, I think the potential is enormous. Email me at elizabeth @ this-domain


EDIT 7/15: Greetings again, Hacker News readers. This piece is the penultimate post in a long saga. If it strikes a chord with you, I’d encourage you check it out from the beginning, and check in in a few days for climax and epilogue. You can also follow me via RSS, via email by clicking the Follow button at the bottom right of the page, or on Twitter.  Also while this post wasn’t on Patreon, many of mine are, and support is greatly appreciated. You can also Talk To Me For An Hour, although we’ll see how that stands up to the influx of new readers.


Types of Knowledge

This is a system for sorting types of knowledge. There are many like it, but this one is mine.

First, there is knowledge you could regurgitate on a test. In any sane world this wouldn’t be called knowledge, but the school system sure looks enthusiastic about it, so I had to mention it. Examples:

  • Reciting the symptoms of childbed fever on command 
  • Reciting Newton’s first law of motion
  • Reciting a list of medications’ scientific and brand names
  • Reciting historical growth rate of the stock market
  • Reciting that acceleration due to gravity on Earth is 9.807 m/s²


Second, there is engineering knowledge- something you can repeat and get reasonably consistent results. It also lets you hill climb to local improvements. Examples:

  • Knowing how to wash your hands to prevent childbed fever and doing so
  • Driving without crashing
  • Making bread from a memorized recipe.
  • What are the average benefits and side effects from this antidepressant?
  • Knowing how much a mask will limit covid’s spread
  • Investing in index funds
  • Knowing that if you shoot a cannon ball of a certain weight at a certain speed, it will go X far.
  • Knowing people are nicer to me when I say “please” and “thank you”


Third, there is scientific knowledge. This is knowledge that lets you generate predictions for how a new thing will work, or how an old thing will work in a new environment, without any empirical knowledge.


  • Understanding germ theory of disease so you can take procedures that prevent gangrene and apply them to childbed fever.
  • Knowing the science of baking so you can create novel edible creations on your first try.
  • Knowing enough about engines and batteries to invent hybrid cars.
  • Actually understanding why any of those antidepressants works, in a mechanistic way, such that you can predict who they will and won’t work for.
  • A model of how covid is spread through aerosols, and how that is affected by properties of covid and the environment.
  • Having a model of economic change that allows you to make money off the stock market in excess of its growth rate, or know when to pull out of stocks and into crypto.
  • A model of gravity that lets you shoot a rocket into orbit on the first try.
  • A deep understanding of why certain people’s “please”s and “thank you”s get better results than others.
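The engineering/scientific split in the cannonball and gravity examples can be sketched in code. This is a toy illustration, not from the post: the function names and the "measured" numbers are invented, and the scientific model is idealized drag-free projectile motion.

```python
import math

# Engineering knowledge: a memorized table of past results. It repeats what
# has already been tried, but fails on anything new. (Values invented for
# illustration; roughly consistent with the drag-free model below.)
measured_ranges = {  # launch speed (m/s) -> observed range (m), flat ground
    50: 255,
    100: 1020,
}

def engineering_range(speed):
    """Look up a previously observed result; raises KeyError on a new speed."""
    return measured_ranges[speed]

# Scientific knowledge: a model that predicts outcomes for conditions never
# tried before, here the textbook range formula R = v^2 * sin(2*theta) / g.
def scientific_range(speed, angle_deg=45.0, g=9.807):
    """Predict range from first principles for any speed, angle, or gravity."""
    theta = math.radians(angle_deg)
    return speed**2 * math.sin(2 * theta) / g

# The model answers questions the table cannot: a speed never measured,
# or a new environment entirely (the Moon, g ≈ 1.62 m/s^2).
scientific_range(75)          # new speed, no prior measurement needed
scientific_range(75, g=1.62)  # same cannon, new world
```

The difference the post describes is visible in the signatures: the lookup table only interpolates among things already done, while the model takes the environment (`g`) as a parameter and so transfers to places no one has fired a cannon yet.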


Engineering knowledge is a lot cheaper to get and maintain than scientific knowledge, and most of the time it works out. Maybe I pay more than I needed to for a car repair; I’ll live (although for some people the difference is very significant). You need scientific knowledge to do new things, which either means you’re trying something genuinely new, or you’re trying to maintain an existing system in a new environment.

I don’t know if you’ve noticed, but our environment was changing pretty rapidly before a highly contagious, somewhat deadly virus was released on the entire world, and while that has made things simpler in certain ways (such as my daily wardrobe), it has ultimately made it harder to maintain existing systems. This requires scientific knowledge to fix; engineering won’t cut it.

And it requires a lot of scientific knowledge at that: far more than I have time to generate. I could trust other people’s answers, but credentials and authority have never looked more useless, and identifying people I trust on any given subject is almost as time consuming as generating the answers myself. And I don’t know what to do about that.


What to write down when you’re reading to learn

One of the hardest questions I’ve had to answer as part of the project formerly known as epistemic spot checks is: “how do I know what to write down?”

This will be kind of meandering, so here’s the take home. 

For shallow research:

  • Determine/discover what you care about before you start reading.
  • Write down anything relevant to that care.

For deep research:

  • Write down anything you find interesting.
  • Write down anything important to the work’s key argument.
  • Write down anything that’s taking up mental RAM, whether it seems related or interesting or not. If you find you’re doing this a lot, consider you might have a secret goal you don’t know about.
  • The less 1:1 the correspondence between your notes and the author’s words, the better. Copy/pasting requires little to no engagement; forming alternate theories for explanations spread over an entire chapter requires a lot.


Now back to our regularly scheduled blog post.

Writing down a thing you’ve read (/heard/etc) improves your memory and understanding, at the cost of disrupting the flow of reading. Having written a thing down makes that one thing easier to rediscover, at the cost of making every other thing you have or will ever write down a little harder to find. Oh, and doing the math on this tradeoff while you’re reading is both really costly and requires knowing the future. 

I would like to give you a simple checklist for determining when to save a piece of information. Unfortunately I never developed one. There are obvious things like “is this interesting to me (for any reason)?” and “is this key to the author’s argument?”, but those never got rid of the nagging feeling that I was losing information I might find useful someday, and specifically that I was doing shallow research (which implies taking the author’s word for things) and not deep (which implies making my own models). 

The single most helpful thing in figuring out what to write down was noticing when my reading was slowing down, which typically meant either there was a particular fact that needed to be moved from short- to long-term storage, or that I needed to think about something. Things in these categories need to be written down and thought about regardless of their actual importance, because their perceived importance is eating up resources, and 30 seconds writing something down to regain those resources is a good trade even if I never use that information again. If I have one piece of advice, it’s “learn to recognize the subtle drag of something requiring your attention.”

An obvious question is “how do I do that though?”. I’m a mediocre person to answer this question because I didn’t set out to learn the skill, I just noticed I was doing it. But for things in this general class, the best thing I have found to do is get yourself in a state where you are very certain you have no drag (by doing a total brain dump), do some research, and pay attention to when drag develops. 

But of course it’s much better if my sense of “this is important, record it” corresponds with what is actually important. The real question here is “Important to what?” When I was doing book-based reviews, the answer at best was “the book’s thesis”, which as previously discussed gives the author a huge amount of power to control the narrative. But this became almost trivial when I switched the frame to answering a specific set of questions. As long as I had a very clear goal in mind, my subconscious would do most of the work. 

This isn’t a total solution though, because of the vast swath of territory labeled “getting oriented with what I don’t know”. For example right now I want to ask some specific questions about the Great Depression and what it can tell us about the upcoming economic crisis, but I don’t feel I know enough. It is very hard to get oriented with patchwork papers: you typically need books with cohesive narratives, and then to find other ways to undo the authors’ framing. Like a lot of things, this is solved by going meta. “I want to learn enough about the Great Depression that I have a framework to ask questions about parallels to the current crisis” was enough to let me evaluate different “Top Books about the Great Depression” lists and identify the one whose author was most in line with my goals (it was the one on fivebooks, which seems to be the case much more often than chance).

I mentioned “losing flow” as a cost of note taking in my opening, but I’m not actually convinced that’s a cost. Breaking flow also means breaking the author’s hold on you and thinking for yourself. I’ve noticed a pretty linear correlation between “how much does this break flow?” and “how much does this make me think for myself and draw novel conclusions?”. Copy/pasting an event that took place on a date doesn’t break flow but doesn’t inspire much thought. Writing down your questions about information that seems to be missing, or alternate interpretations of facts, takes a lot longer.

Which brings me to another point: for deep reading, copy/pasting is almost always Doing It Wrong. Even simple paraphrasing requires more engagement than copy/pasting. Don’t cargo cult this, though: there are only so many ways to say simple facts, and grammar exercises don’t actually teach you anything about the subject.

So there is my very unsatisfying list of how to know what to write down when you’re reading to learn. I hope it helps.

Where to Start Research?

When I began what I called the knowledge bootstrapping project, my ultimate goal was “Learn how to learn a subject from scratch, without deference to credentialed authorities”. That was too large and unpredictable for a single grant, so when I applied to LTFF, my stated goal was “learn how to study a single book”, on the theory that books are the natural subcomponents of learning (discounting papers because they’re too small). This turned out to have a flawed assumption baked into it.

As will be described in a forthcoming post, the method I eventually landed upon involves starting with a question, not a book. If I start with a book and investigate the questions it brings up (you know, like I’ve been doing for the last 3-6 years), the book is controlling which questions get brought up. That’s a lot of power to give to something I have explicitly decided not to trust yet. 


  • When reading The Unbound Prometheus, I took the book’s word that a lower European birth rate would prove Europeans were more rational than Asians and focused on determining whether Europe’s birth rates were in fact lower (answer: it’s complicated), when on reflection it’s not at all clear to me that lower birth rates are evidence of rationality.
  • “Do humans have exactly 4 hours of work per day in them?” is not actually a very useful question. What I really wanted to know is “when can I stop beating myself up for not working?“, and the answer to the former doesn’t really help me with the latter. Even if humans on average have 4 hours, that doesn’t mean I do, and of course it varies by circumstances and type of work… and even “when can I stop beating myself up?” has some pretty problematic assumptions built into it, such as “beating myself up will produce more work, which is good.” The real question is something like “how can I approach my day to get the most out of it?”, and the research I did on verifying a paper on average daily work capacity didn’t inform the real question one way or the other.


What would have been better is if I’d started with the actual question I wanted to answer, and then looked for books that had information bearing on that question (including indirectly, including very indirectly). This is what I’ve started doing.

This can look very different depending on what type of research I’m doing. When I started doing covid research, I generated a long list of fairly shallow questions. Most of these questions were designed to inform specific choices, like “when should I wear what kind of mask?” and “how paranoid should I be about people without current symptoms?”, but some of them were broader and designed to inform multiple more specific questions, such as “what is the basic science of coronavirus?”. These broader, more basic questions helped me judge the information I used to inform the more specific, actionable questions (e.g., I saw a claim that covid lasted forever in your body the same way HIV does, which I could immediately dismiss because I knew HIV inserted itself into your DNA and coronaviruses never enter the nucleus).



I used to read a lot of nonfiction for leisure. Then I started doing epistemic spot checks (taking selected claims from a book and investigating them for truth value, to assess the book’s overall credibility) and stopped being able to read nonfiction without doing that, unless it was by one of a very short list of authors who’d made it onto my trust list. I couldn’t take the risk that I was reading something false and would absorb it as if it were true (or true but unrepresentative, and absorb it as representative). My time spent reading nonfiction went way down.

About 9 months ago I started taking really rigorous notes when I read nonfiction. The gap in quality of learning between rigorous notes and my previous mediocre notes was about the same as the gap between doing an epistemic spot check and not. My time spent reading nonfiction went way up (in part because I was studying the process of doing so), but my volume of words read dropped precipitously.

And then three months ago I shifted my unit of inquiry from “a book” to “a question”. I’m sure you can guess where this is going: I read fewer words, but gained more understanding per word, and especially more core (as opposed to shell or test) understanding.

The first two shifts happened naturally, and while I missed reading nonfiction for fun and with less effort, I didn’t feel any pull toward the old way after I discovered the new one. Giving up book-centered reading has been hard. Especially after five weeks of frantic covid research, all I wanted was to be sat down and told what questions were important, and perhaps be walked through some plausible answers. I labeled this a desire to learn, but when I compared it to question-centered research, it became clear that’s not what it was. Maybe it was a desire to go through the motions of learning something, but it was not a desire to answer a question I had, and it was not prioritized by the importance of any question. It was best classified as leisure in the form of learning, not the resolving of a curiosity. And if I wanted leisure, better to consume something easier and less likely to lead me astray, so I started reading more fiction, and the rare nonfiction of a type that did not risk polluting my pool of data. And honestly I’m not sure even that’s so safe: humans are built to extract lessons from fiction too.

Put another way: I goal factored (figured out what I actually wanted from) reading a nonfiction book, and the goal was almost never best served by using a nonfiction book as a starting point. Investigating a question I cared about was almost always better for learning (even if it did eventually cash out in reading a book), and fiction was almost always better for leisure, in part because it was less tiring, and thus left more energy for question-centered learning when that was what I wanted.


Turns Out Interruptions Are Bad, Who Knew?

I’ve been known to accuse people who say open offices are “fine with a few mitigations” of not paying attention to the cost of their mitigations. I believed they shrank their thoughts down to the point that not much was lost from an interruption, at the cost of only being able to think the thoughts that fit in that interval. Any thought that would take too long to process could not be conceived of.

I’ve also been known to accuse people who advocate for deep, uninterrupted work without the distractions of social media of “not understanding how valuable social media is to me”. And besides, my workflow works best with frequent breaks (that I choose the timing of) because I “background process”.


I maintained this illusion until, inspired by a stupidly expensive device that only does one thing, I taped my old phone to a bluetooth keyboard* and began to write in offline mode. It was immediately a magical experience. It was so *quiet*. I could go on my porch and write and it was quiet. My thoughts got much larger because I wasn’t subconsciously afraid I’d interrupt them. I began to feel angry at my laptop. Why did it insist on hurting me so much? Why couldn’t it be pure like the offline phone/keyboard experience? Why couldn’t I just create things?

[* I only found two bluetooth keyboards with an inlay for phones/tablets. The other one lacks a built in battery, and shipped with a broken key]

Locally, this lasted for about 10 minutes before the social media cravings kicked in. But that was enough. I deeply resented work for taking me away from my magic writing device and making everything so noisy.

Since I started, my desire for using the quiet device has waxed and waned. At first I thought this was reflective of some deep pathology, but after two weeks it looks a lot more like “sometimes the benefits of quiet outweigh the benefits of being able to look stuff up, and sometimes they don’t”. I’ve also changed how I interact on a connected device: I’m more likely to close Signal, less likely to open Twitter. This is less due to a utilitarian calculation of the costs and benefits of Twitter, and more that once I’m in a good state, I can notice how switching to Twitter is almost physically painful.

The problem is that I wasn’t wrong that social media was genuinely very valuable to me, and that was before we were all locked inside. But I definitely was wrong that getting those benefits was costless, in a way very analogous to the mistakes I accused others of making. I’m glad I have the information now, but I haven’t figured out what to do with it yet.