Predictions As A Substitute For Reviews

Introduction

There is a lot of praise among my friends for weekly reviews. There is noticeably less doing of weekly reviews, even among the people doing the praising. It’s a very hard habit to keep up. We could spend a lot of time diving into why that is and how to fix it in a systematic way… or I could tell you how I use PredictionBook to get many of the promised benefits of weekly reviews without any willpower.

In a nutshell, when I find myself making a choice about how to spend time or money that’s dependent on some expectation, I write out the expectation as a prediction (with % likelihood) in PredictionBook, which then automatically prompts me to evaluate the prediction at a date I set. This has so many benefits I’m struggling to figure out where to start. If I had to sum them up in a phrase, it would be “more contact with reality”. But to expand on that…

What contact with reality may look like

Benefits

Making the prediction forces me to assess my anticipated outcomes and do an expected value calculation. This in turn forces me to be explicit about what I value, and how I will know if I got it. It also makes it explicit when I’m doing something because I expect the median or modal outcome to be good, vs. when I’m doing it for the long tail of unlikely but super good outcomes. It also gives me a chance to say “huh, that EV is not competitive” and do something else.
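The expected value check above can be sketched in a few lines. The numbers here are entirely hypothetical (a made-up book-reading decision, not one from my actual list):

```python
def expected_value(outcomes):
    """Sum of probability * value over mutually exclusive outcomes."""
    return sum(p * v for p, v in outcomes)

# Hypothetical decision: 20% chance reading a book changes my behavior
# (worth ~40 hours to me), 80% chance I only get ~2 hours of value,
# against a cost of ~10 hours to read it.
ev = expected_value([(0.20, 40), (0.80, 2)])
cost_hours = 10
worth_doing = ev > cost_hours  # 9.6 < 10: "huh, that EV is not competitive"
```

Note that the long-tail case falls out naturally: a low-probability outcome with a huge value can dominate the sum even when the median outcome is mediocre.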

Evaluating the prompt gives me more information on how my plans are working out. Andy Matuschak talks about the dilemma of knowledge work, where you can’t make a plan for success but leaving things open ended often leads nowhere. His solution to this is to give himself unstructured time, but then look back and see if it worked out the way he wanted, and if it didn’t, do something different next time. Predictions provide a really natural, lightweight prompt for this reflection.

Sometimes I withdraw a question for being poorly phrased- often when my answer makes the decision sound “bad” but I feel like it was actually “good”, or vice versa. This is an easy way to notice I was wrong about what I valued and look for what is actually driving my decisions.

Then there are the framing benefits. It’s easier to view something as an experiment if you’re explicitly writing down “5% chance of success”, which makes failure feel less bad. I didn’t fail to make something work, I executed the correct algorithm and it failed to produce results this particular time.

Making an explicit prediction and writing it down closes open loops in my head, freeing RAM for other things.

It’s helpful to notice when the habit doesn’t trigger. For example, I have a meeting planned today (when I’m writing this) that I just “didn’t feel inspired” to make a prediction for. When I looked closer I realized that’s because I was kind of eh on this meeting but felt obliged to take it, and was avoiding that fact. I don’t know how this is going to work out because I’m choosing to work on this blog post before tackling that, but I still credit the prediction habit with noticing, which is the first step towards better choices.

Also it seems like this might make my predictions more accurate, which has all kinds of applications. But to be honest that’s not what’s reinforcing this.
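For the accuracy angle, one lightweight way to score a batch of resolved predictions is the Brier score: the mean squared error between the stated probability and the 0/1 outcome. The resolved predictions below are invented for illustration, not my actual record:

```python
def brier_score(predictions):
    """predictions: (stated_probability, came_true) pairs.
    Lower is better: always guessing 50% scores 0.25, a perfect
    predictor scores 0.0."""
    return sum((p - int(came_true)) ** 2 for p, came_true in predictions) / len(predictions)

# Invented sample of resolved predictions.
resolved = [(0.95, True), (0.20, False), (0.03, False), (0.40, True)]
score = brier_score(resolved)
```

On this made-up sample the score is about 0.10, comfortably better than chance; tracking the number over time is one way to tell whether the habit is actually sharpening your predictions.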

Examples

So we’re on the same page, here are some sample predictions, and the decisions that rested on them:

  • Reading Pavlov and his School will lead to behavior change in the area of learning or sleep (20%)

    • Should I buy a physical copy of Pavlov and his School and spend the time to read it?

  • Talking with my friend Jane on 8/1 will be energizing (95%)

    • Should I take a call with Jane?

  • Bob the recruiter will describe a job that I am at all capable of and interested in doing (3%)

    • Should I take a call with Bob?

  • I will judge the seminar on 7/28 to have been worth the interruption of flow (40%)

    • Should I spend money and time on this seminar?

  • Sam will finish task Y by 7/31 (20%)

    • No immediate decision riding on this one, but it sure seems useful to calibrate on how well I can predict a project partner’s productivity.

  • I will play with an Oculus Quest at least an hour a week (98%)

    • Should I buy a Quest?

  • California will catch fire to the point I need to keep my windows closed (95%)

    • Should I buy an air conditioner?

  • Company Z will offer me at least an hour of work (15%)

    • No immediate decision

  • Supplement S will improve my sleep to the point I don’t need a nap (3%)

    • Should I buy and use supplement S?

Instructions

If I were starting from scratch, here are the instructions I would want to receive (many of which I did receive, from Raemon):

  1. Create an account on PredictionBook.com

  2. Go to Settings to correct your time zone and set your prediction default to “visible to creator” rather than “visible to public” (unless you’d like them to be public by default).

  3. Create a handful of predictions that will resolve over the next week (PredictionBook will e-mail you when they should be resolved).

    1. The goal is twofold:

      1. Quickly get feedback about how this feels for you.

      2. Get yourself in the habit of making and resolving predictions.

    2. When making predictions, try to home in on things that are decision-relevant to you- things you would do differently depending on whether the prediction comes true.

  4. Bookmark https://predictionbook.com/predictions/new in your browser so you can access it easily.

  5. As you take more actions based on predictions (implicit or explicit), notice that you are doing so, and register the predictions in PredictionBook.

  6. Resolve predictions as you are prompted to do so.

  7. After a week or two, check in with yourself about how you feel about the project. For calibration: I found creating the initial predictions kind of a pain, but within a week the project was naturally rewarding and required no willpower on my part.

The setup should take less than 30 minutes, and it should take less than 45 minutes total to get a good sense of whether this is working for you.

Why Does This Work?

I’ve been thinking a lot about flow and distraction recently. One thing I’ve noticed is that it takes an awfully long time, measured in hours, to get into the mindset for certain tasks. Those tasks then feel amazing, unless I’m pulled out of them prematurely, which hurts (and makes it take even longer to get into them the next time). There are a lot of implications of this that I’ll hopefully get to in some other post; the relevance here is that I think weekly reviews might be one of those tasks that requires a lot of time to get into the required headspace and hurts to leave prematurely. This makes them costly- much more costly than I would have estimated before I learned to bill tasks for their prep time. It’s also something of an all-or-nothing task- doing 50% of your weekly review does not get you nearly 50% of the benefits of a full weekly review.

But predictions, both making and evaluating, integrate into my life pretty naturally most of the time, and scale gracefully. Writing them up can take as little as 15 seconds, and when it takes longer, it’s because I’ve discovered something important I need to work out. Doing this fits in naturally with the process of making plans, so I don’t need to spend time getting into the right headspace. Evaluating them is usually trivial- and if it’s not, it’s highlighting a problem with my models.

Tips and Tricks

“I will enjoy X” is very rarely the right prediction for me to make, because enjoyment is a tricky thing for me. I don’t like admitting I didn’t enjoy something, and sometimes I will burn a lot of energy trying to make something enjoyable, which can be a bad decision even if it works. My equivalent is “X will be energizing.”

When trying to create predictions, think about what you’ll be spending money and time on in the next week. How do you think those things are going to go? What outcomes would make you change your decision?

Caveats

My original inspiration for this experiment, Raemon, hasn’t gotten the benefits I describe. He’s more focused on improving his prediction calibration and accuracy, and so far hasn’t made as many decision-relevant predictions. I’ve encouraged him to try it my way, and hopefully he’ll comment after he’s spent a few weeks on that.

PredictionBook is pleasantly lightweight but it’s already a little cluttered with predictions. If I really do solidify this habit I may need to find or create a better system.

I’ve been doing this for about a month. People have kept up weekly reviews for longer before letting them fall by the wayside. I’m writing this up now because the act of solidifying a habit makes me worse at writing about it. I’ve set a reminder for 12/1 to write an update.

Thanks to Raemon for pushing me to a better version of this post and for initiating the experiment in the first place.