Luck based medicine: inositol

Summary: Do you have weird digestive symptoms and anxiety or depression? Consider trying inositol (affiliate link), especially if the symptoms started after antibiotics.

Epistemic status: I did some research on this 10 years ago and didn’t write it down. In the last nine months I recommended it to a few people who (probably) really benefited from it. My track record on this kind of suggestion is mixed; the Apollo Neuro was mostly a dud but iron testing caught a lot of issues. 

Background

Inositol is a form of sugar. It’s used in messaging between cells in your body, which means it could in theory do basically anything. In practice, supplementation has been found maybe-useful in many metabolic and psychiatric issues, although far from conclusively. 

There are a few sources of inositol: it’s in some foods, especially fruit. Your body naturally manufactures some. And some gut bacteria produce it. If your gut bacteria are disrupted, you may experience a sudden drop in available inositol, which can lead to a variety of symptoms including anxiety and depression.

Anecdata

Inositol deficiency (probably) hit me hard 9 years ago, when I went on a multi-month course of some very hardcore antibiotics to clear out a suspected SIBO infection.

Some background: My resistance to Seasonal Affective Disorder has been thoroughly tested and found triumphant.  At the time I took those antibiotics I lived in Seattle, which gets 70 sunny days per year, concentrated in the summer. This was a step up from my hometown, which got 60 sunny days per year. I briefly experimented with sunshine in college, where I saw 155 sunny days per year, a full 75% of the US average. The overcast skies never bothered me, and I actively liked Seattle’s rain. So when I say I do not get Seasonal Affective Disorder or light-sensitive depression, I want you to understand my full meaning. Darkness has no power over me. 

That is, until I took those antibiotics. I was fine during the day, but as soon as the sun set (which was ~5PM; it was Seattle in January) I experienced crushing despair. I don’t know if it was the worst depression of my life, or just the most obvious, because it went from 0 to doom in 15 minutes.

Then I started taking inositol and the despair went away, even though I was on the antibiotics for at least another month. After the course finished I took some probiotics, weaned off the inositol, and was fine. 

About six months ago, my friend David MacIver mentioned a combination of mood and digestive issues, and I suggested inositol. It worked wonders.

He’s no longer quite so deliriously happy as described in the tweet, but he still describes it as “everything feels easier”, and every time he lowers his dose things get worse. So it seems likely this is a real and important effect.

He’s also tried probiotics. It took several false starts, but after switching brands and taking them very consistently he was able to lower his dosage of inositol, and the effects of going off it are less dramatic (although still present).

He has a fairly large Twitter following, so when he tweeted about inositol he inspired a fair number of people to try it. He estimates maybe 50 people tried it, and 2-5 reported big benefits. So ballpark a 4-10% response rate (among people who read the tweet and thought it looked applicable). And most people respond noticeably to the first dose (not me, I think it took me a few days, but most people), so it’s very easy to test.

A second friend also got very good results, although they have more problems and haven’t tested themselves as rigorously as David, so causality is more questionable. 

Fun fact: because inositol is a cheap, white, water-soluble powder, it’s used as a cutting agent for multiple street drugs. It’s also the go-to substitute for cocaine in movies. So if cocaine, heroin, or meth have weirdly positive effects on you, inositol might be worth checking out.

Practicalities

Anything with a real effect can hurt you. Even that totally safe magic bracelet I recommended maybe gave someone nightmares. But as these things go, inositol is on the safer end to experiment with. The fact that it’s both a natural food product and produced endogenously gives you good odds, especially compared to cocaine. OTOH the fact that it has a lot of sources makes it harder to dose – after a few months David found that his initial dose was too high and induced brain fog, and he needed to lower it. 

I have a vague impression that quality isn’t an issue with inositol the way it is with some vitamins, so I just linked to the cheapest ones. 

In terms of dose: the standard dosage is 0.5-2g/day. David started at the high end of that but is now down to 0.5-1g. I can’t remember what I did. If you try it, start small and experiment.

If you do try it, I’d love it if you filled out this form letting me know how it went.

Thanks to David MacIver for sharing his data.

Epistemic Legibility

Tl;dr: being easy to argue with is a virtue, separate from being correct.

Introduction

Regular readers of my blog know of my epistemic spot check series, where I take claims (evidential or logical) from a work of nonfiction and check to see if they’re well supported. It’s not a total check of correctness: the goal is to rule out things that are obviously wrong/badly formed before investing much time in a work, and to build up my familiarity with its subject. 

Before I did epistemic spot checks, I defined an easy-to-read book as, roughly, imparting an understanding of its claims with as little work from me as possible. After epistemic spot checks, I started defining easy to read as “easy to epistemic spot check”. It should be as easy as possible (but no easier) to identify what claims are load-bearing to a work’s conclusions, and figure out how to check them. This is separate from correctness: things can be extremely legibly wrong. The difference is that when something is legibly wrong someone can tell you why, often quite simply. Illegible things just sit there at an unknown level of correctness, giving the audience no way to engage.

There will be more detailed examples later, but real quick: “The English GDP in 1700 was $890324890. I base this on $TECHNIQUE interpretation of tax records, as recorded in $REFERENCE” is very legible (although probably wrong, since I generated the number by banging on my keyboard). “Historically, England was rich” is not. “Historically, England was richer than France” is somewhere in-between. 

“It was easy to apply this blog post format I made up to this book” is not a good name, so I’ve taken to calling the collection of traits that make things easy to check “epistemic legibility”, in the James C. Scott sense of the word legible. Legible works are (comparatively) easy to understand, they require less external context, their explanations scale instead of needing to be tailored for each person. They’re easier to productively disagree with, easier to partially agree with instead of forcing a yes or no, and overall easier to integrate into your own models.

[Like everything in life, epistemic legibility is a spectrum, but I’ll talk about it mostly as a binary for readability’s sake]

When people talk about “legible” in the Scott sense they often mean it as a criticism, because pushing processes to be more legible cuts out illegible sources of value. One of the reasons I chose the term here is that I want to be very clear about the costs of legibility and the harms of demanding it in excess. But I also think epistemic legibility leads people to learn more correct things faster and is typically underprovided in discussion.

If I hear an epistemically legible argument, I have a lot of options. I can point out places where I think the author missed data that impacts their conclusion, or made an illogical leap. I can notice when I know of evidence supporting their conclusions that they didn’t mention. I can see implications of their conclusions that they didn’t spell out. I can synthesize it with other things I know that the author didn’t include.

If I hear an illegible argument, I have very few options. Perhaps the best case scenario is that it unlocks something I already knew subconsciously but was unable to articulate, or needed permission to admit. This is a huge service! But if I disagree with the argument, or even just find it suspicious, my options are kind of crap. I can write a response of equally low legibility, which is unlikely to improve understanding for anyone. Or I can write up a legible case for why I disagree, but that is much more work than responding to a legible original, and often more work than went into the argument I’m responding to, because it’s not obvious what I’m arguing against. I need to argue against many more things to be considered comprehensive. If you believe Y because of X, I can debate X. If you believe Y because …:shrug:… I have to imagine every possible reason you could do so, counter all of them, and then still leave myself open to something I didn’t think of. Which is exhausting.

I could also ask questions, but the more legible an argument is, the easier it is to know which questions matter and how to ask them productively.

I could walk away, and I am in fact much more likely to do that with an illegible argument. But that ends up taxing legibility: legible works are easier to argue with and so attract more pushback, which is the opposite of the incentive I want to create.

Not everything should be infinitely legible. But I do think more legibility would be good on most margins, that choices of the level of legibility should be made more deliberately, and that we should treat highly legible and illegible works more differently than we currently do. I’d also like a common understanding of legibility so that we can talk about its pluses and minuses, in general or for a particular piece.

This is pretty abstract and the details matter a lot, so I’d like to give some better examples of what I’m gesturing at. In order to reinforce the point that legibility and correctness are orthogonal, this will be a four-quadrant model.

True and Legible

Picking examples for this category was hard. No work is perfectly true and perfectly legible, in the sense of being absolutely impossible to draw an inaccurate conclusion from and having no possible improvements to legibility, because reality is very complicated and communication has space constraints. Every example I considered, I could see a reason someone might object to it. And the things that are great at legibility are often boring. But it needs an example so…

Acoup

Bret Devereaux over at Acoup consistently writes very interesting history essays that I found both easy to check and mostly true (although with some room for interpretation, and not everyone agrees). Additionally, a friend of mine who is into textiles tells me his textile posts were extremely accurate. So Devereaux does quite well on truth and legibility, despite bringing a fair amount of emotion and strong opinions to his work. 

As an example, here is a paragraph from a post arguing against descriptions of Sparta as a highly equal society.

But the final word on if we should consider the helots fully non-free is in their sanctity of person: they had none, at all, whatsoever. Every year, in autumn by ritual, the five Spartan magistrates known as the ephors (next week) declared war between Sparta and the helots – Sparta essentially declares war on part of itself – so that any spartiate might kill any helot without legal or religious repercussions (Plut. Lyc. 28.4; note also Hdt. 4.146.2). Isocrates – admittedly a decidedly anti-Spartan voice – notes that it was a religious, if not legal, infraction to kill slaves everywhere in Greece except Sparta (Isoc. 12.181). As a matter of Athenian law, killing a slave was still murder (the same is true in Roman law). One assumes these rules were often ignored by slave-holders of course – we know that many such laws in the American South were routinely flouted. Slavery is, after all, a brutal and inhuman institution by its very nature. The absence of any taboo – legal or religious – against the killing of helots marks the institution as uncommonly brutal not merely by Greek standards, but by world-historical standards.

Here we have some facts on the ground (Spartiates could kill their slaves, killing slaves was murder in most contemporaneous societies), sources for some but not all of them (those parentheticals are highly readable if you’re a classicist, and workable if you’re not), the inference he drew from them (Spartans treated their slaves unusually badly), and the conclusions he drew from that (Sparta was not only inequitable, it was unusually inequitable even for its time and place).

Notably, the entire post relies heavily on the belief that slavery is bad, which Devereaux does not bother to justify. That’s a good choice because it would be a complete waste of time for modern audiences – but it also makes this post completely unsuitable for arguing with anyone who disagreed. If for some reason you needed to debate the ethics of slavery, you would need a work that makes a legible case for that claim in particular, not one that takes it as an axiom.

Exercise for Mood and Anxiety

A few years ago I ESCed Exercise for Mood and Anxiety, a book that aims to educate people on how exercise can help their mental health and then give them the tools to do so. It did really well at the former: the logic was compelling and the foundational evidence was well cited and mostly true (although exercise science always has wide error bars). But out of 14 people who agreed to read the book and attempt to exercise more, only three reported back to me and none of them reported an increase in exercise. So EfMaA is true and epistemically legible, but nonetheless not very useful. 

True but Epistemically Illegible

You Have About Five Words is a poetic essay from Ray Arnold. The final ~paragraph is as follows:

If you want to coordinate thousands of people…

You have about five words.

This has ramifications on how complicated a coordinated effort you can attempt.

What if you need all that nuance and to coordinate thousands of people? What would it look like if the world was filled with complicated problems that required lots of people to solve?

I guess it’d look like this one.

I think the steelman of its core claim – that humans are bad at remembering long nuanced writing, and that the more people you are communicating with, the more you need to simplify your writing – is obviously true. This is good, because Ray isn’t doing crap to convince me of it. He cites no evidence and gives no explanation of his logic. If I thought nuance increased with the number of readers, I would have nothing to say other than “no you’re wrong” or write my own post from scratch, because he gives no hooks to refute. If someone tried to argue that you get ten words rather than five, I would think they were missing the point. If I thought he had the direction right but got the magnitude of the effect wrong enough that it mattered (and he was a stranger rather than a friend), I would not know where to start the discussion.

[Ray gets a few cooperation points back by explicitly labeling this as poetry, which normally I would be extremely happy about, but it weakened its usefulness as an example for this post so right this second I’m annoyed about it.]

False but Epistemically Legible

Mindset

I think Carol Dweck’s Mindset and associated work is very wrong, and I can produce large volumes on specific points of disagreement. This is a sign of a work that is very epistemically legible: I know what her cruxes are, so I can say where I disagree. For all the shit I’ve talked about Carol Dweck over the years, I appreciate that she made it so extraordinarily easy to do so, because she was so clear on where her beliefs came from. 

For example, here’s a quote from Mindset:

All children were told that they had performed well on this problem set: “Wow, you did very well on these problems. You got [number of problems] right. That’s a really high score!” No matter what their actual score, all children were told that they had solved at least 80% of the problems that they answered.

Some children were praised for their ability after the initial positive feedback: “You must be smart at these problems.” Some children were praised for their effort after the initial positive feedback: “You must have worked hard at these problems.” The remaining children were in the control condition and received no additional feedback.

And here’s Scott Alexander’s criticism:

This is a nothing intervention, the tiniest ghost of an intervention. The experiment had previously involved all sorts of complicated directions and tasks, I get the impression they were in the lab for at least a half hour, and the experimental intervention is changing three short words in the middle of a sentence.

And what happened? The children in the intelligence praise condition were much more likely to say at the end of the experiment that they thought intelligence was more important than effort (p < 0.001) than the children in the effort condition. When given the choice, 67% of the effort-condition children chose to set challenging learning-oriented goals, compared to only 8% (!) of the intelligence-condition. After a further trial in which the children were rigged to fail, children in the effort condition were much more likely to attribute their failure to not trying hard enough, and those in the intelligence condition to not being smart enough (p < 0.001). Children in the intelligence condition were much less likely to persevere on a difficult task than children in the effort condition (3.2 vs. 4.5 minutes, p < 0.001), enjoyed the activity less (p < 0.001) and did worse on future non-impossible problem sets (p…you get the picture). This was repeated in a bunch of subsequent studies by the same team among white students, black students, Hispanic students…you probably still get the picture.

Scott could make those criticisms because Dweck described her experiment in detail. If she’d said “we encouraged some kids and discouraged others”, there would be a lot more ambiguity.

Meanwhile, I want to criticize her for lying to children. Messing up children’s feedback system creates the dependencies on adult authorities that lead to problems later in life. This is extremely bad even if it produces short-term improvements (which it doesn’t). But I can only do this with confidence because she specified the intervention.

The Fate of Rome

This one is more overconfident than false. The Fate of Rome laid out very clearly how its author was using new tools for recovering meteorological data to determine the weather 2000 years ago, and using that to analyze the Roman empire. Using this new data, it concludes that the peak of Rome was at least partially caused by a prolonged period of unusually good farming weather in the Mediterranean, and that the collapse started or was worsened when the weather began to regress to the mean.

I looked into the archeometeorology techniques and determined that they, in my judgement, had wider confidence intervals than the book indicated, which undercut the causality claims. I wish the book had been more cautious with its evidence, but I really appreciate that they laid out their reasoning so clearly, which made it really easy to look up points I might disagree with them on.

False and Epistemically Illegible

Public Health and Airborne Pathogen Transmission

I don’t know exactly what the CDC’s or WHO’s current stance is on breathing-based transmission of covid, and I don’t care, because they were so wrong for so long in such illegible ways. 

When covid started, the CDC and WHO’s story was that it couldn’t be “airborne”, because the viral particle was > 5 microns. That phrasing was already anti-legible for material aimed at the general public, because airborne has a noticeably different definition in virology (“can persist in the air indefinitely”) than it does in popular use (“I can catch this through breathing”). But worse than that, they never provided any justification for the claim. This was reasonable for posters, but not everything was so space-constrained, and when I looked in February 2021 I could not figure out where the belief that airborne transmission was rare was coming from. Some researcher eventually spent dozens to hundreds of hours on this and determined the 5 micron number probably came from studies of tuberculosis, which for various reasons needs to get deeper into the lungs than most pathogens and thus has stronger size constraints. If the CDC had pointed to their sources from the start, we could have determined the 5 micron limit was bullshit much more easily (the fact that many relevant people accepted it without that proof is a separate issue).

When I wrote up the Carol Dweck example, it was easy. I’m really confident in what Carol Dweck believed at the time of writing Mindset, so it’s really easy to describe why I disagree. Writing this section on the CDC was harder, because I cannot remember exactly what the CDC said and when they said it; a lot of the message lived in implications; their statements from early 2020 are now memory holed and while I’m sure I could find them on archive.org, it’s not really going to quiet the nagging fear that someone in the comments is going to pull up a different thing they said somewhere else that doesn’t say exactly what I claimed they said, or that I view as of a piece with what I cited but both statements are fuzzy enough that it would be a lot of work to explain why I think the differences are immaterial….

That fear and difficulty in describing someone’s beliefs is the hallmark of epistemic illegibility. The wider the confidence interval on what someone is claiming, the more work I have to do to question it.

And More…

The above was an unusually legible case of illegibility. Mostly, illegible and false arguments don’t feel like that. They just feel frustrating and bad, and like the other person is wrong but it’s too much work to demonstrate how. This is inconveniently similar to the feeling when the other person is right but you don’t want to admit it. I’m going to gesture some more at illegibility here, but it’s inherently an illegible concept, so there will be genuinely legible (to someone) works that resemble these points, and illegible works that don’t.

Marks of probable illegibility:

  • The person counters every objection raised, but the counters aren’t logically consistent with each other. 
  • You can’t nail down exactly what the person actually believes. This doesn’t mean they’re uncertain – saying “I think this effect is somewhere between 0.1x and 10000x” is very legible, and sometimes the best you can do given the data. It’s more that they imply a narrow confidence band, but the value that band surrounds moves depending on the subargument. Or they agree they’re being vague but they move forward in the argument as if they were specific. 
  • You feel like you understand the argument and excitedly tell your friends. When they ask obvious questions you have no answer or explanation. 

A good example of illegibly bad arguments that are specifically trying to ape legibility is a certain subset of alt-medicine advertisements. They start out very specific, with things like “there are 9804538905 neurons in your brain carrying 38923098 neurotransmitters”, with rigorous citations demonstrating those numbers. Then they introduce their treatment in a way that very strongly implies it works with those 38923098 transmitters, but not, like, what it does to them or why we would expect that to have a particular effect. Then they wrap it up with some vague claims about wellness, so you’re left with the feeling you’ll definitely feel better if you take their pill, but if you complain about any particular problem it did not fix, they have plausible deniability.

[Unfortunately the FDA’s rules around labeling encourage this illegibility even for products that have good arguments and evidence for efficacy on specific problems, so the fact that a product does this isn’t conclusive evidence it’s useless.]

Bonus Example: Against The Grain

The concept of epistemic legibility was in large part inspired by my first attempt at James C. Scott’s Against the Grain (if that name seems familiar: Scott also coined “legibility” in the sense in which I am using it), whose thesis is that key properties of grains (as opposed to other domesticates) enabled early states. For complicated reasons I read more of AtG without epistemic checking than I usually would, and then checks were delayed indefinitely, and then covid hit, and then my freelancing business really took off… the point is, when I read Against the Grain in late 2019, it felt like it was going to be the easiest epistemic spot check I’d ever done. Scott was so cooperative in labeling his sources, claims, and logical conclusions. But when I finally sat down to check his work, I found serious illegibilities.

I did the spot check over Christmas this year (which required restarting the book). It was maybe 95% as good as I remembered, which is extremely high. At chapter 4 (which is halfway through the book, due to the preface and introduction), I felt kinda overloaded and started to spot check some claims (mostly factual – the logical ones all seemed to check out as I read them). A little resentfully, I checked the book’s timeline of key dates.

This should have been completely unnecessary: Scott is a decent writer and scientist who was not going to screw up basic dates. I even split the claims section of the draft into two sections, “Boring” and “Interesting”, because I obviously wasn’t going to come up with anything checking names and dates, and I wanted that part to be easy to skip.

I worked from the bottom. At first, it was a little more useful than I expected – a major new interpretation of the data came out the same year the book was published, so Scott’s timing on anatomically modern humans was out of date, but not in a way that reflected poorly on him.

Finally I worked my way up to “first walled, territorial state”. Not thinking super hard, I googled “first walled city”, and got a date 3000 years before the one Scott cites. Not a big deal; he specified state, not walls. What can I google to find that out? “Earliest state”, obviously, and the first Google hit does match Scott’s timing, but… what made something a state, and how can we assess those traits from archeological records? I checked, and nowhere in the preface, introduction, or first three chapters was “state” defined. No work can define every term it uses, but this is a pretty important one for a book whose full title is Against the Grain: A Deep History of the Earliest States.

You might wonder if “state” had a widespread definition such that it didn’t need to be defined. I think this is not the case, for a few reasons. First, Against The Grain is aimed at a mainstream audience, and that requires defining terms even if they’re commonly known by experts. Second, even if a reader knew the common definition of what made a state, how you determine whether something was a state or merely a city from archeological records is crucial for understanding the inner gears of the book’s thesis. Third, when Scott finally gives a definition, it’s not the same as the one on Wikipedia.

[longer explanation] Among these characteristics, I propose to privilege those that point to territoriality and a specialized state apparatus: walls, tax collection, and officials.

Against the Grain

States are minimally defined by anthropologist David S. Sandeford as socially stratified and bureaucratically governed societies with at least four levels of settlement hierarchy (e.g., a large capital, cities, villages, and hamlets)

Wikipedia (as of 2021-12-26)

These aren’t incompatible, but they’re very far from isomorphic. I expect that even though there’s a fairly well accepted definition of state in the relevant field(s), there are disputed edges that matter very much for this exact discussion, in which Scott views himself as pushing back against the commonly accepted narrative. 

To be fair, the definition of state was not that relevant to chapters 1-3, which focus on pre-state farming. Unless, you know, your definition of “state” differs sufficiently from his. 

Against The Grain was indeed very legible in other ways, but it loses basically all of its accrued legibility points and more for not giving even a cursory definition of a crucial term in the introduction, and for doing an insufficient job halfway through the book.

This doesn’t mean the book is useless, but it does mean it was going to be more work to extract value from than I felt like putting in on this particular topic.

Why is this Important?

First of all, it’s costing me time.

I work really hard to believe true things and disbelieve false things, and people who argue illegibly make that harder, especially when people I respect treat arguments as more proven than their level of legibility allows them to be. I expect having a handle with which to say “no I don’t have a concise argument about why this work is wrong, and that’s a fact about the work” to be very useful.

More generally, I think there’s a range of acceptable legibility levels for a given goal, but we should react differently based on which legibility level the author chose, and that arguments will be more productive if everyone involved agrees on both the legibility level and on the proper response to a given legibility level. One rule I have is that it’s fine to declare something a butterfly idea and thus off limits to sharp criticism, but that inherently limits the calls to action you can make based on that idea. 

Eventually I hope people will develop some general consensus around the rights and responsibilities of a given level of legibility, and that this will make arguments easier and more productive. Establishing those rules is well beyond the scope of this post. 

Legibility vs Inferential Distance

You can’t explain everything to everyone all of the time. Some people are not going to have the background knowledge to understand a particular essay of yours. In cases like this, legibility is defined as “the reader walks away with the understanding that they didn’t understand your argument”. Illegibility in this case is when they erroneously think they understand your argument. In programming terms, it’s the difference between a failed function call returning a useful error message (legible), versus failing silently (illegible).  
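
To make the programming analogy concrete, here’s a minimal sketch in Python (the function names and behavior are mine, purely illustrative, not from any source):

    def parse_year_legible(text: str) -> int:
        """Legible failure: the caller learns that the call failed, and why."""
        if not text.strip().isdigit():
            raise ValueError(f"expected a year in digits, got {text!r}")
        return int(text)

    def parse_year_illegible(text: str) -> int:
        """Illegible failure: the caller gets a plausible-looking wrong answer."""
        try:
            return int(text)
        except ValueError:
            return 0  # no error, no message; downstream code happily uses 0

The legible version gives you something to argue with – you can see exactly what it rejects and why. The illegible version just sits there returning values of unknown correctness.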

A particularly dangerous way this can occur is when you’re using terms of art (meaning: words or phrases that have very specific meanings within a field) that are also common English words. You don’t want someone thinking you’re dismissing a medical miracle because you called it statistically insignificant, or invalidating the concept of thought work because it doesn’t apply force to move an object.

Cruelly, misunderstanding becomes more likely the more similar the technical definition is to the English definition. I watched a friend use the term “common knowledge” to mean “everyone knows that everyone knows, and everyone knows that everyone knows… and that metaknowledge enables actions that wouldn’t be possible if it were merely true that everyone knew and thought they were the only one, and those additional possible actions are extremely relevant to our current conversation” to another friend who thought “common knowledge” meant “knowledge that is common”, and had I not intervened the ensuing conversation would have been useless at best.

Costs of Legibility

The obvious ones are time and mental effort, and those should not be discounted. Given a finite amount of time, legibility on one work trades off against another piece being produced at all, and that may be the wrong call.

A second is that legibility can make things really dry. Legibility often means precision, and precision is boring, especially relative to work optimized to be emotionally activating. 

Beyond that, legibility is not always desirable. For example, unilateral legibility in an adversarial environment makes you vulnerable, as you’re giving people the keys to the kingdom of “effective lies to tell you”. 

Lastly, premature epistemic legibility kills butterfly ideas, which are beautiful and precious and need to be defended until they can evolve combat skills.

How to be Legible

This could easily be multiple posts; I’m including a how-to section here more to help convey the concept of epistemic legibility than to write a comprehensive guide to achieving it. The list is not complete, and items on it can be faked. I think a lot of legibility is downstream of something harder to describe. Nonetheless, here are a few ways to make yourself more legible, when that is your goal.

  • Make it clear what you actually believe.
    • Watch out for implicit quantitative estimates (“probably”, “a lot”, “not very much”) and make them explicit, even if you have a very wide confidence interval. The goals here are twofold: the first is to make your thought process explicit to you. The second is to avoid confusion – people can mean different things by “many”, and I’ve seen some very long arguments suddenly resolve when both sides gave actual numbers.
  • Make clear the evidence you are basing your beliefs on.
    • This need not mean “scientific fact” or “RCT”. It could be “I experienced this a bunch in my life” or “gut feeling” or “someone I really trust told me so”. Those are all valid reasons to believe things. You just need to label them.
  • Make that evidence easy to verify.
    • More accessible sources are better.
      • Try to avoid paywalls and $900 books with no digital versions.
      • If it’s a large work, use page numbers or timestamps to point to the specific claim, removing the burden of reading an entire book to check your work (but if your claim rests on a large part of the work, better to say that than to artificially constrict your evidence).
    • One difficulty is when the evidence is in a pattern, and no one has rigorously collated the data that would let you demonstrate it. You can gather the data yourself, but if it takes a lot of time it may not be worth it. 
    • In times past, when I wanted to refer to a belief I had in a blog post but didn’t have a citation for it, I would google the belief and link to the first article that came up. I regret this. Just because an article agrees with me doesn’t mean it’s good, or that its reasoning is my reasoning. So one, I might be passing on a bad argument. Two, I know that, so if someone discredits the linked article it doesn’t necessarily change my mind, or even create in me a feeling of obligation to investigate. I now view it as more honest to say “I believe this but only vaguely remember the reasons why”, and if it ends up being a point of contention I can hash it out later.
  • Make clear the logical steps between the evidence and your final conclusion.
  • Use examples. Like, so many more examples than you think. Almost everything could benefit from more examples, especially if you make it clear when they’re skippable so people who have grokked the concept can move on.
    • It’s helpful to make clear when an example is evidence vs when it’s a clarification of your beliefs. The difference is whether you’d change your mind if the point were proven false: if yes, it’s evidence. If you’d say “okay fine, but there are a million other cases where the principle holds”, it’s an example. One of the mistakes I made with early epistemic spot checks was putting too much emphasis on disproving examples that weren’t actually evidence.
  • Decide on an audience and tailor your vocabulary to them. 
    • All fields have words that mean something different in the field than in general conversation, like “work”, “airborne”, and “significant”. If you’re writing within the field, using those terms helps with legibility by conveying a specific idea very quickly. If you’re communicating outside the field, using such terms without definition hinders legibility, as laypeople misapply their general knowledge of the English language to your term of art and predictably get it wrong. You can help on the margins by defining the term in your text, but I consider some uses of this iffy.
      • The closer the technical definition of a term is to its common usage, the more likely this is to be a problem because it makes it much easier for the reader to think they understand your meaning when they don’t.
    • At first I wanted to yell at people who use terms of art in work aimed at the general population, but sometimes it’s unintentional, and sometimes it’s a domain expert who’s bad at public speaking and has been unexpectedly thrust onto a larger stage, and we could use more of the latter, so I don’t want to punish people too much here. But if you’re, say, a journalist who writes a general populace book but uses an academic term of art in a way that will predictably be misinterpreted, you have no such excuse and will go to legibility jail. 
    • A skill really good interviewers bring to the table is recognizing terms of art that are liable to confuse people and prompting domain experts to explain them.
  • Write things down, or at least write down your sources. I realize this is partially generational and Gen Z is more likely to find audio/video more accessible than written work, and accessibility is part of legibility. But if you’re relying on a large evidence base it’s very disruptive to include it in audio and very illegible to leave it out entirely, so write it down.
  • Follow all the rules of normal readability – grammar, paragraph breaks, no run-on sentences, etc.

A related but distinct skill is making your own thought process legible. John Wentworth describes that here.

Synthesis

“This isn’t very epistemically legible to me” is a valid description (when true), and a valid reason not to engage. It is not automatically a criticism.

“This idea is in its butterfly stage”, “I’m prioritizing other virtues” or “this wasn’t aimed at you” are all valid defenses against accusations of illegibility as a criticism (when true), but do not render the idea more legible.

“This call to action isn’t sufficiently epistemically legible to the people it’s aimed at” is an extremely valid criticism (when true), and we should be making it more often.

I apologize to Carol Dweck for 70% of the vigor of my criticism of her work; she deserves more credit than I gave her for making it so easy to do that. I still think she’s wrong, though.

Epilogue: Developing a Standard for Legibility

As mentioned above, I think the major value add from the concept of legibility is that it lets us talk about whether a given work is sufficiently legible for its goal. To do this, we need to have some common standards for how much legibility a given goal demands. My thoughts on this are much less developed and by definition common standards need to be developed by the community that holds them, not imposed by a random blogger, so I’ll save my ideas for a different post. 

Epilogue 2: Epistemic Cooperation

Epistemic legibility is part of a broader set of skills/traits I want to call epistemic cooperation. Unfortunately, legibility is the only one I have a really firm handle on right now (to the point that I originally conflated the concepts, until a few conversations highlighted the distinction – thanks, friends!). I think epistemic cooperation, in the sense of “makes it easy for us to work together to figure out the truth”, is a useful concept in its own right, and I hope to write more about it as I get additional handles. In the meantime, there are a few things I want to highlight as increasing or signalling cooperation in general but not legibility in particular:

  • Highlight ways your evidence is weak, related things you don’t believe, etc.
  • Volunteer biases you might have.
  • Provide reasons people might disagree with you.
  • Don’t emotionally charge an argument beyond what’s inherent in the topic, but don’t suppress emotion below what’s inherent in the topic either.
  • Don’t tie up brain space with data that doesn’t matter.

Thanks to Ray Arnold, John Salvatier, John Wentworth, and Matthew Graves for discussion on this post. 

Butterfly Ideas

Or “How I got my hyperanalytical friends to chill out and vibe on ideas for 5 minutes before testing them to destruction”

Sometimes talking with my friends is like intellectual combat, which is great. I am glad I have such strong cognitive warriors on my side. But not all ideas are ready for intellectual combat. If I don’t get my friends on board with this, some of them will crush an idea before it gets a chance to develop, which feels awful and can kill off promising avenues of investigation. It’s like showing a beautiful, fragile butterfly to your friend to demonstrate the power of flight, only to have them grab it and crush it in their hands, then point to the mangled corpse as proof that butterflies not only don’t fly, but can’t fly – look how busted their wings are.

You know who you are

When I’m stuck in a conversation like that, it has been really helpful to explicitly label things as butterfly ideas. This has two purposes. First, it’s a shorthand for labeling what I want (nurturance and encouragement). Second, it explicitly labels the idea as not ready for prime time in ways that make it less threatening to my friends. They can support the exploration of my idea without worrying that support of exploration conveys agreement, or agreement conveys a commitment to act.

This is important because very few ideas start out ready for the rigors of combat. If they’re not given a sheltered period, they will die before they become useful. This cuts us off from a lot of goodness in the world. Examples:

  • A start-up I used to work for had a keyword that meant “I have a vague worried feeling I want to discuss without justifying”. This let people bring up concerns before they had an ironclad case for them, and made statements that could otherwise have felt like intense criticism feel more like information sharing (they’re not asserting this will definitely fail, they’re asserting they have a feeling that might lead to some questions). This in turn meant that problems got brought up and addressed earlier, including problems in the classes “this is definitely gonna fail and we need to make major changes” and “this is an excellent idea but Bob is missing the information that would help him understand why”.
    • This keyword was “FUD (fear, uncertainty, doubt)”. It is used in exactly the opposite way in cryptocurrency circles, where it means “you are trying to increase our anxiety with unfounded concerns, and that’s bad”. Words are tricky.
  • Power Buys You Distance From The Crime started out as a much less defensible seed of an idea with a much worse explanation. I know that had I talked about it in public it would have caused a bunch of unproductive yelling that made it harder to think, because I did and it did (but later, when it was ready, intellectual combat with John Wentworth improved the idea further).
  • The entire genre of “Here’s a cool new emotional tool I’m exploring”
  • The entire genre of “I’m having a feeling about a thing and I don’t know why yet”

I’ve been on the butterfly-crushing end of this myself – I’m thinking of a particular case last year where my friend brought up an idea that, if true, would require costly action on my part. I started arguing with the idea, and they snapped at me to stop ruining their dreams. I chilled out, and we had a long discussion about their goals, how they interpreted some evidence, why they thought a particular action might further said goals, etc.

A week later all of my objections to the specific idea were substantiated and we agreed not to do the thing – but thanks to the conversation we had in the meantime, I have a better understanding of them and of what kinds of things would appeal to them in the future. That was really valuable to me, and I wouldn’t have learned all that if I’d crushed the butterfly in the beginning.

Notably, checking out that idea was fairly expensive, and only worth it because this was an extremely close friend (which both made the knowledge of them more valuable, and increased the payoff to helping them if they’d been right). If they had been any less close, I would have said “good luck with that” and gone about my day, and that would have been a perfectly virtuous reaction. 

I almost never discuss butterfly ideas on the public internet, or even in 1:many channels. Even when people don’t actively antagonize them, the environment of Facebook or even large group chats means that people often read with half their brain and respond to a simplified version of what I said. For a class of ideas that live and die by context and nuance and pre-verbal intuitions, this is crushing. So what I write in public ends up being on the very defensible end of the things I think. This is a little bit of a shame, because the returns to finding new friends to study your particular butterflies with are so high, but c’est la vie.

This can play out a few ways in practice. Sometimes someone will say “this is a butterfly idea” before they start talking. Sometimes when someone is being inappropriately aggressive towards an idea the other person will snap “will you please stop crushing my butterflies!” and the other will get it. Sometimes someone will overstep, read the other’s facial expression, and say “oh, that was a butterfly, wasn’t it?”. All of these are marked improvements over what came before, and have led to more productive discussions with less emotional pain on both sides.

A Quick Look At 20% Time

I was approached by a client to research the concept of 20% time for engineers, and they graciously agreed to let me share my results. Because this work was tailored to the needs of a specific client, it may have gaps or assumptions that make it a bad 101 post, but I am publishing it in the expectation that it is more useful than not publishing at all.

Side project time, popularized as 20% time at Google, is a policy that allows employees to spend a set percentage of their time on a project of their choice, rather than one directed by management. In practice this can mean a lot of different things, ranging from “spend 20% of your time on whatever you want” to “sure, spend all the free time you want generating more IP for us, as long as your main project is completely unaffected” (often referred to as 120% time) to “theoretically you’re free to do whatever, but we’ve imposed so many restrictions that this means nothing”. I did a 4-hour survey to get a sense of what implementations were available and how they felt for workers.

A frustration here is that almost all of what I could find via Google searches were puff-pieces, anti-puff-pieces, and employees complaining on social media (and one academic article). The single best article I found came not through a Google search, but because I played D&D with the author 15 years ago and she saw me talking about this on Facebook. She can’t be the only one writing about 20% time in a thoughtful way and I’m mad that that writing has been crowded out by work that is, at best, repetitive, and at worst actively misleading.

There are enough anecdotal reports that I believe 20% time exists and is used to good effect by some employees at some companies (including Google) some of the time. The dearth of easily findable information on specific implementations, managerial approaches, trade-offs, etc, makes me downgrade my estimate of how often that happens, vs 20% time being a legible signal of an underlying attitude towards autonomy, or a dubious recruitment tool. I see a real market gap for someone to explain how to do 20% time well at companies of different sizes and product types.

But in the meantime, here’s the summary I gave my client. Reminder: this was originally intended for a high-context conversation with someone who was paying me by the hour, and as such it is choppier, less nuanced, and differently emphasized than would be ideal for a public blog post.

My full notes are available here.

  • To the extent it’s measured, utilization appears to be low, so the policy doesn’t cost very much (see the sketch after this list for how utilization converts into a share of total employee time).
    • In 2015, a Google HR exec estimated utilization at 10% (meaning it took 2% of all employees’ time). 
    • In 2009, 12 months after Atlassian introduced 20% time, recorded utilization was at 5% (meaning employees were measured to spend 1.1% of their time on it) and estimated actual utilization was <=15% (Notably, nobody complains that Atlassian 20% is fake, and I confirmed with a recently departed employee that it was still around as of 2020).
  • Interaction with management and evaluation is key. A good compromise is to let people spend up to N hours on a project, and require a check-in with management beyond that. 
    • Googlers consistently (although not universally) complained on social media that even when 20% time was officially approved, you’d be a fool to use it if you wanted a promotion or raises. 
    • However a manager at a less famous company indicated this hadn’t been a problem for them, and that people who approached perf the way everyone does at Google would be doomed anyway. So it looks like you can get out of this with culture.
    • An approval process is the kiss of death for a feeling of autonomy, but letting employees work on garbage for 6 months and then holding it against them at review time hurts too. 
    • Atlassian requires no approval to start, 3 uninvolved colleagues to vouch for a project to go beyond 5 days, and founder approval at 10 days. This seems to be working okay for them (but see the “costs” section below).
  • Costs of 20% time:
    • Time cost appears to be quite low (<5% of employee time, some of which couldn’t have been spent on core work anyway)
    • Morale effects can backfire: sometimes devs make tools or projects that are genuinely useful, but not useful enough to justify expanding or sometimes even maintaining them. This leads either to telling developers they must give up on a project they value and enjoyed (bad for their morale) or to an abundance of tools that developers value but are too buggy to really rely on (bad for other people’s morale). This was specifically called out as a problem at Atlassian.
    • Employees on small teams are less likely to feel able to take 20% time, because they see the burden of core work shifting to their co-workers. But being on a small team already increases autonomy, so that may not matter.
  • Benefits of 20% time:
    • New products. This appears to work well for companies that make the kind of products software developers are naturally interested in, but not otherwise.
    • The gain in autonomy generally causes the improvements in morale and thus productivity that you’d expect (unless it backfires), but no one has quantified them.
    • Builds slack into the dev pipeline, such that emergencies can be handled without affecting customers.
    • Lets employees try out new teams before jumping ship entirely.
    • Builds cross-team connections that pay off in a number of ways, including testing new teams.
    • Gives developers a valve to pursue bug fixes and feature requests that their boss rejected from the official roadmap.
  • There are many things to do with 20% time besides new products.
    • Small internal tools, QOL improvements, etc (but see “costs”).
    • Learning, which can mean classes, playing with new tools, etc.
    • Decreasing technical debt.
    • Non-technical projects, e.g. charity drives.
  • Other notes:
    • One person suggested 20% time worked better at Google when it hired dramatically overqualified weirdos to work on mundane tech, and as they started hiring people more suited to the task with less burning desire to be working on something else, utilization and results decreased. 
    • 20% or even 120% time has outsized returns for industries that have very high capital costs but minimal marginal costs, such that employees couldn’t do them at home. This was a big deal at 3M (a chemical company) and, for the right kind of nerd, big data.
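
As flagged above, here’s the utilization arithmetic as a minimal sketch (my own framing and function name; “utilization” means the fraction of the nominal 20% allowance that employees actually use):

    def share_of_total_time(allowance: float, utilization: float) -> float:
        # Converts "how much of the side-project allowance gets used" into
        # "what fraction of all employee time the policy actually costs".
        return allowance * utilization

    # Google, 2015: a 20% allowance at 10% utilization -> 2% of all employee time.
    print(share_of_total_time(0.20, 0.10))  # 0.02

At those numbers, even a nominally generous policy costs very little unless utilization rises dramatically.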

Thanks to the anonymous client for commissioning this research and allowing me to share it, and my Patreon patrons for funding my writing it up for public consumption.

Internet Literacy Atrophy

It’s the holidays, which means it’s also “teach technology to your elderly relatives” season. Most of my elderly relatives are pretty smart, and were technically advanced in their day. Some were engineers or coders back when that was rare. When I was a kid they were often early adopters of tech. Nonetheless, they are now noticeably worse at technology than my friends’ 3-year-old. That kid figured out how to take selfie videos on my phone after watching me do it once, and I wasn’t even deliberately demonstrating.

Meanwhile, my aunt (who was the first girl in her high school to be allowed into technical classes) got confused when attempting to use an HBOMax account I’d mostly already configured for her (I think she got confused by the new profile taste poll but I wasn’t there so I’ll never be sure). She pays a huge fee to use Go Go Grandparent instead of getting a smartphone and using Uber directly. I got excited when an uncle seemed to understand YouTube, until it was revealed that he didn’t know about channels and viewed the subscribe button as a probable trap. And of course, there was my time teaching my PhD statistician father how to use Google Sheets, which required learning a bunch of prerequisite skills he’d never needed before and I wouldn’t have had the patience to teach if it hadn’t benefited me directly. 

[A friend at a party claimed Apple did a poll on this and found the subscribe button to be a common area of confusion for boomers, to the point they were thinking of changing the “subscribe” button to “follow”. And honestly, given how coy Substack is about what exactly I’m subscribing to and how much it costs, this isn’t unreasonable.]

The problem isn’t that my relatives were never competent with technology, because some of them very much were at one point. I don’t think it’s a general loss of intelligence either, because they’re still very smart in other ways. Also they all seem to have kept up with shopping websites just fine. But actions I view as atomic clearly aren’t for them.

Meanwhile, I’m aging out of being the cool young demographic marketers crave. New apps appeal to me less and less often. Sometimes something does look fun, like video editing, but the learning curve is so steep, and I don’t need to make an Eye of the Tiger-style training montage of my friends’ baby learning to buckle his car seat that badly, so I pass it by and focus on the millions of things I want to do that don’t require learning a new technical skill.

Then I started complaining about YouTube voice, and could hear echoes of my dad in 2002 complaining about the fast cuts in the movie Chicago.

Bonus points: I watched it just now and found it painfully slow.

I have a hypothesis that I’m staring down the path my boomer relatives took. New technology kept not being worth it to them, so they never put in the work to learn it, and every time they fell a little further behind in the language of the internet – UI conventions, but also things like the interpersonal grammar of social media – which made the next new thing that much harder to learn. Eventually, learning new tech felt insurmountable to them no matter how big the potential payoff. 

I have two lessons from this. One is that I should be more willing to put in the time to learn new tech on the margin than I currently am, even if the use case doesn’t justify the time; continued exposure to new conventions is worth it on its own. I have several Millennial friends who are on TikTok specifically to keep up with the youths; alas, this does not fit in with my current quest for Quiet.

I’ve already made substantial concessions to the shift from text to voice, consuming many more podcasts and videos than I used to and even appearing on a few, but I think I need to get over my dislike of recordings of my own voice to the point that I can listen to them. I made that toddler training montage video even though iMovie is a piece of shit and its UI should die in a fire. This was both an opportunity to learn new skills and a way to manufacture inspiration for when things are hard.

Second: there’s a YouTube channel called “Dad, How Do I?” that teaches basic householding skills like changing a tire, tying a tie, or making macaroni and cheese. We desperately need the equivalent for boomers, in a form that’s accessible to them (maybe a simplified app? Or even start with a static website). “Child, how do I…?” could cover watching individual videos on YouTube, the concept of channels, not ending every text message with “…”, Audible, etc. Things younger people take for granted.  Advanced lessons could cover Bluetooth headphones and choosing your own electronics. I did some quick math and this is easily a $500,000/year business.

[To answer the obvious question: $500k/year is more than I make doing freelance research, but not enough more to cover the difference in impact and enjoyment. But if you love teaching or even just want to defray the cost of video equipment for your true passion, I think this is promising.]

My hope is that if we all work together to learn things, fewer people will be left stranded without access to technical tools, and also that YouTube voice will die out before it reaches something I care about.

Dear Self; We Need To Talk About Social Media

Last year I discovered, much to my chagrin, that always-on internet socializing was costly for me. This was inconvenient both because I’d spent rather a lot of time singing the praises of social media and instant messaging, and because we were in the middle of a global pandemic that had made online socializing an almost physical necessity. I made the decision at the time to put off changing my social media diet, and that was correct. But now there is in-person socializing again, and I’m changing how I use social media and messaging. I wanted to talk about this process and how great it was for me, but kept being nagged by the thought that the internet was full of essays about how the internet is bad, all of which I ignored or actively fought with, so what was going to make mine so special? 

I decided to use the one thing I had that none of the other writers did: a detailed understanding of my past self. So I wrote a letter to past me, explaining how social media was costlier than she knew (even though she was right about all of the benefits), and how she could test that for herself to make a more informed decision. To help as many Elizabeths as possible, I tried to make the letter cover a wide range in time, although in practice it’s mostly focused on post-smart-phone life.

Dear Past Elizabeth,

I know you have read a lot of things calling social media bad. Your reasons for disagreeing with them are correct: social media has been an incredible gift to you, you have dodged many of the problems they’re describing, and you’re right to value it highly. You’re also right that many of the people bragging about how hard they are to communicate with are anti-socially shifting the burden of communication to other people.

But.

Social media (and always-on instant messaging, which is a different, mostly worse, problem) has some costs you’re not currently tracking. I would like to help you understand those costs, so you can make different choices on the margin that leave you happier while preserving the benefits you get from social media, not all of which you’ve even experienced yet (is it 2015 yet? Approximately every job you get from this point on will have your blog as a partial cause. After 2017 you won’t even have interviews, people will just say “I read your blog”).

To be more specific: you have indeed curated your feed such that Facebook is not making you angry on purpose. You are not ruining relationships getting in public fights. You are not even ruining your mood from seeing dumb stuff very often. Much of what you see is genuinely interesting and genuinely connective, and that’s great. The people you connect with are indeed great, and you are successfully transitioning online connections into offline. I’m not asking you to give that up, just to track the costs associated with the gains, and see what you can do on the margins to get more benefits at less cost. To that end I’m going to give you a model of why internet socializing is costly, and some tools to track those costs.

I’m not sure how far back this letter is going, so I’m going to try to address a wide range of ways you might be right now. Also, if it’s late 2019 or early 2020, you can just put this letter on a shelf for a bit. If it’s mid 2020 and you’re confused by this, congratulations on being in the better timeline.

Currently you’re calculating your costs and benefits by measuring the difference in your mood from the time you receive a notification to the time you act on it. It’s true that that change is on average positive, and sometimes exceedingly so. But it ignores the change from the moment before you received the notification to the moment after. Notifications are pretty disruptive to deep thoughts, and you pay that cost before you even notice. But momentary disruptions aren’t even the whole cost, because the knowledge that interruptions could come at any time will change your mental state.

It’s as if you had a system that delivered electric shocks to notify you that food was newly available. You are right that you need food to live, and a system that delivers it to you is good. But electric shocks are still unpleasant, and fear of electric shocks will limit the states you will allow your brain to get into. You can’t write off the costs of electric shocks just because food is good, and because most criticisms of the system focus on the food being bad. I know you’re on board with the general principle behind this analogy, because you already believe it for open offices, and that people who find open offices costless are fooling themselves. I’m so sorry to be the one to tell you that you are exactly the same, only with messaging instead of shared offices.

The easiest way to see this is to get yourself in a state where you can’t be interrupted, and observe your mood then. There is an incredibly beautiful, relaxing state I call Quiet that you are definitely not experiencing often enough. Once you have reached that state, you can observe how your mood changes as you move into a state where you can be interrupted, and again as you are interrupted. 

Noticing these changes and their significance requires a certain minimum level of ability to emotionally introspect. If you don’t have this yet, developing it is your highest priority- not just for concerns around social media, but for your life in general. Building emotional introspection was a very gradual process for me, so it’s hard to give you instructions. In this timeline I had guidance from specific individuals which may not be replicable, but something in the space of somatic experiencing therapy is probably helpful. Waking the Tiger and The Body Keeps the Score are the classically recommended books. They’re pretty focused on trauma, which is not actually the goal here, but oh well. Other people report success doing this with meditation, but it never seemed to work for me.

Once you have that awareness, you want to practice getting in and out of Quiet so you can notice the changes in your feelings. I’ve included a few activities for producing Quiet, just to gesture at the concept, and a longer list at the end of this letter. 

Unless otherwise stated, a given activity needs to be the only thing you are doing, and you need to have disabled all potential interruptions, including self-inflicted interruptions like Facebook. For tasks that use electronics, this means either putting them in airplane mode or having a dedicated device that doesn’t get notifications. 

  • Put your phone on airplane mode and connect it to a bluetooth keyboard, so you can write without fear of interruption. 
    • Eventually you can buy a thing for this. It’s fine but not amazing. 
  • Learn a physical skill. Drawing on the Right Side of the Brain is good for absorption, and once you achieve a minimum skill level you can watch tutorials on YouTube as long as you turn off every source of interruption.
    • Some of the frustration of drawing can be alleviated by getting an electronic device for drawing. I looked into this, and an iPad just is the best choice. You might want to have one of these ready to go by February 2020.
  • Read a book you’re really into (Kindle or physical). 
    • FYI, you should reread things more often. The hit rate on new books is quite low and some of your favorites are really good.
  • If it’s an activity that leaves your hands open and you absolutely need something to do with your hands you can add in jigsaw puzzles, coloring, cardio exercise, or low-end cleaning work. 
    • Exercise in general is pretty good for Quiet, and you can even put on some entertainment, but it needs to be a single work you commit to, not all-purpose access to your phone.

After you absorb yourself in one of these for a while (20-90 minutes), you’ll be in a very different state. Calmer, more focused, more serene. The volume on the world will be turned down. You’ll feel more yourself and less mixed with the rest of the world. Also you’ll crave Facebook like a heroin junkie. Give in to that. You just gave a weak muscle an intense workout and it’s appropriate to let it rest. As you do that, pay attention to which parts of you feel what ways. Something will be gained by using Facebook, but also something will be lost, and this is a time to learn those patterns so you can optimize your choices in the future. 

My guess is as time goes on you/I will build the muscle and spend more time in Quiet and less in noise. To be honest I haven’t gotten terribly far in that process, but it seems like the kind of thing that happens and I just can’t imagine the correct amount of online socializing for us is zero.

So far what I’ve talked about is mostly the dangers of apps that give notifications: alerts that draw your attention and thus incur a cost even if you dismiss them. You might be thinking “that doesn’t apply to social media, if I keep it closed by default and only look when I feel like it”. First of all, you are wrong. This is because you are not a unified agent: parts of you will want to check FB while other parts are hurt by it, and removing the option to do so will enable the FB-impaired parts to more fully relax (just like it’s easier to relax in an office with a door). But second, even if that weren’t true, social media has some inherent costs even when every individual post is incredibly valuable.

This is hard to describe and I’m mostly hoping you’ll notice it yourself once you pay attention and have something to contrast it with. But to gesture at the problem: every topic switch means booting up a new context, new thoughts, stores of existing information, etc. Social media means doing this once every 4 seconds. You’ve avoided a lot of the classic pitfalls by studiously not reacting when Facebook showed you bad opinions, but by teaching it to only show you interesting things you’ve made the intellectual mosh pit aspect worse. At least Facebook gives you breathers in the form of baby photos: Twitter is nonstop interesting dense things.

Oh yeah, you’re gonna get into Twitter in 2020, and it will be the right decision. Yes, I’m very confident about 2020 in particular.

Anyways, I’m pretty sure the ideal amount of high-stimulus jumping between topics is not zero, but I’ve yet to get low enough to find the optimum. If you achieve Quiet and find yourself craving the stimulation of social media, and it feels good during and after, I think you should trust that. But I don’t think you’re capable of an informed decision on the tradeoff until you get more information.

In addition to the activities mentioned, a few tips and tricks that might make this whole process easier for you:

  • As you scale down your current process, you’ll lose the thing that makes you answer email and texts in a timely manner. Make sure to create a new habit of actually answering emails and texts at a chosen time.
  • You’re gonna worry that making yourself unreachable will make you miss messages that are genuinely urgent and important. There is a phone setting to let through messages from certain people, or from any number that calls twice within 15 minutes. It’s okay to use that. Your friends are not monsters, they will not abuse the privilege.
  • In general, you should be open to having more electronic devices that only do one thing: I know it seems dumb when your phone or laptop can already do the thing, but it really does change how you relate to the activity.
  • I’ve had off and on success with screen bedtime, in which I can stay up as late as I want, but I can’t look at a screen after a certain time. It provides a natural end to the day while respecting energy levels.
    • Kindles are not screens.
    • At some point you’re gonna start requiring podcasts to fall asleep, but you can preserve the spirit of screen bedtime by putting the phone in airplane mode ahead of time.
      • You’re not wrong that some horror podcasts have very soothing narrators you can fall asleep to. But somehow the only periods where I frequently wake up with nightmares are also the periods where I frequently fell asleep to horror podcasts. It’s not 1:1 causality but I do think it’s worse for us.
    • While we’re at it: the point of the things you do in bed and just before bed is to help you fall asleep. Right before sleep is not the all-purpose reading hour. Please pay enough attention to notice that reading deeply upsetting recent-history books in bed disrupts your sleep.
  • Transitioning from noise to Quiet can be hard. You might think to skip the unpleasant transition phase by pursuing Quiet when you first wake up. I have yet to figure out how to pull this off: I’ll lie there half asleep indefinitely before getting the energy to read a book, and audio will put me back to sleep. I have a sneaking suspicion that the disruptive chaotic nature of social media/messaging is also what makes it good for transitioning from half asleep to mostly awake.
  • You are the only one who likes the Zune and the replacement will not be as conducive to unitasking. Unfortunately the realities of hardware support probably mean you can’t dodge this by stocking up ahead of time. I’m sorry, please enjoy the time you have.
  • Don’t go to Netflix or other streaming sites and look for something to entertain you. Maintain a watchlist on another site, and when you’re in the mood for a movie, figure out what kind of thing you’re in the mood for ahead of time and look for something on your list. This will prevent some serendipity, but the world is going to get much better at making things that look like they are for you but never pay off.
  • You’ll definitely enjoy work more if you turn off sources of interruptions.
  • Does that seem infeasible right now? Does it seem like it won’t matter because your co-workers can just find you at the physical workplace you go to most days? I have such good news for you. The concordance between your brain and your work environment is going to get so much better. There will still be tension between “following a single train of thought to the end” and “following up on the multiple paths that train lays down”. I haven’t solved this one yet. But you have no idea how much less bullshit your work life is going to become.

To recap: I am suggesting the following plan:

  1. Try some of the activities on the Quiet list.
  2. If you don’t notice the difference between them and the intellectual mosh pit that is your day, train the ability to notice subtle mood differences, then go back to 1.
  3. Track the change in feeling between Quiet and a return to social internetting. 
  4. Do what feels good from there.

I hope this helps you become happier and more productive at a faster rate than I did,

  -Elizabeth, 2021

PS. please buy bitcoin

More Quiet activities

  • Feldenkrais (and only Feldenkrais. No podcasts, no audiobooks, no TV. Sometimes you like to have close friends in the room while you do this to keep watch for monsters). Your starter resource for this is Guide To Better Movement; after that you can search on YouTube. As a bonus, Feldenkrais is also on the list of things that will help you develop your ability to notice your own mood.
  • Video games work but also require a lot of executive function and that’s your ongoing bottleneck resource so I don’t strongly recommend them. Horror remains an unusually good genre for this, and your algorithm of playing the top 10% of puzzle games works pretty well.
    • Avoid anything that you need to tab out of to look stuff up, which will unfortunately hurt Subnautica, a game otherwise made just for you, significantly.
  • Diary writing.
  • Watch a single episode of a TV show without multitasking. 
    • Horror is especially good for this because the damage done by an interruption is so palpable.
    • I know this is hard because even very good movies can be just not stimulating enough. There’s no fix for that right now because your audio processing is so mediocre, but in a few years that’s gonna fix itself for no obvious reason and you’ll be listening to podcasts at 2x like it’s nothing. Once that happens you can use Video Speed Controller to speed things up. Don’t overuse this, you’ll ruin your goal of creating Quiet if you go too fast, but a 10-20% speed up is often unnoticeable.
    • Remember to either be in airplane mode or use a dedicated device that doesn’t have messaging on it.
  • Horror podcasts are also great, especially The Magnus Archives if that’s around yet.
    • 20-30 minutes is the ideal length to start experiencing Quiet, which makes podcasts better than movies. Also they have a much better ratio of “time to figuring out if it is good” to “time after you know it’s good”.
    • TV horror anthologies meet the time constraint but just seem much worse on average than podcasts. More things to go wrong I guess.

I Don’t Know How To Count That Low

Back when I was at Google we had a phrase, “I don’t know how to count that low”. It was used to dismiss normal-company-sized problems as beneath our dignity to engage with: if you didn’t need 100 database shards scattered around the globe, were you even doing real work? 

It was used as a sign of superiority within Google, but it also pointed at a real problem: I once failed a job interview at a start-up when I wondered out loud whether the DB was small enough to be held in memory, when it was several orders of magnitude below the point where I should even have begun worrying about that. I didn’t know the limit because it had been many years since I’d had a problem that could be solved with a DB small enough to be held in its entirety in memory. And they were right to fail me: the fact that I was good at solving strictly more difficult problems didn’t matter, because I didn’t know how to solve the easier ones they actually had. I could run but not walk, and some problems require walking.

It’s a problem, but it can be a pleasant kind of problem to have, compared to others. Another example: my dad is a Ph.D. statistician who spent most of his life working in SAS, a powerful statistical programming language, and using “spreadsheet statistics” as a slur. When I asked permission to share this anecdote he sent me a list of ways Excel was terrible.

[GIF: Hamilton, “Here’s an itemized list of 30 years of disagreements”]

Then he started consulting for me, who was cruelly unwilling to pay the $9000 license fee for SAS when Google Sheets was totally adequate for the problem (WHO HAS FOOD AT HOME NOW DAD?!?).* 

My dad had to go through a horrible phase of being bad at the worse tool, and found a lot of encouragement when I reframed “I could have done this with one line in SAS and am instead losing to this error-riddled child’s toy” to “I didn’t know how to count that low, but now that it matters I am learning”. And then he tried hard and believed in himself and produced that analysis of that informal covid study that was wonderful statistically and super disappointing materially. And I retrained on smaller numbers and got that job at that start-up.

These are the starkest examples of how I’ve found “I don’t know how to count that low” useful. It reframes particularly undignified problems as signs of your capacity rather than incapacity, without letting you off the hook for solving them. Given how useful it’s been to me and how little I’ve seen of it in the wild, I’d like to offer this frame to others, to see if it’s useful for you as well.

*If any of you are going to bring up R: yes, it’s free, and yes, he has some experience with it, but not enough to be self-sufficient; I knew Sheets better, and I knew it was totally adequate for what we were doing or were likely to do in the future.

Appendix: I know you’re going to ask, so here is his abbreviated list of grievances with Excel. Note that this was Excel in particular; I have no idea if it applies to Google Sheets. I also would allow that this must have been years ago and Excel could have gotten better, except AFAIK they never fixed the problem with reading genes as dates, so they get no benefit of the doubt from me.

I attended a talk by a statistician at Microsoft.  He said that Microsoft had decided that there was no competitive advantage in making Excel statistics better because no statistician used it for serious problems except for data entry, so:

1. he was the only statistician at Microsoft
2. he knew of seven serious statistical problems in Excel, but they wouldn’t give him the money to fix them.
3. Excel’s problems fell into two categories:
3a. terrible numerical analysis: it was widely verified that if you took a set of single-digit numbers and calculated their standard deviation, and then took the same numbers, added a million to each, and recalculated, the standard deviation was often different, when it should be exactly the same (see the sketch after this list).
3b. statistical errors – like not understanding what you’re copying out of a textbook and getting it wrong.
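That numerical-analysis failure is a classic floating-point bug, and easy to reproduce outside Excel. Here’s a minimal Python sketch (my illustration of the failure mode, not Excel’s actual code): the one-pass “textbook” variance formula loses precision when the mean is much larger than the spread. In modern double precision you need a bigger offset than a million before the drift shows, hence the 1e8.

```python
import math

def naive_std(xs):
    # One-pass "textbook" formula: sqrt((sum(x^2) - (sum x)^2 / n) / (n - 1)).
    # Algebraically correct, numerically fragile: sum(x^2) and (sum x)^2 / n
    # are both huge and nearly equal, so their difference is mostly rounding error.
    n = len(xs)
    s = sum(xs)
    ss = sum(x * x for x in xs)
    var = (ss - s * s / n) / (n - 1)
    # Rounding error can even drive the "variance" negative; clamp so sqrt works.
    return math.sqrt(max(var, 0.0))

def stable_std(xs):
    # Two-pass formula: subtract the mean first, then square. Numerically stable.
    n = len(xs)
    m = sum(xs) / n
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))

data = [1.0, 2.0, 3.0, 4.0, 5.0]
shifted = [x + 1e8 for x in data]  # adding a constant should not change the SD

print(naive_std(data), naive_std(shifted))    # ~1.581, then a visibly wrong value
print(stable_std(data), stable_std(shifted))  # ~1.581 both times
```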

Thanks to Ray Arnold and Duncan Sabien for beta-reading, and my dad for agreeing to have his example shared.

Quick Look: Altitude and Child Development

A client came to me to investigate the effect of high altitude on child development and has given me permission to share the results. This post bears the usual marks of preliminary client work: I focused on the aspects of the question they cared about the most, not necessarily my favorite or the most important in general. The investigation stops when the client no longer wants to pay for more, not when I’ve achieved a particular level of certainty I’m satisfied with. Etc. In this particular case they were satisfied with the answer after only a few hours, and I did not pursue it beyond that.

That out of the way: I investigated the impact of altitude on childhood outcomes, focusing on cognition. I ultimately focused mostly on effects visible at birth, because birth weight is such a hard-to-manipulate piece of data. What I found in < 3 hours of research is that altitude has an effect on birth weight that is very noticeable statistically, although the material impact is likely to be very small unless you are living in the Andes.

Children gestated at higher altitudes have lower birth weights

This seems to be generally supported by studies which are unusually rigorous for the field of fetal development. Even better, it’s supported in both South America (where higher altitudes correlate with lower income and lower density, and I suspect very different child-rearing practices) and Colorado (where the income relationship reverses and while I’m sure childhoods still differ somewhat, I suspect less so). The relationship also holds in Austria, which I know less about culturally but did produce the nicest graph.

This is a big deal because until you reach truly ridiculous numbers, higher birth weight is correlated with every good thing, although there’s reason to believe a loss due to high altitude is less bad than a loss from most other causes, which I’ll discuss later.

[Also for any of you wondering if this is caused by a decrease in gestation time: good question, the answer appears to be no.]

Children raised at higher altitudes do worse on developmental tests 

There is a fair amount of data supporting this, and some of it even attempts to control for things like familial wealth, prematurity, etc. I’m not convinced. The effects are modest, and I expect families living at very high altitudes (typically rural) to differ from families at lower altitudes (typically urban) in many ways that cause their children to score differently on tests without a meaningful impact on their lives (and unlike birth weight, I didn’t find studies based in CO, where some trends reverse). Additionally, none of the studies looked specifically at children who were born at a lower altitude and moved, so some of the effects may be left over from the gestational effects discussed earlier.

Hypoxia may not be your only problem

I went into this primed to believe reduced oxygen was the problem. However, there’s additional evidence that UV radiation, which rises with altitude, may also be a concern. UV radiation is also elevated in some areas for reasons unrelated to altitude, and it does seem to correlate with reductions in cognition there.

How much does this matter? (not much)

Based on a very cursory look at graphs on GIS (to be clear: I didn’t even check the papers, and their axes were shoddily labeled), 100 grams of birth weight corresponds to 0.2 IQ points for full term babies.

The studies consistently showed ~0.09 to 0.1 grams lower birth weight per meter of altitude, and the relationship was surprisingly linear; I’m skeptical and expect the reality to be more exponential or S-shaped, but let’s use that rule of thumb for now. 0.1 g/m means gestating in Denver rather than at sea level would shrink your baby by ~170 grams (where 2500g-4500g is considered normal and healthy). If this were identical to other forms of fetal weight loss, which I don’t think it is, it would very roughly correspond to 0.35 IQ points lost.
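Spelling out the arithmetic (my reconstruction; it takes Denver at roughly 1,700 m above sea level, the figure that reproduces the ~170 g, though the city’s official elevation is closer to 1,600 m):

\[
0.1\,\tfrac{\text{g}}{\text{m}} \times 1700\,\text{m} \approx 170\,\text{g},
\qquad
170\,\text{g} \times \frac{0.2\,\text{IQ points}}{100\,\text{g}} \approx 0.34 \text{–} 0.35\,\text{IQ points}
\]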

However, there’s reason to believe high-altitude fetal weight loss is less concerning than other forms. High altitude babies tend to have a higher brain mass percentage and are tall for their weight, suggesting they’ve prioritized growth amidst scarce resources rather than being straight out poisoned. So that small effect is even smaller than it first appears.

There was also evidence out of Austria that higher altitude increased risk of SIDS, but that disappeared when babies slept on their backs, which is standard practice now.

So gestating in Denver is definitely bad then? (No)

There are a billion things influencing gestation and childhood outcomes, and this is looking at exactly one of them, for not very long. If you are making a decision please look at all the relevant factors, and then factor in the streetlight effect that there may be harder to measure things pointing in the other direction. Do not overweight the last thing I happened to read.

In particular, Slime Mold Time Mold has some interesting data (which I haven’t verified, although I’m hoping to at least run an epistemic spot check on the series) that suggests higher altitudes within the US have fewer environmental contaminants, which you would expect to have all sorts of good effects.

Full notes available here.

Thanks to anonymous client for commissioning this research and Miranda Dixon-Luinenburg for copyediting.

Negative Feedback and Simulacra

Part 1: Examples

There’s a thing I want to talk about but it’s pretty nebulous so I’m going to start with examples. Feel free to skip ahead to part 2 if you prefer.

Example 1: Hot sauce

In this r/AmITheAsshole post, a person tries some food their girlfriend cooked, likes it, but tries another bite with hot sauce. Girlfriend says this “…insults her cooking and insinuates that she doesn’t know how to cook”.

As objective people not in this fight, we can notice that her cooking is exactly as good as it is whether or not he adds hot sauce. Adding hot sauce reveals information (maybe about him, maybe about the food), but cannot change the facts on the ground. Yet she is treating him like he retroactively made her cooking worse in a way that somehow reflects on her, or made a deliberate attempt to hurt her.

 

Example 2: Giving a CD back to the library

Back when I would get books on CD I would sometimes forget the last one in my drive or car. Since I didn’t use CDs that often, I would find the last CD sometimes months later. To solve this, I would drop the CD in the library book return slot, which, uh, no longer looks like a good solution to me, in part because of the time I did this in front of a friend and she questioned it. Not rudely or anything, just “are you sure that’s safe? Couldn’t the CD snap if something lands wrong?”. I got pretty angry about this, but couldn’t actually deny she had a point, so settled for thinking that she had violated a friend code by not pretending my action was harmless. I was not dumb enough to say this out loud, but I radiated the vibe and she dropped it.

 

Example 3: Elizabeth fails to fit in at martial arts 

A long time ago I went to a martial arts studio. The general classes (as opposed to specialized classes like grappling) were preceded by an optional 45 minute warm up class. Missing the warm up was fine, even if you took a class before and after. Showing up 10 minutes before the general class and doing your own warm ups on the adjacent mats was fine too. What was not fine was doing the specialized class, doing your own warm ups on adjacent mats for the full 45 minutes while the instructor led regular warm ups, and then rejoining for the general class. That was “very insulting to the instructor”.

This was a problem for me because the regular warm ups hurt, in ways that clearly meant they were bad for me (and this at a place where I regularly let people hit me in the head). Theoretically I could have asked the instructor to give me something different, but that is not free, and the replacements wouldn’t have been any better, which is not surprising because no one there had the slightest qualification to do personal training or physical therapy. So basically the school wanted me to pretend I was in a world where they were competent to create exercise routines, more competent than I was despite their having no feedback from my body, and considered not pretending disrespectful to the person leading warm ups.

Like the hot sauce example, the warm ups were as good as they were regardless of my participation – and they knew that, because they didn’t demand I participate. But me doing my own warm ups broke the illusion of competence they were trying to maintain.

 

Example 4: Imaginary Self-Help Guru

I listened to an interview where the guest was a former self-help guru who had recently shut down his school. Well, I say listened, but I’ve only done the first 25% so far. For that reason this should be viewed less as “this specific real person believes these specific things” and more as “a character Elizabeth made up in her head, inspired by things a real person said”. For the same reason, I won’t be using his name or linking to the podcast.

Anyways, the actual person talked about how being a leader put a target on his back and his followers were never happy.  There are indeed a lot of burdens of leadership that are worthy of empathy, but there was an… entitled… vibe to the complaint. Like his work as a leader gave him a right to a life free of criticism.

If I was going to steelman him, I’d say that there are lots of demands people place on leaders that they shouldn’t, such as “Stop reminding me of my abusive father” or “I’m sad that trade-offs exist, fix it”. But I got a vibe that the imaginary guru was going farther than that: he felt like he was entitled to have his advice work, and people telling him it didn’t was taking that away from him, which made it an attack.

 

Example 5: Do I owe MAPLE space for their response?

A friend of mine (who has some skin in the meditation game) said things I interpreted as them feeling very strongly that:

  1. My post on MAPLE was important and great and should be widely shared.
  2. I owed MAPLE an opportunity to read my post ahead of time and give me a response to publish alongside it (although I could have declined to publish it if I felt it was sufficiently bad).

Their argument, as I understood it at the time, was that even if I linked to a response MAPLE made later, N days worth of people would have read the post and not the response, and that was unfair.

I think this is sometimes correct- I took an example out of this post even though it required substantial rewrites, because I checked in with the people in question, found they had a different view, and that I didn’t feel sure enough of mine to defend it (full disclosure: I also have more social and financial ties to the group in question than I do to MAPLE).

I had in fact already reached out to my original contact there to let him know the post was coming and would be negative, and he passed my comment on to the head of the monastery. I didn’t offer to let him see it or respond, but he had an opportunity to ask (what he did suggest is a post in and of itself). This wasn’t enough for my friend- what if my contact was misrepresenting me to the head, or vice versa? I had an obligation to reach out directly to the head (which I had no way of doing beyond the info@ e-mail on their website) and explicitly offer him a pre-read and to read his response.

[Note: I’m compressing timelines a little. Some of this argument and clarification came in arguments about the principle of the matter after I had already published the post. I did share this with my friend, and changed some things based on their requests. On others I decided to leave it as my impression at the time we argued, on the theory that “if I didn’t understand it after 10 hours of arguing, the chances this correction actually improves my accuracy are slim”. I showed them a near-final draft and they were happy with it]

I thought about this very seriously. I even tentatively agreed (to my friend) that I would do it. But I sat with it for a day, and it just didn’t feel right. What I eventually identified as the problem was this: MAPLE wasn’t going to be appending my criticism to any of their promotional material. I would be shocked if they linked to me at all. And even if they did it wouldn’t be the equivalent, because my friend was insisting that I proactively seek out their response, whereas they had never sought out mine, or to the best of my knowledge any of their critics’. As far as I know they’ve never included anything negative in their public-facing material, despite at least one person making criticism extremely available to them.

If my friend were being consistent (which is not a synonym for “good”) they would insist that MAPLE seek out people’s feedback and post a representative sample somewhere, at a minimum. The good news is: my friend says they’re going to do that next time they’re in touch. What they describe wanting MAPLE to create sounds acceptable to me. Hurray! Balance is restored to The Force! Except… assuming it does happen, why was my post necessary to kickstart this conversation?  My friend could have noticed the absence of critical content on MAPLE’s website at any time. The fact that negative reports trigger a reflex to look for a response and positive self-reports do not is itself a product of treating negative reports as overt antagonism and positive reports as neutral information.

[If MAPLE does link to my experience in a findable way on their website, I will append whatever they want to my post (clearly marked as coming from them). If they share a link on Twitter or something else transient, I will do the same] 

 

Part 2: Simulacrum

My friend Ben Hoffman talks about simulacra a lot, with this rough definition:

1. First, words were used to maintain shared accounting. We described reality intersubjectively in order to build shared maps, the better to navigate our environment. I say that the food source is over there, so that our band can move towards or away from it when situationally appropriate, or so people can make other inferences based on this knowledge.

2. The breakdown of naive intersubjectivity – people start taking the shared map as an object to be manipulated, rather than part of their own subjectivity. For instance, I might say there’s a lion over somewhere where I know there’s food, in order to hoard access to that resource for idiosyncratic advantage. Thus, the map drifts from reality, and we start dissociating from the maps we make.

3. When maps drift far enough from reality, in some cases people aren’t even parsing it as though it had a literal specific objective meaning that grounds out in some verifiable external test outside of social reality. Instead, the map becomes a sort of command language for coordinating actions and feelings. “There’s food over there” is perhaps construed as a bid to move in that direction, and evaluated as though it were that call to action. Any argument for or against the implied call to action is conflated with an argument for or against the proposition literally asserted. This is how arguments become soldiers. Any attempt to simply investigate the literal truth of the proposition is considered at best naive and at worst politically irresponsible.
But since this usage is parasitic on the old map structure that was meant to describe something outside the system of describers, language is still structured in terms of reification and objectivity, so it substantively resembles something with descriptive power, or “aboutness.” For instance, while you cannot acquire a physician’s privileges and social role simply by providing clear evidence of your ability to heal others, those privileges are still justified in terms of pseudo-consequentialist arguments about expertise in healing.

4. Finally, the pseudostructure itself becomes perceptible as an object that can be manipulated, the pseudocorrespondence breaks down, and all assertions are nothing but moves in an ever-shifting game where you’re trying to think a bit ahead of the others (for positional advantage), but not too far ahead.

If that doesn’t make sense, try this anonymous comment on the post:

Level 1: “There’s a lion across the river.” = There’s a lion across the river.
Level 2: “There’s a lion across the river.” = I don’t want to go (or have other people go) across the river.
Level 3: “There’s a lion across the river.” = I’m with the popular kids who are too cool to go across the river.
Level 4: “There’s a lion across the river.” = A firm stance against trans-river expansionism focus grouped well with undecided voters in my constituency.

In all five of my examples, people were given information (I like this better with hot sauce, you might break the library’s CD, these exercises hurt me and you are not qualified to fix it, your advice did not fix my problem, I had a miserable time at your retreat), and treated it as a social attack. This is most obvious in the first four, where someone literally says some version of “I feel under attack”, but is equally true in the last one, even though the enforcer was different than the ~victim and was attempting merely to tax criticism, not suppress it entirely. All five have the effect that there is either more conflict or less information in the world.

 

Part 3: But…

When I started thinking about this, I wanted a button I could push to make everyone go to level one all the time. It’s not clear that that’s actually a good idea, but even if it was, there is no button, and choosing/pretending to cut off your awareness of higher levels in order to maintain moral purity does you no good. If you refuse to conceive of why someone would tell you things other than to give you information, you leave yourself open to “I’m only telling you this to make you better” abuse. If you refuse to believe that people would lie except out of ignorance, you’ll trust when you shouldn’t. If you refuse to notice how people are communicating with others, you will be blindsided when they coordinate on levels you don’t see. 

But beating them at their own game doesn’t work either, because the enemy was never them, it was the game, which you are still playing. You can’t socially maneuver your way into a less political world. In particular, it’s a recent development that I would have noticed my friend’s unilateral demand for fairness as in fact tilted towards MAPLE. In a world where no one notices things like that, positive reviews of programs become overrepresented.

I don’t have a solution to this.  The best I can do right now is try to feed systems where level one is valued and higher levels are discussed openly.  “How do I find those?” you might ask. I don’t know. If you do, my email address is elizabeth – at – this domain name and I’d love to hear from you. You can also book a time to talk to me for an hour. What I have are a handful of 1:1 relationships where we have spent years building trust to get to the point where “I think you’re being a coward” is treated as genuine information, not a social threat, and mostly the other person has made the first move. 

The pieces of advice I do have are:

  1. If someone says they want honest feedback, err on the side of giving it to them. They are probably lying, but that’s their problem (unless they’re in a position to make it yours, in which case think harder about this).
  2. Figure out what you need to feel secure as someone confirms your worst fears about yourself and ask for it, even if it’s weird, even if it seems like an impossibly big ask. People you are compatible with will want to build towards that (not everyone who doesn’t is abusive or even operating in bad faith- but if you can’t start negotiations on this I’d be very surprised if you’re compatible).
  3. Be prepared for some sacrifices, especially in the congeniality department. People who are good at honesty under a climate that punishes it are not going to come out unscathed.

Literature Review: Distributed Teams

Introduction

Context: Oliver Habryka commissioned me to study and summarize the literature on distributed teams, with the goal of improving altruistic organizations. We wanted this to be as rigorous as possible; unfortunately the rigor ceiling was low, for reasons discussed below. To fill in the gaps, and especially to create a unified model instead of a series of isolated facts, I relied heavily on my own experience on a variety of team types (my favorite of which was an entirely remote company).

This document consists of the following parts:

  • Summary
  • A series of specific questions Oliver asked, with supporting points and citations. My full, disorganized notes will be published as a comment.

My overall model of worker productivity is as follows:

Highlights and embellishments:

  • Distribution decreases bandwidth and trust (although you can make up for a surprising amount of this with well-timed visits).
  • Semi-distributed teams are worse than fully remote or fully co-located teams on basically every metric. The politics are worse because geography becomes a fault line for factions, and information is lost because people incorrectly count on proximity to distribute information.
  • You can get co-location benefits for about as many people as you can fit in a hallway: after that you’re paying the costs of co-location while benefits decrease.
  • No paper even attempted to examine the increase in worker quality/fit you can get from fully remote teams.

Sources of difficulty:

  • Business science research is generally crap.
  • Much of the research was quite old, and I expect technology to improve results from distribution every year.
  • Numerical rigor trades off against nuance. This was especially detrimental when it came to forming a model of how co-location affects politics, where much that happens is subtle and unseen. The largest studies are generally survey data, which can only use crude correlations. The most interesting studies involved researchers reading all of a team’s correspondence over months and conducting in-depth interviews, which can only be done for a handful of teams per paper.

How does distribution affect information flow?

“Co-location” can mean two things: actually working together side by side on the same task, or working in parallel on different tasks near each other. The former has an information bandwidth that technology cannot yet duplicate. The latter can lead to serendipitous information sharing, but also imposes costs in the form of noise pollution and siphoning brain power for social relations.

Distributed teams require information sharing processes to replace the serendipitous information sharing. These processes are less likely to be developed in teams with multiple locations (as opposed to entirely remote). Worst of all is being a lone remote worker on a co-located team: you will miss too much information, and it’s feasible only occasionally, despite the fact that measured productivity tends to rise when people work from home.

I think relying on co-location over processes for information sharing is similar to relying on human memory over writing things down: much cheaper until it hits a sharp cliff. Empirically that cliff is about 30 meters, or one hallway. After that, process shines.

List of isolated facts, with attribution:

  • “The mutual knowledge problem” (Cramton 2015):
    • Assuming knowledge is shared when it is not, including:
      • typical minding.
      • Not realizing how big a request is (e.g. “why don’t you just walk down the hall to check?”, not realizing the lab with the data is 3 hours away. And the recipient of the request not knowing the asker does not know that, and so assumes the asker does not value their time).
    • Counting on informal information distribution mechanisms that don’t distribute evenly
    • Silence can mean many things and is often misinterpreted. E.g. acquiescence, deliberate snub, message never received.
  • Lack of easy common language can be an incredible stressor and hamper information flow (Cramton 2015).
  • People commonly cite overhearing hallway conversation as a benefit of co-location. My experience is that Slack is superior for producing this because it can be done asynchronously, but there’s reason to believe I’m an outlier.
  • Serendipitous discovery and collaboration falls off by the time you reach 30 meters (chapter 5), or once you’re off the same hallway (chapter 6)
  • Being near executives, project decision makers, sources of information (e.g. customers), or simply more of your peers gets you more information (Hinds, Retelny, and Cramton 2015)

How does distribution interact with conflict?

Distribution increases conflict and reduces trust in a variety of ways.

  • Distribution doesn’t lead to factions in and of itself, but can in the presence of other factors correlated with location
    • e.g. if the engineering team is in SF and the finance team in NY, that’s two correlated traits for fault lines to form around. Conversely, having common traits across locations (e.g. work role, being parents of young children) fights factionalization (Cramton and Hinds 2005).
    • Language is an especially likely fault line.
  • Levels of trust and positive affect are generally lower among distributed teams (Mortenson and Neeley 2012) and even co-located people who work from home frequently enough (Gajendra and Harrison 2007).
  • Conflict is generally higher in distributed teams (O’Leary and Mortenson 2009; Martins, Gilson, and Maynard 2004)
  • It’s easier for conflict to result in withdrawal among workers who aren’t co-located, amplifying the costs and making problem solving harder.
  • People are more likely to commit the fundamental attribution error against remote teammates (Wilson et al 2008).
  • Different social norms or lack of information about colleagues lead to misinterpretation of behavior (Cramton 2016) e.g.,
    • you don’t realize your remote co-worker never smiles at anyone and so assume he hates you personally.
    • different ideas of the meaning of words like “yes” or “deadline”.
  • From analogy to biology, I predict conflict is most likely to arise when two teams are relatively evenly matched in terms of power/resources and when spoils are winner-take-all.
  • Most site-to-site conflict is ultimately driven by desire for access to growth opportunities (Hinds, Retelny, and Cramton 2015). It’s not clear to me this would go away if everyone were co-located- it’s easier to view a distant colleague as a threat than a close one, but if the number of opportunities is the same, moving people closer doesn’t make them not threats.
  • Note that conflict is not always bad- it can mean people are honing their ideas against others’. However the literature on virtual teams is implicitly talking about relationship conflict, which tends to be a pure negative.

When are remote teams preferable?

  • You need more people than can fit in a 30m-radius circle (chapter 5), or a single hallway (chapter 6).
  • Multiple critical people can’t be co-located, e.g.,
    • Wave’s compliance officer wouldn’t leave semi-rural Pennsylvania, and there was no way to get a good team assembled there.
    • Lobbying must be based in Washington, manufacturing must be based somewhere cheaper.
    • Customers are located in multiple locations, such that you can co-locate with your team members or customers, but not both.
  • If you must have some team members not co-located, better to be entirely remote than leave them isolated. If most of the team is co-located, they will not do the things necessary to keep remote individuals in the loop.
  • There is a clear shared goal
  • The team will be working together for a long time and knows it (Alge, Weithoff, and Klein 2003)
  • Tasks are separable and independent.
  • You can filter for people who are good at remote work (independent, good at learning from written work).
  • The work is easy to evaluate based on outcome or produces highly visible artifacts.
  • The work or worker benefits from being done intermittently, or doesn’t lend itself to 8-hours-and-done, e.g.,
    • Wave’s anti-fraud officer worked when the suspected fraud was happening.
    • Engineer on call shifts.
  • You need to be process- or documentation-heavy for other reasons, e.g. legal, or find it relatively cheap to be so (chapter 2).
  • You want to reduce variation in how much people contribute (=get shy people to talk more) (Martins, Gilson, and Maynard 2008).
  • Your work benefits from long OODA loops.
  • You anticipate low turnover (chapter 2).

How to mitigate the costs of distribution

  • Site visits and retreats, especially early in the process and at critical decision points. I don’t trust the papers quantitatively, but some report site visits doing as good a job at trust- and rapport-building as co-location, so it’s probably at least that order of magnitude (see Hinds and Cramton 2014 for a long list of studies showing good results from site visits).
    • Site visits should include social activities and meals, not just work. Having someone visit and not integrating them socially is worse than no visit at all.
    • Site visits are more helpful than retreats because they give the visitor more context about their coworkers (chapter 2). This probably applies more strongly in industrial settings.
  • Use voice or video when need for bandwidth is higher (chapter 2).
    • Although high-bandwidth virtual communication may make it easier to lie or mislead than either in person or low-bandwidth virtual communication (Håkonsson et al 2016).
  • Make people very accessible, e.g.,
    • Wave asked that all employees leave Skype on auto-answer while working, to recreate walking to someone’s desk and tapping them on the shoulder.
    • Put contact information in an accessible wiki or on Slack, instead of making people ask for it.
  • Lightweight channels for building rapport, e.g., CEA’s compliments Slack channel, Wave’s kudos section in weekly meeting minutes (personal observation).
  • Build over-communication into the process.
    • In particular, don’t let silence carry information. Silence can be interpreted a million different ways (Cramton 2001).
  • Things that are good all the time but become more critical on remote teams:
  • Have a common chat tool (e.g., Slack or Discord) and give workers access to as many channels as you can, to recreate hallway serendipity (personal observation).
  • Hire people like me
    • long OODA loop
    • good at learning from written information
    • Good at working asynchronously
    • Don’t require social stimulation from work
  • Be fully remote, as opposed to just a few people working remotely or multiple co-location sites.
  • If you have multiple sites, lumping together similar people or functions will lead to more factions (Cramton and Hinds 2005). But co-locating people who need to work together takes advantage of the higher bandwidth co-location provides.
  • Train workers in active listening (chapter 4) and conflict resolution. Microsoft uses the Crucial Conversations class, and I found the book of the same name incredibly helpful.

Cramton 2016 was an excellent summary paper I refer to a lot in this write-up. It’s not easily available online, but the author was kind enough to share a PDF with me that I can pass on.

My full notes will be published as a comment on this post.