I am in the middle of a post on Dreamland (Sam Quinones) and how it is so wrong, but honestly I don’t think I can wait that long so here’s an easily encapsulated teaser.
On page 39 Quinones says “Most drugs are easily reduced to water-soluble glucose…Alone in nature, the morphine molecule rebelled.” I am reasonably certain that is horseshit. Glucose contains three kinds of atoms: carbon, oxygen, and hydrogen. The big three of organic chemicals. Your body is incapable of atomic fusion, so the atoms it starts with are the atoms it ends up with; it can only rearrange them into different molecules. Morphine is carbon, oxygen, hydrogen, and nitrogen, and that nitrogen has to go somewhere, so I guess technically you can’t reform it into just sugar. But lots of other medications have non-big-3 atoms too (although, full disclosure, when I spot-checked there was a lot less variety than I expected).
This valorization of morphine as the indigestible molecule is equally bizarre. Morphine has a half-life of 2-3 hours (meaning that if you have N morphine in your body to start with, 2-3 hours later you will have N/2). In fact that’s one of the things that makes it so addictive: you get a large spike, tied tightly to the act of ingestion, and then it goes away quickly, without giving your body time to adjust. Persistence is the opposite of morphine’s problem.
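For concreteness, the half-life claim is just standard exponential decay (this is the generic formula, not morphine-specific pharmacokinetics):

```python
# Standard exponential decay: amount left after t hours,
# given a half-life in hours.
def remaining(initial_dose, half_life_hours, t_hours):
    return initial_dose * 0.5 ** (t_hours / half_life_hours)

# With a 3-hour half-life, only ~6% of a dose is left after 12 hours.
print(remaining(100, 3, 12))  # 6.25
```

Four half-lives and you’re down to a sixteenth of the original dose. “Persistent” is not the word.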
This is so unbelievably wrong I would normally assume the author meant something entirely different and I was misreading. I’d love to check this, but the book cites no sources, and the online bibliography doesn’t discuss this particular factoid. I am also angry at the book for being terrible in general, so it gets no charity here.
There is a particular failure pattern I’ve seen in many different areas. Society as a whole holds view A on subject X. A small subgroup holds opposing view B. Members of the subgroup have generally put more thought into subject X, and they have definitely spent more time arguing about it than the average person on the street. Many A-believers have never heard of view B or the arguments for it before.
A relative stranger shows up at a gathering of the subgroup and begins advocating view A, or just questioning view B. The subgroup assumes this is a standard person who has never heard their arguments and launches into the standard spiel. The B-believers don’t listen, the stranger gets frustrated and leaves the subgroup, since no one is going to listen to their ideas.
One possibility is that the stranger is an average member of society who genuinely believes you’ve gone your entire life without hearing the common belief, and that if they just say it slowly and loudly enough you’ll come around.* Another possibility is that they understand view B very well and have some well-considered objections to it that happen to sound like view A (or don’t sound that similar, but the B-believer isn’t bothering to listen closely enough to find out). They feel blown off and disrespected and leave.
In the former scenario, the worst case is that you lose someone you could have recruited. Oh well. In the latter, you lose valuable information about where you might be wrong. If you always react to challenges this way you become everything you hate.
For example: pop evolutionary psychology is awful and people are right to ignore it. I spent years studying animal behavior, and it gave me insights that fall under the broad category of evopsych, except that they are correct. It is extremely annoying to have those dismissed with “no, but see, society influences human behavior.”
Note that B doesn’t have to be right for this scenario to play out. Your average creationist or anti-vaxxer has thought more about the topic and spent more time arguing it than almost anyone. If an ignorant observer watched a debate and chose a winner based on fluidity and citations they would probably choose the anti-vaxxer. They are still wrong.
Or take effective altruism. I don’t mind losing people who think measuring human suffering with numbers is inherently wrong. But if we ignore that entire sphere we won’t hear the people who find the specific way we are talking dehumanizing, and who have suggestions on how to fix that while still using numbers. A recent Facebook post made me realize that the clinical tone of most EA discussions, plus a willingness to entertain all questions (even if the conclusion is abhorrent), is going to make it really, really hard for anyone with firsthand experience of problems to participate. Firsthand experience means Feelings, which means the clinical tone requires a ton of emotional energy even if they’re 100% on board intellectually. This is going to cut us off from a lot of information.
There’s some low hanging fruit to improve this (let people talk before telling them they are wrong), but the next level requires listening to a lot of people be un-insightfully wrong, which no one is good at and EAs in particular have a low tolerance for.
Sydney and I are spitballing ideas to work on this locally. I think it’s an important problem at the movement-level, but do not have time to take it on as a project.** If you have thoughts please share.
*Some examples: “If you ate less and exercised more you’d lose weight.” “If open offices bother you why don’t you use headphones?”, “but vaccines save lives.”, “God will save you…”/”God isn’t real”, depending on exactly where you are.
**Unexpected benefit of doing direct work: 0 pangs about turning down other projects. I can’t do everything and this is not my comparative advantage.
Recently I did a CFAR workshop. No one has settled on a good description of CFAR, but I think a good one would be “getting the different parts of your brain to coordinate with each other.” The further I get from CFAR the more positively I view the experience, which suggests that I did the same thing with EA Global, which suggests I overestimated CFAR’s primary flaw (not being EA Global), which makes me view it even more positively.
CFAR suggests you go into the workshop with a problem to solve. Fortunately but perhaps inconveniently, I went through a personal growth spurt right before CFAR. It’s not that I was out of problems to solve, but the repercussions of the previous solutions had not yet worked their way through the system, so it was hard to see what the next round would be. Then I solved food. For those of you who are just tuning in, I have/had lifelong medical issues that made food physically difficult, which made it psychologically difficult, which made the physical issues worse. Clearing out all the anxiety around food in a weekend is not a small thing. But to really achieve its full power I have to follow it up with things like “how do you choose food based on things other than terror?” and “stoves: how do they work?” So that’s a bunch more work.*
I left CFAR with some new things and some refinement on some old things. I didn’t want to lose what I’d gotten at the workshop so I tried to do follow ups but I felt… full. Defensive. Like it was attempting to take up space in my brain and if it succeeded I would lose a lot of work.
My way of solving problems, which is either what CFAR teaches too or what I extracted from whatever CFAR actually does, is to understand them super well. Eventually a solution just falls out and application is trivial.** Some of this comes from focused thought, but there’s a lot of opportunistic noticing. I store bits of information until suddenly they coalesce into a pattern. As anyone who’s read Getting Things Done will tell you, storing information in your brain is expensive. So I decided I needed a way to store all this opportunistic data, plus things from the conscious experiments I was running, to keep it all straight.
This is hard to do. Take the comparatively simple “go to gym every day”. There are 400 billion apps that will track this for me and I have never stuck with one of them, because they are boring and seeing the numbers go up doesn’t motivate me for more than a week. More generally, I’ve never been able to get into quantified self because if I know what data to measure the problem is already solved. I don’t really care how many calories I burned. I do care what mental blocks inhibited me from going (bed so comfy, outside is cold, feeling like I stayed in bed too long and now I have to do Real Work) and how I maneuvered things so it didn’t take willpower to fight those (“remember how you feel much more productive after the gym and have an awesome job that doesn’t care when you work?”). There is no app for that.
Then there are more difficult problems like “New information indicates I handled something 9 months ago really poorly, but I’m not sure what I’d do differently then with only the information I had at the time, without causing other problems.” Or “My friend triggered an intense premortem that made me realize I’m ignoring information on project X just like I did with project Y last year, but I don’t know what that information is.” I still don’t know what I’m going to do about the former, for the latter I tracked “things that feel like they’re hitting the same part of my brain” until a pattern emerged. Tracking patterns for “things you are actively trying not to think about” is not cheap.
So I needed a system that could hold this information for me, and show me information I didn’t realize was connected as I recorded it. Without being cluttered. The closest analogy I could come up with was an old-timey naturalist. They had a bunch of set things they knew they were looking for (what eats this flower?), but also wanted to record cool things and then be able to connect them to other cool things later (why are all these finches different yet similar?). I don’t know how old-timey naturalists did that with pen and paper, because it did not work for me at all. I tried Workflowy and a Google Doc but just sat there frozen, unable to figure out how to sort the information.
My CFAR partner Tilia Bell had a really good idea, which was to use a private WordPress blog. I could give an entry as many tags as I wanted, and read tags when they felt relevant. Or just the success tag, because winning feels nice. This was a huge improvement, but WordPress is kind of clunky and annoying. In particular, the tagging system does not flow at all.
I talked about it with Brian, who suggested a one-person Slack. I could use channels for defined projects and tags for observations I wanted to connect later. To be fair, this idea is three hours old. On the other hand, within 20 minutes of applying it I figured out what piece of information I was ignoring in that problem my brain didn’t want to look at. I’m not saying it’s the sole cause; I’ve gathered a lot of information this past week. But since “connecting things I already noticed” is pretty much its point, it seems promising.
*My nutritionist is finding me much easier to work with now.
**I’m exaggerating some but it’s more true than it has any right to be.
Amazon is in that club of employers (Google, Twitter, Facebook, Microsoft, etc.) where working there functions as a stamp of quality. Their employees are frequently cold-called by recruiters working for other members of the club, middle-tier companies, and startups that cannot get enough people through their personal networks. Amazon pays very well relative to most jobs, even many programming jobs, but it does not pay as well as other members of the club. The salary is just a little less than you’d make elsewhere, but equity and bonuses are backloaded such that many people are driven out before they receive the bulk of them. The health insurance isn’t as good. I realize paying for your own lunch is normal, but Amazon makes employees pay for a lot of things other companies offer for free, like ergonomic keyboards. And then there’s the work environment.
How does Amazon maintain a talent pool equivalent to the other prestige club members while paying less?
This is anecdotal, but my friends at Amazon are much more likely to have come from unprestigious companies or schools than my friends at other club companies. Working at Amazon doesn’t make them smarter, but it does provide widely-accepted proof of their intelligence that they didn’t have before, and can leverage into cushier jobs later. In some ways Amazon’s reputation for chewing people up and spitting them out is a feature here, because leaving after 18 months raises 0 questions among other employers.
So my hypothesis is Amazon invests more in finding and vetting smart people who aren’t currently holding Official Smart Person Cards, and that part of employees’ compensation is getting that card. In this way it’s like the US Armed Forces, which are grueling and don’t pay well but people tend to leave them with many more options than they started with.
I’m unconvinced this is a winning strategy. Operational turnover is expensive, and bad working conditions decrease people’s productivity even when they’re well compensated. But it does at least explain why it hasn’t collapsed already.
Dutch disease is the economic concept that if a country is too rich in one thing, especially a natural resource, every other sector of the economy will rot, because all available money and talent will flow towards that sector. Moreover, that sector dominates the exchange rate, making all other exports uncompetitive.* It comes up in foreign development a lot because charitable aid can cause Dutch disease: by paying what the funders would consider a “fair wage”, charities position themselves as by far the best employers in the area. The best and brightest African citizens end up chauffeuring foreigners rather than starting their own businesses, which keeps the society dependent on outside help. Nothing good comes from having poverty as your chief export.
I posit that a similar process takes place in corporations. Once they are making too much money off a few major things (Windows, Office, AdWords, SUVs), even an exceptionally profitable project in a small market is too small to notice. Add in the risk of reputation damage and the fact that all projects have a certain amount of overhead regardless of size, and it makes perfect sense for large companies to discard projects a start up would kill for (RIP Reader).**
That’s a fine policy in moderation, but there are problems with applying it too early. Namely, you never know what something is going to grow into. Google search originally arose as a way to calculate impact for academic papers. The market for SUVs (and for that matter, cars) was 0 until someone created it. If you insist on only going after projects that directly address an existing large market, the best you’ll ever be is a fast follower.***
Simultaneously, going from zero to an enormous, productive project is really, really hard (see: Fire Phone, Google+, Facebook’s not-an-operating-system). Even if you have an end goal in mind, it often makes sense to start small and iterate. Little Bets covers this in great detail. And if you don’t have a signed card from G-d confirming your end goal is correct, progressing in small iterative steps gives you more information and more room to pivot.
More than one keynote at EA Global talked about the importance of picking the most important thing, and of being willing to switch if you find something better. That’s obviously great in some cases, but I worry that this hyperfocusing will cause the same problems for us that it does at large companies: a lack of room to surprise ourselves. For example, take the post I did on interpretive labor. I was really proud of that post. I worked hard on it. I had visions of it helping many people in their relationships. But if you’d asked at the time, I would have predicted that the Most Effective use of my time was learning programming skills to increase my wage or my value in direct work, and that the post was an indulgence. It never in my wildest dreams occurred to me that it would be read by someone in a far better position than me to do something about existential risk, and be useful to them in connecting two key groups that weren’t currently talking to each other, but apparently it was. I’m not saying that I definitely saved us from papercliptopia, but it is technically possible that that post (along with millions of other flaps of butterfly wings) will make the marginal difference. And I would never even have known it did so except the person in question reached out to me at EA Global.****
Intervention effectiveness may vary by several orders of magnitude, but if the confidence intervals are just as big, it pays to add a little wiggle to your selection. Moreover, constant project churn has its own cost: it’s better to finish the third-best thing than to have two half-finished attempts at different best things. And you never know what a third-best project will teach you that will help an upcoming best project; most new technological innovations come from combining things from two different spheres (source), so hyperfocus will eventually cripple you.
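To put rough numbers on the “wiggle” argument, here’s a toy simulation (the effect sizes and the uncertainty are invented for illustration, not drawn from real intervention data). When estimates have a couple of orders of magnitude of noise, the intervention with the 10x better point estimate is frequently not the actual winner:

```python
import random

random.seed(0)

# Two hypothetical interventions: A's point estimate is 10x B's,
# but both are uncertain, modeled as multiplicative lognormal noise
# with sigma=2 (i.e. a ~7x standard multiplicative error).
def draw(point_estimate, sigma=2.0):
    return point_estimate * random.lognormvariate(0, sigma)

trials = 100_000
wins_for_b = sum(draw(1) > draw(10) for _ in range(trials))
frac = wins_for_b / trials
print(frac)  # ~0.21: B beats A about a fifth of the time
```

If the “worse” option wins a fifth of the time, spreading some effort beyond the single top-ranked project is insurance, not indulgence.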
In light of all that, I think we need to stop being quite so hard on the sunk cost fallacy. No, you should not throw good money after bad, but constantly re-evaluating your choices is costly and (jujitsu flip) will not always be the most efficient use of your resources. In the absence of a signed piece of paper from G-d, biasing some of your effort towards things you enjoy and have comparative advantage in may in fact be the optimal strategy.
My hesitation is that I don’t know how far you can take this before it stops being effective altruism and starts being “feel smug and virtuous about doing whatever it is you already wanted to do”- a thing we’re already accused of doing. Could someone please solve this and report back? Thanks.
* The term comes from the Dutch economic crash following the discovery of natural gas in The Netherlands. Current thought is that was not actually Dutch disease, but that renaming the phenomenon after some third world country currently being devastated by it would be mean.
**Simultaneously, developers have become worse predictors of the market in general. It used to be that nerds were the early adopters, and if they loved something everyone would be using it in a year (e.g. Gmail, smartphones). As technology and particularly mobile advances, this is no longer true. Nerds aren’t power users for tablets because we need laptops, but the tablet power user is a powerful and predictive market. Companies now force devs to experience the world like users (Facebook’s order to use Android) or just outright tell them what to do (Google+). This makes their ideas inherently less valuable than they were. I don’t blame companies for shifting to a more user-driven decision-making process, but it does make things less fun.
***Which, to be fair, is Microsoft’s actual strategy.
****It’s also possible it accomplished nothing, or made things worse. But the ceiling of effectiveness is higher than I ever imagined, and the uncertainty only makes my point stronger.
This is either a subset of interpretive labor or a closely related concept: Filtering Labor. Suppose one person is generating information, and another person needs a small subset of it, or needs the information in aggregate but not specific pieces. Who does the labor to filter it down?
Let’s talk about this in a work context. Recently I was on a thread with four other people. Everyone needed to see the original few e-mails, and everyone needed to know the final decision. But in between those two were 5 or 10 e-mails nailing down specifics between me and one other person. The others needed to know the decisions we made, but reading the back-and-forth was of no value to them. Nonetheless, they stayed on the thread. This wouldn’t be such a big deal if they only checked their e-mail once a day, because they could skim through the thread, but that’s not how most people check e-mail, and I know it’s not how these particular people were handling this thread, because there were other messages in the same thread that required and received a near-instant response from them.
We could have saved them effort by taking them off the thread and re-adding them when they were needed, with a summary of the decision. But that requires looking at every message and thinking “who needs to see this?” What if a message is mostly unrelated to them but not entirely? How do I know if a decision is finalized enough to be worth summarizing? It didn’t apply in this particular case, but my general experience at work is that the best moment to send out the summary (when everything has been more or less settled) does not draw attention to itself. You just go two days without having to make a decision. Not to mention that knowing what is relevant to others requires information about them. Who filters that?
This only gets worse as companies grow. My job is clearly terrified there will be something somewhere in the company that would be useful to me and I won’t know about it. One solution would be to make things easy to find when I wanted to look for them (a pull-based system). You can do some of this with good archiving and search tools, but to make it really work it requires effort from the information producers or some sort of archivist. Things like tagging, summaries, updating the wiki. Information producers rarely want to put in this effort (in part because of a justified belief it’s just going to change again next week. But by the time that’s identifiably not true, the relevant information has faded from memory). You can attempt to force them but it hurts morale and it really is going to be obsolete in a week.
So my job, and I believe a lot of other large software companies, uses a push system. I’m on dozens of email lists giving me a stream of updates on what people are doing, sometimes very far away in the org. I tried to make a list to convey to you all the lists I am on, but it is impossible. Making the collected output of these lists useful often requires a lot of interpretive labor (e.g. translating a changelist into what a tool actually does and how it might be relevant to me). That takes time, and the farther away from me something in the org is, the more time it takes. At this point there is no way I could thoughtfully process all of my mail and get anything else done. It looks like information is being spread more widely, but the signal-to-noise ratio is so low I’m learning less.
The open office is an attempt to do the same with in person interactions- if people won’t seek out others to give them information they need (in part because they don’t know who needs it or who has it), make it impossible to not overhear. We know what I think about that.
Some of this comes from a failure to adapt to circumstances. When you start a company, everything anyone does is relevant to you, and you will always know about it without any effort on anyone’s part. As you add people, everything is still pretty relevant to them, but it takes more effort to find out about it. Start-ups start using stand-ups or mailing lists. The bigger they get, the more effort goes into sharing information. For a while this doesn’t cost a lot in processing. People have a certain amount of slack in their day (compiling, between meetings) and everything is close enough to what they work on that it’s easy to interpret. But people’s projects grow more and more distant, and eventually you run out of slack. After that, every additional piece of information you give them comes at the cost of them producing actual work.
Which doesn’t mean you should stop: maybe it saves more work than it creates. But I wish companies recognized the effort this requires and started thinking more strategically about what is truly useful. There are already specialists who do parts of this under various names (project/program manager, technical writer, manager, tech lead); if this were made explicit I think we could save people a lot of effort.
If one person is wrong, they’re wrong. If a lot of people, some of whom got extremely rich off of their wrong ideas, are wrong, there’s a good possibility I’m the wrong one. At a minimum, it’s useful for me to understand where I’m differing from others. Open offices are one such puzzle. To me, they are obviously one step short of Azkaban. And yet everyone, including some exceptionally profitable companies, uses them. Why?
[I’m going to restrict myself to tech companies because that’s what I know]
One possibility is some people genuinely prefer them. I keep talking myself up to that, only to read another article about how everyone is miserable and unproductive in them. I talk myself up again, and find a peer-reviewed study detailing their terribleness. I thought maybe they were for extroverts, but then I heard extroverts complain they couldn’t get any work done in them either (although they were having a lot more fun not working in them than I was). My friends’ defenses of them/explanations of how they make it work sound more like Stockholm Syndrome, or at best the way I sound when I find a shortcut to finish a useless but mandatory 30-minute training in 5 minutes. I noticeably improved my situation relative to the 30-minute scenario, but that doesn’t mean those 5 minutes were valuable. But let’s assume my friends are a non-random subset and there are people who thrive in (some) open offices. That’s great, if you hire those specific people. One of my major frustrations with my current employer, Stark Industries, is that their interview process (closed room, no distractions, puzzles to solve on your own) is designed to filter in exactly people like me, and the work environment (completely open, constant distractions, work that sometimes feels more like being a PM* than a programmer) couldn’t be better designed to make us unproductive.
One possible justification for open offices is cost. I certainly think that’s a larger factor than many companies admit, but if that were the only concern they’d convert to entirely work from home. Moreover, engineers are really, really expensive, and making us less productive is costly. The extra space necessary for doors or cubicles could easily pay for itself. A slightly different explanation is that even if companies were willing to buy doored offices, acquiring office space is lumpier than hiring. Having more than you need is expensive and it takes time to ramp up after a hiring spree. That could explain temporary open offices, or roommates, but not stable ones.
Let’s go back one paragraph. The open office isn’t the only thing I dislike about Stark Industries. I’m also continually baffled by the fact that my technology company has a workflow designed around synchronous communication, in person if at all possible. No one has time to answer email or IM thoughtfully because they’re running from one meeting to another, so if you want a response from someone you schedule a meeting. The correct response to someone ignoring your e-mail is to ambush them in the hallway or, if they’re at a different site, schedule a videoconference. It took me a very long time to get this, but making a meeting to do something that could have been handled over email is not a failure mode at Stark Industries. This is how they expect it to work. This must be how they want it to work, because instant messaging is a strictly easier technical problem than helicarriers, a project we also do. Information is exchanged at meetings, which means everyone has to process it at the same time, and either everyone moves at the speed of the slowest person** or you leave them behind.
What if the open office and the synchronicity are not a coincidence? If you believe synchronicity is helpful (which Tony Stark clearly does, and which I agree with in some instances), then you’ll want to encourage it. But as noted above, this is not the natural mode for a wide swath of programmers. You can hire for it at first, but eventually that cuts you off from too much talent. Any one individual can be forced to switch modes by being embedded in a group full of the other, but there aren’t enough synchronizers to absorb all the asynchronizers.
But… as much as some people like retreating to do their own thing, they also like it when other people respond to them immediately. They may be held back by empathy, but they’d still like the answer right away. In an open office, the barriers to demanding an answer are reduced. For one, you don’t have to leave your chair. For two, offices and even cubicles have a sense of personal bubble. You wait to be invited in, and it’s expected you’ll have to wait until they reach a breaking point. After extensive experimentation I can tell you there is no way to generate that bubble at Stark Industries, and I assume in open offices in general. I once had a co-worker poke his head into the conference room I was hiding in for the sole purpose of asking if I was hiding so I could concentrate.*** Open offices also lower the cost of any one interruption. They do it by interrupting you so constantly you never get into a groove that could be interrupted, but they do technically lower it.**** So even the highly empathetic will feel less reluctance to interrupt co-workers, because they are correctly calculating a lower cost to it. In high doses, perhaps mixed with morale events and a culture that emphasizes meetings over email, this could lead to teams made entirely of asynchronous workers forcing synchronicity on themselves.
What is it about synchronicity that makes every major tech company started in the last 20 years willing to pay so much for it? Based on every survey ever and the coding wars study, it’s not improved performance at the object-level tasks of the job. But work isn’t school; there’s more to it than fulfilling the terms of the assignment. Maybe open offices lead to less redundancy or wasted work. Maybe they make charisma and personal connections less important. Maybe they’re the best way to force programmers to share information in the face of their steadfast refusal to write anything down. That not only makes people potentially more productive, it makes them more replaceable.
None of this makes me love open offices. For one I’m pretty sure I’m better at synchronizing via technology than speech. By a lot. I love Slack because it gives me everything everyone said I would get from open offices, without any of the costs. It gives me a sense of control and in-touch-ness that makes me want to read it. Meanwhile I approach co-workers in person less now than I did when we all had doors, because I’m hyperconscious of impinging on the other people in the room. But I will say I started doing better at my job when I acknowledged that I was expected to do it synchronously and rolled with it. Matching the office work style turned out to be more important to productivity than matching my own. It exhausts me, but at least it’s the exhaustion of having worked really hard. When I tried to work asynchronously I came home exhausted from doing nothing, which was a much worse feeling.
*Project/program manager. Job description depends heavily on the team but one of their jobs is to coordinate people with subject matter knowledge.
**Slowest doesn’t mean dumbest. They may very well take longer because they’re thinking more deeply.
***The answer was yes.
****The economic term for this is bee sting theory. You’ll work really hard to avoid your first bee sting, and you’ll pay a lot to get rid of it. But when you already have 10, the work to avoid an 11th just doesn’t seem worth it.
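The bee sting logic is just diminishing marginal cost. As a toy model (the square-root cost curve is invented for illustration, not from the economics literature):

```python
import math

# Toy model: total misery from n bee stings grows like sqrt(n),
# so each additional sting hurts less at the margin.
def misery(n):
    return math.sqrt(n)

first_sting = misery(1) - misery(0)       # 1.0
eleventh_sting = misery(11) - misery(10)  # ~0.15
print(first_sting, eleventh_sting)
```

The eleventh sting costs a fraction of the first, so it’s rational to stop paying to avoid stings once you’re covered in them. Which is a decent model of interruptions in an open office.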