Dreamland: bad organic chemistry edition

I am in the middle of a post on Dreamland (Sam Quinones) and how it is so wrong, but honestly I don’t think I can wait that long so here’s an easily encapsulated teaser.

On page 39 Quinones says “Most drugs are easily reduced to water-soluble glucose…Alone in nature, the morphine molecule rebelled.”  I am reasonably certain that is horseshit.  Glucose contains three kinds of atoms: carbon, oxygen, and hydrogen, the big three of organic chemistry.  Your body is incapable of nuclear reactions, so the atoms it starts with are the atoms it ends up with; it can only rearrange them into different molecules.  Morphine is carbon, oxygen, hydrogen, and nitrogen, and that nitrogen has to go somewhere, so I guess technically you can’t reform it into just sugar.  But lots of other medications have non-big-3 atoms too (although, full disclosure, when I spot-checked there was a lot less variety than I expected).

This valorization of morphine as the indigestible molecule is equally bizarre.  Morphine has a half-life of 2-3 hours (meaning that if you have N morphine in your body to start with, 2-3 hours later you will have N/2).  In fact that’s one of the things that makes it so addictive: you get a large spike, tightly tied to the act of ingestion, and then it goes away quickly, without giving your body time to adjust.  Persistence is the opposite of morphine’s problem.
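To make the point concrete, here’s a minimal sketch of first-order elimination. The 2-3 hour half-life is the only number taken from the text; the formula is just the standard exponential decay every half-life implies, and the function name is mine:

```python
def remaining_dose(initial, hours, half_life=3.0):
    """Amount of drug left after `hours`, assuming first-order elimination.

    With half-life h, the remaining fraction is 0.5 ** (t / h):
    every h hours the amount halves.
    """
    return initial * 0.5 ** (hours / half_life)

# With a 3-hour half-life, a dose halves in 3 hours and is down to
# well under 1% of the original within a day (0.5 ** 8 = 1/256).
print(remaining_dose(100, 3))   # half the starting amount
print(remaining_dose(100, 24))  # under 1% remains
```

Whatever else morphine is, a molecule that’s 99% gone in a day is not “rebelling” against digestion.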

This is so unbelievably wrong I would normally assume the author meant something entirely different and I was misreading.  I’d love to check this, but the book cites no sources, and the online bibliography doesn’t discuss this particular factoid.  I am also angry at the book for being terrible in general, so it gets no charity here.

Talking about controversial things (discussion version)

There is a particular failure pattern I’ve seen in many different areas.  Society as a whole holds view A on subject X.  A small subgroup holds opposing view B.  Members of the subgroup have generally put more thought into subject X, and they have definitely spent more time arguing about it than the average person on the street.  Many A-believers have never heard of view B or the arguments for it before.

A relative stranger shows up at a gathering of the subgroup and begins advocating view A, or just questioning view B.  The subgroup assumes this is a standard person who has never heard their arguments and launches into the standard spiel.  The B-believers don’t listen, the A-advocate gets frustrated and leaves the subgroup, since no one is going to listen to their ideas.

One possibility is that the stranger is an average member of society who genuinely believes you’ve gone your entire life without hearing the common belief, and that if they just say it slowly and loudly enough you’ll come around.*  Another possibility is that they understand view B very well and have some well-considered objections that happen to sound like view A (or don’t sound that similar, but the B-believers aren’t bothering to listen closely enough to find out).  They feel blown off and disrespected and leave.

In the former scenario, the worst case is that you lose someone you could have recruited.  Oh well.  In the latter, you lose valuable information about where you might be wrong.  If you always react to challenges this way you become everything you hate.

For example: pop evolutionary psychology is awful and people are right to ignore it.  I spent years studying animal behavior and it gave me insights that fall under the broad category of evopsych, except that they are correct.  It is extremely annoying to have those dismissed with “no, but see, society influences human behavior.”

Note that B doesn’t have to be right for this scenario to play out.  Your average creationist or anti-vaxxer has thought more about the topic and spent more time arguing it than almost anyone.  If an ignorant observer watched a debate and chose a winner based on fluidity and citations they would probably choose the anti-vaxxer.  They are still wrong.

Or take effective altruism.  I don’t mind losing people who think measuring human suffering with numbers is inherently wrong.  But if we ignore that entire sphere we won’t hear the people who find the specific way we are talking dehumanizing, and have suggestions on how to fix that while still using numbers.  A recent Facebook post made me realize that the clinical tone of most EA discussions, plus a willingness to entertain all questions (even if the conclusion is abhorrent), is going to make it really, really hard for anyone with firsthand experience of the problems to participate.  Firsthand experience means Feelings, which means the clinical tone requires a ton of emotional energy even if they’re 100% on board intellectually.  This is going to cut us off from a lot of information.

There’s some low hanging fruit to improve this (let people talk before telling them they are wrong), but the next level requires listening to a lot of people be un-insightfully wrong, which no one is good at and EAs in particular have a low tolerance for.

Sydney and I are spitballing ideas to work on this locally.  I think it’s an important problem at the movement-level, but do not have time to take it on as a project.**  If you have thoughts please share.

*Some examples: “If you ate less and exercised more you’d lose weight.”  “If open offices bother you why don’t you use headphones?”, “but vaccines save lives.”, “God will save you…”/”God isn’t real”, depending on exactly where you are.

**Unexpected benefit of doing direct work: 0 pangs about turning down other projects.  I can’t do everything and this is not my comparative advantage.

Unquantified Self

Recently I did a CFAR workshop.  No one has settled on a good description of CFAR, but I think a good one would be “getting the different parts of your brain to coordinate with each other.”  The further I get from CFAR the more positively I view the experience, which suggests that I did the same thing with EA Global, which suggests I overestimated CFAR’s primary flaw (not being EA Global), which makes me view it even more positively.

CFAR suggests you go into the workshop with a problem to solve.  Fortunately but perhaps inconveniently, I went through a personal growth spurt right before CFAR.  It’s not that I was out of problems to solve, but the repercussions of the previous solutions had not yet worked their way through the system, so it was hard to see what the next round would be.  Then I solved food.  For those of you who are just tuning in, I have/had lifelong medical issues that made food physically difficult, which made it psychologically difficult, which made the physical issues worse.  Clearing out all the anxiety around food in a weekend is not a small thing.  But to really achieve its full power I have to follow it up with things like “how do you choose food based on things other than terror?” and “stoves: how do they work?”  So that’s a bunch more work.*

I left CFAR with some new things and some refinement on some old things.  I didn’t want to lose what I’d gotten at the workshop so I tried to do follow ups but I felt… full.  Defensive.  Like it was attempting to take up space in my brain and if it succeeded I would lose a lot of work.

My way of solving problems, which is either what CFAR teaches too or what I extracted from whatever CFAR actually does, is to understand them super well.  Eventually a solution just falls out and application is trivial.**  Some of this comes from focused thought, but there’s a lot of opportunistic noticing.  I store bits of information until suddenly they coalesce into a pattern.  As anyone who’s read Getting Things Done will tell you, storing information in your brain is expensive.  So I decided I needed a way to store all this opportunistic data, plus things from the conscious experiments I was running, to keep it all straight.

This is hard to do.  Take the comparatively simple “go to gym every day”.  There are 400 billion apps that will track this for me and I have never stuck with one of them, because they are boring and seeing the numbers go up doesn’t motivate me for more than a week.  More generally, I’ve never been able to get into quantified self because if I know what data to measure the problem is already solved.  I don’t really care how many calories I burned.  I do care what mental blocks inhibited me from going (bed so comfy, outside is cold, feeling like I stayed in bed too long and now I have to do Real Work) and how I maneuvered things so it didn’t take willpower to fight those (“remember how you feel much more productive after the gym and have an awesome job that doesn’t care when you work?”).  There is no app for that.

Then there are more difficult problems like “New information indicates I handled something 9 months ago really poorly, but I’m not sure what I’d do differently then with only the information I had at the time, without causing other problems.”  Or “My friend triggered an intense premortem that made me realize I’m ignoring information on project X just like I did with project Y last year, but I don’t know what that information is.”  I still don’t know what I’m going to do about the former, for the latter I tracked “things that feel like they’re hitting the same part of my brain” until a pattern emerged.  Tracking patterns for “things you are actively trying not to think about” is not cheap.

So I needed a system that could hold this information for me, that would show me information I didn’t realize was connected as I recorded it.  Without being cluttered.  The closest analogy I could come up with was an old-timey naturalist.  They had a bunch of set things they knew they were looking for (what eats this flower), but also wanted to record cool things and then be able to connect them to other cool things later (why are all these finches different yet similar?).  I don’t know how old-timey naturalists did that with pen and paper, because that did not work for me at all.  I tried Workflowy and a Google doc but just sat there frozen, unable to figure out how to sort the information.

My CFAR partner Tilia Bell had a really good idea, which was to use a private WordPress blog.  I could give an entry as many tags as I wanted, and read tags when they felt relevant.  Or just the success tag, because winning feels nice.  This was a huge improvement, but WordPress is kind of clunky and annoying.  In particular, the tagging system does not flow at all.

I talked about it with Brian, who suggested a one-person Slack.  I could use channels for defined projects and tags for observations I wanted to connect later.  To be fair, this idea is three hours old.  On the other hand, within 20 minutes of applying it I figured out what piece of information I was ignoring in that problem my brain didn’t want to look at.  I’m not saying it’s the sole cause, I’ve gathered a lot of information this past week.  But since “connecting things I already noticed” is pretty much its point, it seems promising.

*My nutritionist is finding me much easier to work with now.

**I’m exaggerating some but it’s more true than it has any right to be.

How Does Amazon Convince Anyone To Work For Them?

Amazon is in that club of employers (Google, Twitter, Facebook, Microsoft, etc.) where working there functions as a stamp of quality.  Their employees are frequently cold-called by recruiters working for other members of the club, middle-tier companies, and startups that cannot get enough people through their personal networks.  Amazon pays very well relative to most jobs, even many programming jobs, but it does not pay as well as the other members of the club.  The salary is just a little less than you’d make elsewhere, but equity and bonuses are backloaded such that many people are driven out before they receive the bulk of them.  The health insurance isn’t as good.  I realize paying for your own lunch is normal, but Amazon makes employees pay for a lot of things other companies offer for free, like ergonomic keyboards.  And then there’s the work environment.

How does Amazon maintain a talent pool equivalent to the other prestige club members while paying less?

This is anecdotal, but my friends at Amazon are much more likely to have come from unprestigious companies or schools than my friends at other club companies.  Working at Amazon doesn’t make them smarter, but it does provide widely-accepted proof of their intelligence that they didn’t have before, and can leverage into cushier jobs later.   In some ways Amazon’s reputation for chewing people up and spitting them out is a feature here, because leaving after 18 months raises 0 questions among other employers.

So my hypothesis is Amazon invests more in finding and vetting smart people who aren’t currently holding Official Smart Person Cards, and that part of employees’ compensation is getting that card.  In this way it’s like the US Armed Forces, which are grueling and don’t pay well but people tend to leave them with many more options than they started with.

I’m unconvinced this is a winning strategy.  Operational turnover is expensive, and bad working conditions decrease people’s productivity even when they’re well compensated.  But it does at least explain why it hasn’t collapsed already.

In Defense Of The Sunk Cost Fallacy

Dutch disease is the economic concept that if a country is too rich in one thing, especially a natural resource, every other sector of the economy will rot because all available money and talent will flow towards that sector.  Moreover, that sector dominates the exchange rate, making all other exports uncompetitive.*  It comes up in foreign development a lot because charitable aid can cause Dutch disease: by paying what the funders would consider a “fair wage”, charities position themselves as by far the best employers in the area.  The best and brightest African citizens end up chauffeuring foreigners rather than starting their own businesses, which keeps the society dependent on outside help.  Nothing good comes from having poverty as your chief export.

I posit that a similar process takes place in corporations.  Once they are making too much money off a few major things (Windows, Office, AdWords, SUVs), even an exceptionally profitable project in a small market is too small to notice.  Add in the risk of reputation damage and the fact that all projects have a certain amount of overhead regardless of size, and it makes perfect sense for large companies to discard projects a start up would kill for (RIP Reader).**

That’s a fine policy in moderation, but there are problems with applying it too early.  Namely, you never know what something is going to grow into.  Google search originally arose as a way to calculate impact for academic papers. The market for SUVs (and for that matter, cars) was 0 until someone created it.  If you insist on only going after projects that directly address an existing large market, the best you’ll ever be is a fast follower.***

Simultaneously, going from zero to an enormous, productive project is really, really hard (see: Fire Phone, Google+, Facebook’s not-an-operating-system).  Even if you have an end goal in mind, it often makes sense to start small and iterate.  Little Bets covers this in great detail.  And if you don’t have a signed card from G-d confirming your end goal is correct, progressing in small iterative steps gives you more information and more room to pivot.

More than one keynote at EA Global talked about the importance of picking the most important thing, and of being willing to switch if you find something better.  That’s obviously great in some cases, but I worry that this hyperfocusing will cause the same problems for us that it does at large companies: a lack of room to surprise ourselves.  For example, take the post I did on interpretive labor.  I was really proud of that post.  I worked hard on it.  I had visions of it helping many people in their relationships.  But if you’d asked at the time, I would have predicted that the Most Effective use of my time was learning programming skills to increase my wage or increase my value in direct work, and that that post was an indulgence.  It never in my wildest dreams occurred to me it would be read by someone in a far better position than me to do something about existential risk and be useful to them in connecting two key groups that weren’t currently talking to each other, but apparently it did.  I’m not saying that I definitely saved us from papercliptopia, but it is technically possible that that post (along with millions of other flaps of butterfly wings) will make the marginal difference.  And I would never have even known it did so except the person in question reached out to me at EA Global.****

Intervention effectiveness may vary by several orders of magnitude, but if the confidence intervals are just as big it pays to add a little wiggle to your selection.  Moreover, constant project churn has its own cost: it’s better to finish the third-best thing than to have two half-finished attempts at different best things.  And you never know what a third-best project will teach you that will help an upcoming best project: most new technological innovations come from combining things from two different spheres (source), so hyperfocus will eventually cripple you.
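The confidence-interval point can be illustrated with a toy simulation (all numbers here are made up for illustration, not taken from any real intervention data): three options whose true effects span an order of magnitude, measured with noise. When the noise is small, picking the highest estimate reliably finds the true best option; when the noise is on the same scale as the effects, it often doesn’t.

```python
import random

def pick_best_by_estimate(true_effects, noise_sd, trials=10_000, seed=0):
    """Fraction of trials in which the option with the highest noisy
    estimate is also the option with the highest true effect."""
    rng = random.Random(seed)
    best_true = max(range(len(true_effects)), key=lambda i: true_effects[i])
    hits = 0
    for _ in range(trials):
        estimates = [e + rng.gauss(0, noise_sd) for e in true_effects]
        best_est = max(range(len(estimates)), key=lambda i: estimates[i])
        hits += best_est == best_true
    return hits / trials

# Effects spanning an order of magnitude...
effects = [1, 3, 10]
# ...with tight error bars: the estimate nearly always picks the true best.
print(pick_best_by_estimate(effects, noise_sd=0.1))
# ...with error bars on the same scale as the effects: it misses often.
print(pick_best_by_estimate(effects, noise_sd=10))
```

When your ranking of projects is this unstable, some diversification (and some weight on comparative advantage) stops being an indulgence and starts being hedging.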

In light of all that, I think we need to stop being quite so hard on the sunk cost fallacy.  No, you should not throw good money after bad, but constantly re-evaluating your choices is costly and (jujitsu flip) will not always be the most efficient use of your resources.  In the absence of a signed piece of paper from G-d, biasing some of your effort towards things you enjoy and have comparative advantage in may in fact be the optimal strategy.

Using your own efficiency against you

My hesitation is that I don’t know how far you can take this before it stops being effective altruism and starts being “feel smug and virtuous about doing whatever it is you already wanted to do”- a thing we’re already accused of doing.  Could someone please solve this and report back?  Thanks.

* The term comes from the Dutch economic crash following the discovery of natural gas in The Netherlands.  Current thought is that was not actually Dutch disease, but that renaming the phenomenon after some third world country currently being devastated by it would be mean.

**Simultaneously, developers have become worse predictors of the market in general.  It used to be that nerds were the early adopters, and if they loved something everyone would be using it in a year (e.g. Gmail, smartphones).  As technology and particularly mobile advance, this is no longer true.  Nerds aren’t the powerusers for tablets because we need laptops, but the tablet poweruser is a powerful and predictive market.  Companies now force devs to experience the world like users (Facebook’s order to use Android) or just outright tell them what to do (Google+).  This makes their ideas inherently less valuable than they were.  I don’t blame companies for shifting to a more user-driven decision-making process, but it does make things less fun.

***Which, to be fair, is Microsoft’s actual strategy.

****It’s also possible it accomplished nothing, or made things worse.  But the ceiling of effectiveness is higher than I ever imagined, and the uncertainty only makes my point stronger.

An evolutionary explanation for homosexuality and homophobia

I’m just going to remind everyone that the fact that something is evolutionary selected for does not make it moral or right.  Explanations are not justifications, the naturalistic fallacy is a thing.  I’m exploring this in the spirit of “knowing why an immoral behavior exists makes it easier to extinguish”.  To be super clear: homosexuality is fine, homophobia and gender policing are wrong.  Hat tip to Dario Amodei for the original idea, although I’ve expanded it somewhat.

It’s obvious why homosexuality is an evolutionary puzzle: a drive for non-reproductive sex shouldn’t get very far.  But it’s equally weird that shaming of male homosexuality has been strongly selected for, culturally if not genetically.  Why would you shame someone for not competing with you for a thing you really want?  The correct response is to bring them a drink and skip off to that thing that’s so important to you.  I think I may have an answer to both questions.

Let’s talk about the side-blotched lizard.  Male side-blotched lizards come in three types.  Orange-throated males are high in testosterone and maintain a large territory with multiple females.  They will chase off any competitor they find.  Yellow-throated males dodge this by hanging around the outskirts and looking female; when the orange-throat moves to a different part of the territory they rush in and mate as quickly as possible.  Blue-throated males bond with a single female and guard against her mating with a yellow-throated male (whom they will correctly identify as male, because they have more time to devote to the project), but may have their female stolen by an orange-throated male.  So the side-blotched lizards are in an enormous game of rock-paper-scissors where orange beats blue beats yellow beats orange.

Side-blotched lizards are not the only animal with this mating system.  Males of Paracerceis sculpta (a marine isopod) come in three forms: alpha males that control a harem, beta males that imitate females well enough to stay in the harem (and mate when the alpha isn’t looking), and gamma males, whose strategy can best be described as “be really small and run really fast”.  A very small number of male ruffs also have a female-imitation strategy.

Genuine female imitation was probably not a viable reproductive strategy in the evolutionarily relevant time period, because most of the people you knew had known you for years.  But if you could convince the high-status males that you were not a reproductive threat, they might leave you alone long enough to sneak in a few matings.  Not as many as the high-status men get, but more than you could otherwise get while low status.  One way to convince them you’re not a reproductive threat is to convince them you’re sexually interested in men, and the easiest way to do that is to be exclusively interested in sex with men.

Preferring sex that can never produce babies does not sound like a winning offspring-production strategy at first glance, but evolution is weird.  One possibility is that homosexuality is an overshoot of the strategy, but if that were true you’d expect male bisexuality to be much more common than homosexuality, and the reverse is true (currently, in America).  Alternately, evolution could be counting on societal pressure leading gay men to have sex with women occasionally.  You know, like what happens now. Presumably not as much (reproductive) sex as straight men, but some.  The whole concept of a sexual and romantic orientation is kind of new, maybe the biological drives that lead to identifying as gay now led to preferring men but occasionally having sex with women in the past.

If this strategy persisted, evolution would develop countermeasures like, say, aggressive shaming of homosexual attraction and gender non-conforming behavior by men.  You know, like what happens now.  Hell, we have pick up artist culture and r/theredpill describing the problem in these exact terms.*

Some additional evidence: a man’s chance of being gay increases dramatically with the number of male pregnancies his mother carried (doubling as you go from 0 to 3 older brothers).**  There are a lot of possible explanations of this, but one would be to convince older brothers you’re not a threat/avoid putting too many familial eggs in one strategy basket.  The high cost of testosterone on the mother’s body probably also plays a role, but these are not mutually exclusive, it could be a lemons/lemonade situation.

One implication of this would be that homosexuality and gender-non-conforming behavior should be punished less as monogamy takes root, because high status men can’t do better than the single most fertile woman.  In practice, the highest status males can still mate with multiple women, but they lose the social ability to warn other men off them, making them more vulnerable to a sneaker strategy.  Additionally monogamy usually comes up as property and paternity rights become more important, which makes men more paranoid about potential threats to fidelity.  But I’ll acknowledge this is equivocal at best.

This is very far from proven and I can only think of very limited tests for it, but it is way more satisfying than the “cultures afraid to rock the boat” explanation.

*Setting aside that that piece is misandrist as hell and doesn’t seem to distinguish women from furniture, I’m annoyed by his math.  To make it work, either the world is completely overrun with gamma and sigma males, or the average woman has 2-3x as much sex as the average man.  Please get back to me when you’ve normalized (in the mathematical sense) your sexism.

**This is a reminder to stop using “genetic” when you mean “biologically determined” or “unchangeable”.

Filtering Labor

This is either a subset of interpretive labor or a closely related concept: Filtering Labor.  Suppose one person is generating information, and another person needs a small subset of it, or needs the information in aggregate but not specific pieces.  Who does the labor to filter it down?

Let’s talk about this in a work context.  Recently I was on a thread with four other people.  Everyone needed to see the original few e-mails, and everyone needed to know the final decision.  But in between those two were 5 or 10 e-mails nailing down specifics between me and one other person.  The others needed to know the decisions we made, but reading the back-and-forth was of no value to them.  Nonetheless, they stayed on the thread.  This wouldn’t be such a big deal if they only checked their e-mail once a day, because they could skim through the thread, but that’s not how most people check e-mail, and I know it’s not how these particular people were handling this thread, because there were other messages in the same thread that required and received a near-instant response from them.

We could have saved them effort by taking them off the thread, and re-adding them when they were needed, with a summary of the decision.  But that requires looking at every message and thinking “who needs to see this?” What if a message is mostly unrelated to them but not entirely?  How do I know if a decision is finalized enough to be worth summarizing it to them?  It didn’t apply in this particular case, but my general experience at work is that the best moment to send out the summary- when everything has been more or less settled- does not draw attention to itself.  You just go two days without having to make a decision.  Not to mention that knowing what is relevant to others requires information about them- who filters that?

This only gets worse as companies grow.  My job is clearly terrified there will be something somewhere in the company that would be useful to me and I won’t know about it.  One solution would be to make things easy to find when I wanted to look for them (a pull-based system).  You can do some of this with good archiving and search tools, but to make it really work it requires effort from the information producers or some sort of archivist.  Things like tagging, summaries, updating the wiki.  Information producers rarely want to put in this effort (in part because of a justified belief it’s just going to change again next week.  But by the time that’s identifiably not true, the relevant information has faded from memory).  You can attempt to force them but it hurts morale and it really is going to be obsolete in a week.

So my job, and I believe a lot of other large software companies, uses a push system.  I’m on dozens of email lists giving me a stream of updates on what people are doing, sometimes very far away in the org.  I tried to make a list to convey to you all the lists I am on, but it is impossible.  Making the collected output of these lists useful often requires a lot of interpretive labor (e.g. translating a changelist into what a tool actually does and how it might be relevant to me).  That takes time, and the farther away from me something in the org is, the more time it takes.  At this point there is no way I could thoughtfully process all of my mail and get anything else done.  It looks like information is being spread more widely, but the signal-to-noise ratio is so low I’m learning less.

The open office is an attempt to do the same with in person interactions- if people won’t seek out others to give them information they need (in part because they don’t know who needs it or who has it), make it impossible to not overhear.  We know what I think about that.

Some of this comes from a failure to adapt to circumstances.  When you start a company, everything anyone does is relevant to you and you will always know about it without any effort on anyone’s part.  As you add people, everything is still pretty relevant to them, but it takes more effort to find out about it.  Start-ups start using stand-ups or mailing lists.  The bigger they get, the more effort goes into sharing information.  For a while this doesn’t cost a lot in processing.  People have a certain amount of slack in their day (compiling, between meetings) and everything is close enough to what they work on that it’s easy to interpret.  But people’s projects grow more and more distant, and eventually you run out of slack.  After that, every additional piece of information you give them comes at the cost of them producing actual work.

Which doesn’t mean you should stop: maybe it saves more work than it creates.  But I wish companies recognized the effort this requires and started thinking more strategically about what is truly useful.  There are already specialists who do parts of this under various names (project/program manager, technical writer, manager, tech lead); if this were made explicit I think we could save people a lot of effort.