The Boring Part of Bell Labs

It took me a long time to realize that Bell Labs was cool. You see, my dad worked at Bell Labs, and he has not done a single cool thing in his life except create me and bring a telescope to my third grade class. Nothing he was involved with could ever be cool, especially after the standard set by his grandfather who is allegedly on a patent for the television. 

It turns out I was partially right. The Bell Labs everyone talks about is the research division at Murray Hill. They’re the ones who invented the transistor and the solar cell. My dad was in the applied division at Holmdel, where he did things like design slide rules so salesmen could estimate costs.

[Fun fact: the old Holmdel site was used for the office scenes in Severance]

But as I’ve gotten older I’ve gained an appreciation for the mundane, grinding work that supports moonshots, and Holmdel is the perfect example of doing so at scale. So I sat down with my dad to learn about what he did for Bell Labs and how the applied division operated. 

I expect the most interesting bit of this for other people is Bell Labs’ One Year On Campus program, in which they paid new-grad employees to earn a master’s degree on the topic of Bell’s choosing. I would have loved to do a full post on OYOC, but it’s barely mentioned online and my only sources are 3 participants with the same degree. If you were a manager who administered OYOC  or at least used it for a degree in something besides Operations Research, I’d love to talk to you (elizabeth@northseaanalytics.com).

And now, the interview

Elizabeth: How did you get started at Bell Labs?

Craig: In 1970 I was about to graduate from Brown with a ScB in Applied Math. I had planned to go straight to graduate school, and been accepted, but I thought I might as well interview with Bell Labs when they came to campus.  That was when I first heard of the One Year On Campus program, where Bell Labs would send you to school on roughly 60% salary and pay for your tuition and books, to get a Masters degree.  Essentially, you got a generous fellowship and didn’t have to work as a teaching or research assistant, so it was a great deal.  I  got to go to Cornell where I already wanted to go, in the major I wanted, operations research.   

Over 130 people signed up for the One Year On Campus program in 1970.  That was considerably more than Bell Labs had planned on; there was a mild recession and so more people accepted than they had planned.  They didn’t retract any job offers, but the next year’s One Year On Campus class was much smaller, so I was lucky.

The last stage in applying was taking a physical at the local phone operating company.    Besides the usual checks, you had to look into a device that had two lighted eyepieces.  I looked in and recognized that I was seeing a cage in my left eye and a lion in my right eye.  But I also figured out this was a binocular vision test and I was supposed to combine the two images and see the lion in the cage, so that’s what I said I saw.   It’s unclear if Bell Labs cared about this, or this was the standard phone company test for someone who might be driving a phone truck and needed to judge distances.  Next time I went to an eye doctor, I asked about this; after some tests, he said I had functional but non-standard depth perception.    

What did you do for Bell Labs?

I worked in the Private Branch Exchange area.  Large and medium size companies would have small telephone exchanges that sat in their buildings.  It would be cheaper for them because most of the calls would be within the building rather than to the outside world, avoiding sending the call to a regular exchange a number of miles away and then back to the building.  You could also have special services, like direct lines to other company locations, which you could rent and were cheaper than long distance charges.  The companies supplied their own phone operators; the operating companies were responsible for training, as well as for the equipment and its maintenance and upgrades.

Most calls went through automatically, e.g. if you knew the number.  But some would need an operator.  Naturally, the companies didn’t want to hire more operators than they needed to.  The operating company would do load measurements; the number of calls that needed an operator followed a Poisson distribution (so the inter-arrival times were exponential), and the length of time an operator took to service a call followed an exponential distribution.

In theory, one could use queuing theory to get an analytical answer to how many operators you needed to provide reasonable service.  However, there was some feeling that real phone traffic had rare but lengthy tasks (the company’s president wanted the operator to call around a number of shops to find his wife so he could make plans for dinner (this is 1970)) that would be added on top of the regular Poisson/exponential traffic, and these special calls might significantly degrade overall operator service.

I turned this into my Master’s thesis. Using a simulation package called GPSS (General Purpose Simulation System, which I was pleasantly surprised to find still exists) I ran simulations for a number of phone lines and added different numbers of rare phone calls that called for considerable amounts of operator time. What we found was that occasional high-demand tasks did not disrupt the system and did not need to be planned for. 
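GPSS still exists, but the experiment is easy to approximate in a few lines of Python. This is a minimal sketch with invented rates and function names, not the original study: Poisson arrivals, exponential service, and an optional trickle of rare marathon calls.

```python
import random

def simulate_mmc(n_operators, arrival_rate, service_rate, n_calls,
                 long_call_prob=0.0, long_call_mean=30.0, seed=0):
    """Simulate operators serving calls (Poisson arrivals, exponential
    service), optionally mixing in rare, very long calls.
    Returns the mean wait before an operator picks up."""
    rng = random.Random(seed)
    free_at = [0.0] * n_operators   # time each operator next becomes free
    t = 0.0
    total_wait = 0.0
    for _ in range(n_calls):
        t += rng.expovariate(arrival_rate)          # next arrival
        if rng.random() < long_call_prob:           # rare marathon task
            service = rng.expovariate(1.0 / long_call_mean)
        else:
            service = rng.expovariate(service_rate)
        i = min(range(n_operators), key=free_at.__getitem__)
        start = max(t, free_at[i])                  # wait if all busy
        total_wait += start - t
        free_at[i] = start + service
    return total_wait / n_calls
```

Comparing `simulate_mmc(3, 1.0, 0.5, 5000)` against the same run with `long_call_prob=0.01` reproduces the thesis question: does a sprinkle of long calls meaningfully move the average wait?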

Some projects I worked on:

  • A slide rule for salesmen to estimate prices on site, instead of making clients wait until the salesman could talk to engineering.
  • Inventory control for parts for PBX.
  • I worked with a Ph. D.  mathematician on a complicated call processing problem.  I ran a computer simulation and he expanded the standard queuing theory models to cover some of the complexities of reality.  We compared results and they were reasonably similar. 

Say more about inventory control?

The newest models of PBX’s had circuit packs (an early version of circuit boards), so that if a unit failed, the technician could run diagnostics and just replace the defective circuit pack.  The problem was technicians didn’t want to get caught without a needed circuit pack, so each created their own off-the-books safety stocks of circuit packs.  The operating company hated this because the circuit packs were expensive, driving up inventory costs, and further, because circuit packs were being constantly updated, many off-the-book circuit packs were thrown out without ever having been used.  One operating  company proceeded with inspections, which one technician countered by moving his personal stock to his home garage. 

This was a classical inventory control problem, a subcategory of queuing theory.  I collected data on usage of circuit packs and time to restock, and came up with stocking levels and reorder points.  Happily, the usual assumptions worked out well.  After a while, the technicians were convinced they were unlikely to get caught short, the company was happy that they had to buy fewer circuit packs and they were accessible to all the technicians.  Everyone was happier.
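The standard calculation behind those stocking levels is easy to sketch. This is the textbook reorder-point formula under the usual assumptions (roughly normal demand, fixed restock lead time); the numbers and names are hypothetical, not the Bell Labs data.

```python
import statistics
from math import sqrt

def reorder_point(daily_usage, lead_time_days, z=1.65):
    """Reorder point = expected demand over the restock lead time plus a
    safety stock sized so stockouts are rare (z=1.65 is roughly a 95%
    service level).  `daily_usage` is observed circuit packs used per day."""
    mean = statistics.mean(daily_usage)
    sd = statistics.stdev(daily_usage)
    lead_demand = mean * lead_time_days
    safety = z * sd * sqrt(lead_time_days)   # demand variability over lead time
    return lead_demand + safety
```

The safety-stock term is what convinces the technicians: it makes the chance of getting caught short explicit and small, without anyone hoarding packs in a garage.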

And the slide rule?

While I was in graduate school, I became interested in piecewise linear regression (aka segmented regression), where at one or more points the regression line changes slope, jumps (changing its intercept) or both. 

I considered working on PLR for my Ph.D. dissertation.  On my summer job back, I saw a great fit with a project.  Salespeople would go out to prospective PBX customers but be unable to give them a quick and dirty cost estimate for a given number of phone lines, traffic load, etc.  It was complicated, because there were discontinuities:  for example, you could cover n phones with one control unit, so costs would go up linearly with each additional phone.  But if you had n +1, you had to have two control units and there would be a noticeable jump in costs.  There were a number of wrinkles like this. So the salesperson would have to go back to the office, have someone make detailed calculations and go back out to the customer, which would probably lead to more iterations  once they saw the cost.

But this could be handled by piecewise regression.  The difficult problem in piecewise regression is figuring out where the regression line changes, but I knew where they were:  for the above example, the jump point was at n+1.  I did a number of piecewise regressions that captured the important costs  and put it on a ….

I bet you thought I was going to say a programmable calculator.  Nope, this was 1975, and the first HP had only come out the year before.  I had never seen one and wouldn’t own one for two more years.  I’m not sure I could have gotten the formulae in the hundred line limit anyway. The idea of buying one for each salesperson and teaching them how to use them never came up.  I designed a cardboard slide rule for them.   

I found piecewise regression useful in my work.  But that summer I recognized that research in the area had sputtered out after a couple of years, so I picked another topic for my dissertation. 
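Because the breakpoints were known in advance, the piecewise fit reduces to ordinary least squares on an augmented design matrix: a line in the number of phones plus a step for the extra control unit. A minimal sketch with a made-up cost structure (one jump; real PBX pricing had many more wrinkles):

```python
def piecewise_cost_fit(x, y, jump_at):
    """Least-squares fit of cost = a + b*x + c*I(x > jump_at): a straight
    line plus a known jump.  Solves the 3x3 normal equations directly,
    so no libraries are needed."""
    rows = [(1.0, float(xi), 1.0 if xi > jump_at else 0.0) for xi in x]
    # Build X^T X and X^T y
    A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
    # Gaussian elimination with partial pivoting
    for i in range(3):
        p = max(range(i, 3), key=lambda k: abs(A[k][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for k in range(i + 1, 3):
            f = A[k][i] / A[i][i]
            for j in range(i, 3):
                A[k][j] -= f * A[i][j]
            b[k] -= f * b[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef  # [intercept, per-phone slope, jump size]
```

With the fitted coefficients in hand, a salesperson's estimate is one multiply and one add per segment, which is exactly the kind of formula you can print on a cardboard slide rule.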

Elizabeth: What did you do after your master’s degree?

Craig: I worked at Bell Labs for a year, and then started my PhD in statistics at UW-Madison. There were no statistics classes worth taking over the summer, so I spent all four summers working at Bell Labs. 

How was Bell Labs organized, at a high level?

I interviewed for a job at Murray Hill, where the research oriented Bell Labs work was done.  The job involved anti-ballistic missile defense and no secret details were gone into.  I didn’t get that job. I worked in a more applied location at Holmdel.

I did go to one statistical conference at Murray Hill.  The head of the statistical area there was John Tukey, a very prominent statistician.  He simultaneously worked at Bell Labs and was head of the Princeton Statistics Department.  You don’t see much of that any more.

There was a separate building in the area that did research in radio telescopes.  This was an outgrowth of research that investigated some odd radio interference with communication, that turned out to be astronomical.  I was never in that building.

However, Bell Labs didn’t skimp on the facilities at Holmdel.  It had an excellent library with everything I needed in the way of statistics and applied math.  The computer facilities were also first-rate, comparable to that at the University of Wisconsin where I got my PhD.  

Holmdel worked with the operating phone companies who provided actual phone service in their own geographical areas.  People at Holmdel would sometimes take exchange jobs at operating companies to better understand their problems.  One of these came back from a stint in New York City and gave a talk where he showed a slide of people pushing a refrigerator out of an upper story window of a derelict building while a New York Tel crew was working underneath them.

A more common problem was that by the time I was there, technicians were not as bright as they had been.  A bright person who could not afford to go to college or maybe even finish high school in 1940 and had become a technician in an operating phone company had kids who could go to college, become engineers and be about to start work at Bell Labs in 1970.

How was management structured?

My recollection was that a first-line manager had a mean of 8 or 9 people.  This varied over time as projects waxed and waned.  I have a feeling that new first-line managers had fewer people, but I don’t ever recall hearing that officially.

There was a different attitude about people (or maybe it was a different time).  My boss at Bell Labs had told them he was resigning to work at another company.  An executive vice president came to visit him, said he had a bright future at Bell Labs and suggested he’d wait a while.  He decided to and was soon promoted.  

Feedback was mostly given by yearly performance appraisals, as it was at all the companies I worked for.  Occasionally you’d get informal feedback, usually when some client was unhappy. 

Bell Labs was big on degrees.  To be a Member of Technical Staff you had to have electrical engineering classes and a Masters degree, or be on a path to get one.  They were willing to pay for this.

What were the hours like?

For me it was a regular 9 to 5 job.  I assume managers worked longer and more irregular hours but no one ever asked me to work late (I would have done if they’d asked).  The only time I can remember not showing up at 9 was when I got in really late from a business trip the night before.  

There was a story I heard while I was at Bell Labs, though I have no idea if it’s true.  Walter Shewhart worked at Bell Labs.  In 1956, he was 65 and, under the law at the time, had to retire.  The story goes that they let him keep an office, in case he wanted to stop by.  Instead, he kept showing up at 9 and working until 5 every weekday.  Eventually, they took the office away from him. 

Who decided what you worked on? What was the process for that?

To be honest, I didn’t think much about that.  I got my jobs from my first line manager.  I kept the same one for my entire time at Bell Labs; I don’t think that was common.  You may have noticed that I did a lot of  work in the queuing and inventory area; my Master’s thesis was in that area and I’m guessing that my boss saw I was good at it and steered those kind of jobs to me.  With my last task, getting a rough pricing approximation for PBX’s, I was handed the job, saw that piecewise regression was a great solution, talked to my boss about it and he let me do it that way. I don’t know how jobs got steered to him.

What was the distribution of backgrounds at Bell Labs? 

I went to Cornell for One-Year-On-Campus. Of the 5 people in my cohort, I was from Brown, one was from Cornell, one from the University of Connecticut, and one from Indiana.  So I’d say they were from at least good schools, so that the Labs would be sure they would be able to compete at Cornell.

Not everybody at the Labs came from elite schools.  As the most junior member of the unit, who knew less about phones than anybody else, I didn’t enquire about their resumes.  I was berated by one of the members of my group for using meters for a wavelength in a meeting instead of “American units”.  He had a second part-time job as a stock-car racer, but while I was there he decided to quit after his car was broken in half in a crash.  Another man in my group had a part-time job as a photographer.  When I came back from Cornell for my Christmas check-in at Bell Labs, he was dead in a train “accident”.  Local consensus was that he had been working on a divorce case and got pushed in front of a train.

My impression was that Bell Labs didn’t poach much from other technical companies.  They wanted to hire people out of school and mold them their own way.  

Since the One-Year-On-Campus people were sharp and had Master’s degrees, a lot of them got poached by other companies.  Of the five people I kept track of, all five had left the Labs within five or six years.  

As to age distribution, there were a considerable number of young people, from which there was considerable shrinkage year to year.  After five to 10 years, people had settled in and there was less attrition.  They were good jobs.  Although not as numerous (I think because the Labs had expanded), there were a number of people who had been there for decades.  

How independent was your work?

I did work with that Ph. D. mathematician on a queuing problem.

I can’t believe that they let me work on my own project in the two months between when I arrived at Holmdel and when I left for Cornell.  But I don’t remember what it was.

In retrospect, I am surprised that the Labs let me interview possible hires by 1972 when I’d only been around for a year (not counting the year at school).  Admittedly, I was supposed to assess their technical competence.  I think I did a good job; I recommended not hiring someone who they hired anyway. I later worked with her and my judgement was correct.  She was gone within a year.

Tell me more about One Year on Campus

Bell Labs would pay tuition and expenses for a master’s degree along with 60% of your salary, as long as you graduated in the first year. There also was an option to stay on full salary and go to grad school part time, but I didn’t do that. You could theoretically do this for a PhD but it was much harder to get into; I only knew one person in my division who did so.

One qualification was that you had to have a year of electrical engineering (or spend a year at the Labs before going).  Fortunately, although my degree was in Applied Math, I had taken some electrical engineering as an elective. Partially out of interest, and partially because my grandfather had worked his way up to being an electrical engineer  [note from Elizabeth: this was the grandfather on the television patent]. 

An important caveat was that you needed to get your degree completed in a year or you would be fired.  I never heard of this actually happening, but I was motivated.

Bell Labs would also pay for you to take classes part-time and give you a half-day off; I went to the stat department at Columbia and took my first design of experiments class there and fell in love.  

What was so loveable about experimental design?

My love affair with design of experiments started in my first class on the subject.  The professor told a story of attending a conference luncheon at Berkeley, where he was seated between two Nobel laureates in physics.  One of them politely asked him what he did, and the professor gave him this weighing design example.

You have a balance beam scale, where you put what you want to weigh on one side and put weights on the other side until it balances.  You’re in charge of adding two chemicals C1 and C2 to a mixture.  They come in packages with nominal weights, but the supplier is sloppy and the precise ratio of them is important to the quality of the mixture.  However, this is a production line and you only have time to make two measurements per mixture.  What two measurements do you do? 

The obvious answer is you weigh C1 and then you weigh C2.

But this is wrong.  A better solution is to put C1 and C2 in the same pan and get their combined weight WC.  Then you put C1 in one pan and C2 in the other, and you get the difference between them, WD. Then if you add WC + WD, the weight of C2 cancels out and you get an estimate of 2*C1.  If you subtract WD from WC, the weight of C1 cancels out and you get an estimate of 2*C2.  Notice that you’ve used both weighings to determine both weights.  If you run through the math, you get the same precision as if you weighed both chemicals twice separately, which is twice the work.

The physicist got excited. The other Nobel laureate asked what they were talking about, and when he was told, said: “Why would anyone want to measure something more precisely?”.  That is the common reaction to the design of experiments.
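The sum-and-difference trick is easy to verify numerically. A small simulation, with arbitrary true weights and measurement error (all numbers invented for illustration), comparing the naive approach (weigh C1 alone) with the design estimate (WC + WD)/2:

```python
import random

def weighing_designs(c1=7.0, c2=11.0, sd=1.0, n=5000, seed=0):
    """Compare two ways of estimating C1 from balance readings that each
    carry independent N(0, sd) error:
      naive  - weigh C1 by itself
      design - weigh C1+C2 (same pan), then C1-C2 (opposite pans),
               and estimate C1 as (WC + WD) / 2, so C2 cancels.
    Returns (variance of naive estimates, variance of design estimates)."""
    rng = random.Random(seed)
    naive, designed = [], []
    for _ in range(n):
        naive.append(c1 + rng.gauss(0, sd))     # weigh C1 alone
        wc = (c1 + c2) + rng.gauss(0, sd)       # both chemicals, one pan
        wd = (c1 - c2) + rng.gauss(0, sd)       # one chemical per pan
        designed.append((wc + wd) / 2)          # C2 cancels out
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return variance(naive), variance(designed)
```

The design estimate's variance comes out near half the naive estimate's: each of the two clever weighings does double duty, so both chemicals get the precision of two separate weighings at the cost of one each.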

But even more important than efficiency, designed experiments can inform about causality, which is very difficult to determine from collected observational data.  Suppose there is an impurity that varies in a feedstock fed into a chemical reactor, and it lowers the quality of the output, but we don’t know this.  The impurities also cause bubbles, which annoy the operator, so he or she increases the pressure to make them go away.  If we look at a plot of quality vs. pressure, it will look like quality decreases as pressure increases, when actually the two have nothing to do with each other; correlation does not imply causality.  But if we run a designed experiment, where we tell the operator which runs are to be run at high pressure and which at low pressure, we have a good shot at figuring out that pressure has nothing to do with quality (the greater the number of experiments, the better the odds).  If we then talk with the operator and they explain why they increase pressure in production, we have a lead on what the real problem might be.   

What if you don’t care about efficiency or causality?  The following example is borrowed from Box, Hunter and Hunter, “Statistics for Experimenters”, first edition, pp. 424ff.  A large chemical company in Great Britain makes fertilizer.  Because the cost of fertilizer is low, transportation costs are a noticeable part of it, so when demand goes up, instead of adding onto a current plant, they build a standard plant at a blank spot on the map.  Unfortunately, the new plant’s filtration time nearly doubles, meaning this multi-million pound plant (currency, not weight) is operating at half capacity.  Management goes nuts.  A very contentious meeting comes up with 7 possible causes.  Box comes up with a first-round plan of 8 experiments.  This is the absolute minimum, since we need to estimate the mean and the seven effects.  That matters because we’re not doing experiments in a flask, but in a factory.  Changing one factor involves putting a recycle device in and out of line, etc., so it won’t be quick.

What do you do?  The usual reaction is a one-at-a-time experiment: start from a base level (the settings of the last plant previously built) and then change one factor at a time.  This is generally a bad idea and, as we shall see, a particularly bad one in this case.  First, echoing the weighing design, it uses only two points out of the eight to determine the importance of each factor.  And suppose we botch the base level?

Instead, Box did a fractional factorial design with eight design points, coding each factor level as 1 if it’s at the setting of the correctly working plant and -1 if it’s at the new plant’s setting.

Then if we add the four runs where, say, factor 1 is at 1 and subtract the four where it is -1, we estimate 8 times the distance between that factor’s two settings relative to the neutral 0 point, with all other factors averaging out to their neutral settings.  Similarly for all the factors.  Box used the fractional factorial design that included the run with all old plant settings.  Its filter time was close to the old plant’s, which reassures us we have captured the important factors.  If we do the same for all factors, the magnitudes of factors 1, 3, and 5 are considerably larger than the other four.  However, chemistry is interaction, and each of the large-magnitude factors is confounded with the two-factor interaction of the other two large-magnitude factors.  Fortunately, we can run an additional eight runs to estimate triplets of two-factor interactions, because we didn’t blow our whole budget doing one-at-a-time experiments.  It turns out that the triplet including the factor 1*factor 5 interaction has a large magnitude, which could reasonably explain why the original factor 3 estimate appeared large.  However, management wanted to be sure and ran a 17th experiment with factor 1 (water) and factor 5 (rate of addition of caustic soda) at the old plant settings while the other five were left at the new settings.  The filtering time returned to the desirable level.  Notice that if we had done a one-at-a-time experiment we would never have been able to detect the important water*(rate of addition of caustic soda) interaction.  There is a feeling that a lot of tail-chasing in industrial improvement is due to interactions not being recognized.
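The design itself is tiny to construct: three columns form a full factorial and the other four are their products, which is exactly where the confounding with two-factor interactions comes from. A sketch with generic ±1 coding and hypothetical responses (not Box’s actual data):

```python
from itertools import product

def fractional_factorial_8x7():
    """A 2^(7-4) design: 8 runs covering 7 two-level factors.  Columns
    A, B, C form a full factorial; the other four are their products
    (D=AB, E=AC, F=BC, G=ABC), which is what aliases each main effect
    with two-factor interactions of other columns."""
    runs = []
    for a, b, c in product((-1, 1), repeat=3):
        runs.append((a, b, c, a * b, a * c, b * c, a * b * c))
    return runs

def effect(runs, responses, j):
    """Contrast for factor j: average response at +1 minus average at -1."""
    hi = [y for r, y in zip(runs, responses) if r[j] == 1]
    lo = [y for r, y in zip(runs, responses) if r[j] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)
```

Because every column has four +1s and four -1s and the columns are mutually orthogonal, all eight runs contribute to every effect estimate, unlike one-at-a-time experimentation where only two runs inform each factor.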

Another element of experimental design is blocking, where we suspect there are factors we care about, like four different types of fertilizer, and others we don’t care about (say hilltop land, mid-hill land, and bottom land) but which may affect the yield.  The solution is to block so that each of the four fertilizers gets an equal share of the three land types.  This eliminates the noise due to land type.

Finally, within the limits of blocking, we wish to randomly assign treatments to factor settings.  This provides protection against factors that we don’t realize make a difference.  There was a large phase 2 cancer vaccine study which showed that the treatment group lived 13 months longer than the control group.  The only problem was that who got the treatment was not decided at random but by doctors.  It went on to a much more expensive phase 3 trial, which found no statistically significant difference between the vaccine and the control groups.  What happened?  It is surmised that since doctors can make a good guess at your prognosis and desperately want another tool to fight cancer, they unconsciously steered the less sick patients to the vaccine group.
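Blocking and randomization together give the randomized complete block design: every treatment appears once per block, in a random order within the block. A minimal sketch using the fertilizer/land-type example (names are illustrative):

```python
import random

def randomized_blocks(treatments, blocks, seed=0):
    """Randomized complete block design: every treatment appears exactly
    once in every block, in a random order within the block.  Blocks
    absorb nuisance variation (land type); randomization within blocks
    guards against factors nobody thought of."""
    rng = random.Random(seed)
    layout = {}
    for block in blocks:
        order = list(treatments)
        rng.shuffle(order)           # random assignment within the block
        layout[block] = order
    return layout
```

For example, `randomized_blocks(["F1", "F2", "F3", "F4"], ["hilltop", "mid-hill", "bottom"])` gives each land type all four fertilizers, so land-type differences cancel out of the fertilizer comparison.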

Thanks to my Patreon Patrons for supporting this post, and R. Craig Van Nostrand for his insider knowledge

The Biochemical Beauty of Retatrutide: How GLP-1s Actually Work

On some level, calories in, calories out has to be true. But these variables are not independent. Bodies respond to exercise by getting hungry and to a calorie deficit by getting tired. Even absent that, bodies know how much food they want, and if you don’t give it to them they will tell you at increasing volume until you give in (not all bodies, of course, but quiet stomachs aren’t the target market for GLP-1s). A new breed of drugs, GLP-1 agonists, offers a way out of the latter trap by telling your body you’ve eaten even when you haven’t, but leaves many people fatigued. The newest GLP-1, retatrutide, may escape that trap too, with a mechanism so beautiful I almost don’t believe it. 

How Jelly Beans Become Fat

Unfortunately in order to understand the beauty of retatrutide, you’re going to have to learn the basics of energy metabolism in the body. I’m sorry.

You have probably heard of mitochondria, the powerhouse of the cell. What that means is mitochondria take in sugar, protein, or (components of) fat and turn them into ATP, which is then used to power chemical reactions in your cells. This is the equivalent of a power plant that uses nuclear, coal, and hydro to charge small batteries and mail them to your house. 

Sugar is a desirable fuel because it can produce ATP very quickly, and if push comes to shove, can do so without oxygen. Your body works to maintain a particular concentration of sugar in your bloodstream, so your cells can take in more when they need it. This is especially important for your brain, which runs mostly on sugar.

Fat is your body’s long-term energy storage. If you eat fat and don’t immediately burn it, it will be directly added to adipose (fat) cells. Dietary sugar you don’t use will be converted into fat and stored in the same cells. This is beneficial because fat is very space-efficient, but the process of converting sugar to fat is calorie-inefficient: you lose 10-25% of the energy in sugar in the conversion to fat (this means that how many calories you get from a jelly bean will depend on whether you burn the sugar immediately or store it as fat and burn it later).
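That conversion toll is simple arithmetic. A toy calculation (the function name and 15% loss rate are my own, picked from the middle of the 10-25% range above):

```python
def effective_calories(label_kcal, stored_as_fat, conversion_loss=0.15):
    """Calories actually banked from dietary sugar: the full label value
    if burned directly, minus a conversion toll (assumed 15%, from the
    quoted 10-25% range) if it is first converted to fat and burned later."""
    if stored_as_fat:
        return label_kcal * (1 - conversion_loss)
    return label_kcal
```

So a 10-calorie jelly bean burned immediately is worth 10 calories, but one stored as fat and burned later nets roughly 8.5.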

Under the right circumstances (weasel worded because I’ve yet to find a satisfactory explanation of when this happens), fat will break down into fatty acids, which circulate like sugar until a cell draws them in to create ATP.  Breakdown of fatty acids can also produce ketone bodies, which are what powers your brain during fasts. Breaking down fat to produce ATP takes minutes.

So sugar works fast, but takes up a lot of storage space, is prone to undesirable reactions with nearby proteins, and is osmotically unstable*. Fat is space efficient and non-reactive but breaks down slowly, and frequent conversion is costly. Glycogen is somewhere in the middle- it’s a store of energy that breaks down into sugar faster than fat can produce fatty acids, but is more stable than raw sugar. If you’ve ever eaten a carb heavy meal and seen the scale go up way more than could be accounted for by calorie count, that’s the glycogen. Each gram of sugar is stored with 3-4 grams of water, so it can cause major swings in weight without touching fat cells. 

There are glycogen stores in your muscles for their personal use during intense activity. There’s also a large chunk in your liver, which is used to regulate blood sugar across your entire body. If your blood sugar is low, your liver will break down glycogen into glucose and release it into the blood, where whatever organ that needs it can grab it. If you’re familiar with “the wall” in endurance exercise: that’s your body running out of glycogen. Your second wind is fat being released in sufficient quantities. In general your body would rather use glycogen than fat, because glycogen loses almost no energy in the conversion from and to sugar and fat loses a lot. 
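The water math explains those scale swings. A toy calculation, assuming the mid-range figure of ~3.5 g of water per gram of stored sugar and that all the carbs land in glycogen (they won’t, so treat it as an upper bound; the function name is my own):

```python
def glycogen_scale_swing(carb_grams, water_per_gram=3.5):
    """Rough scale impact (in grams) of topping up glycogen: each gram of
    stored sugar drags along roughly 3-4 g of water (3.5 assumed here)."""
    return carb_grams * (1 + water_per_gram)
```

A 300 g carb load could move the scale by about 1.35 kg without a gram of fat gained, which is why post-pasta weigh-ins lie.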

The Power Plant Managers

Managing these stores of energy is a complicated web of hormones. 

When your blood sugar is high, the hormone insulin is released to trigger certain cells, including muscle and fat cells, to take said sugar from the blood and use it. Type 1 diabetics don’t produce enough insulin. Type 2 diabetics produce insulin but their cells respond to it more weakly (known as insulin resistance). 

When your blood sugar is low, the hormone glucagon triggers your liver to break down glycogen to release sugar, raising your blood sugar, suppressing insulin, and giving you more energy. It more weakly triggers the breakdown of fat. Glucagon also triggers the release of the hormone cortisol.

Cortisol gets a bad name as the stress hormone, but the only thing worse than stress with high cortisol is stress with low cortisol. If you stumble upon a tiger in the jungle, you want cortisol. It also increases blood sugar and energy levels (to provide energy to escape the tiger). Energy for running sounds good for weight loss, but empirically cortisol promotes fat storage and muscle breakdown, and increases insulin resistance. This may be why raising glucagon alone does not cause weight loss. 

Glucagon-like peptide 1, or GLP-1 is one of the hormones that tells your brain “I’m eating food”. It is triggered by the presence of calories in the gut, bile in the stomach, or even the knowledge that you’re about to eat. It suppresses appetite and glucagon (preventing the breakdown of glycogen), increases insulin (and thus sugar uptake into cells), and slows down the movement of food through your intestines. 

The hormone glucose-dependent insulinotropic polypeptide (abbreviated GIP for historical reasons) is also triggered by calories in the gut. It encourages insulin sensitivity (meaning a given molecule of insulin will cause a cell to uptake more sugar) and fat storage. 

I used the phrase “hormone X does Y” a lot, but it’s kind of misleading. Hormones are more or less arbitrary molecules, their shape doesn’t mean anything, just like the word “toast” doesn’t inherently mean “bread exposed to high, dry heat” or “raise a glass to”. Hormones’ meaning comes from the receptors they activate.  Hormone receptors are molecules that straddle the membranes of cells. 

The “outside” end of a receptor waits to be activated by a hormone molecule. When it does, the “inside” end of the receptor does… something. That something can depend on the activating molecule, the cell type, conditions inside the cell, phase of the moon…


Hormone action is often described with a “lock and key” model. The problem is that locks and keys are precision instruments.


….whereas hormones and receptors are blobs. Some blobs don’t fit together at all, some fit as well as a key in a lock (strong affinity), and some fit together like puzzle pieces that don’t quite interlock, but are close enough (weak affinity). Receptors are much less specific than locks, and don’t have a 1:1 relationship with hormones even when they are named after one. E.g. the GLP-1 receptor (GLP1R) has strong affinity for GLP-1 but also weak affinity for glucagon, because their blob shapes are close enough to each other. 

[glucagon (red) and glucagon receptor (blue)]

I bring this up because some drugs referred to as GLP-1s hit more than one receptor, which matters for understanding how they work. 

How do GLP-1 Medications Work?

So GLP-1 the peptide hormone works by activating receptors that tell your brain you’ve eaten and don’t need more food. How do GLP-1s, the class of medication, work?

Semaglutide (aka Ozempic and Wegovy) activates only the GLP-1 receptor. We’ve covered why that helps, but it often comes at the cost of fatigue.

Tirzepatide (Zepbound) activates GLP1R and GIPR, and no one is sure why the latter helps but it seems to.

Retatrutide (no retail name) activates GLP1R, GIPR, and the glucagon receptor. The glucagon receptors encourage the breakdown of glycogen and fat, which your body will use as energy. You might hope this would cause weight loss on its own, but in practice it doesn’t. Even if it did, permanently elevated glucagon would raise blood sugar to undesirable levels for undesirable periods of time. But GLP-1 is great at managing blood sugar. If only there were a way to keep it from making you tired… 

So glucagon’s and GLP-1’s positive effects (burn more energy/eat less food) are synergistic, but their negative effects (elevated blood sugar/fatigue) cancel out. It’s elegant at a level rarely seen in biochemistry.

Just taking these hormones won’t help much, because all three have a half-life of less than 10 minutes. You’d need to be on a 24/7 IV infusion for them to maintain levels long enough to be useful. 
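To see why a sub-10-minute half-life is a dealbreaker, here’s a back-of-the-envelope sketch (assuming simple first-order decay, which is an idealization; the function name is mine):

```python
# Fraction of a hormone dose still circulating after first-order decay.
# Assumes a 10-minute half-life, per the text above.
def fraction_remaining(minutes, half_life_min=10):
    return 0.5 ** (minutes / half_life_min)

print(fraction_remaining(10))  # 0.5 -> half gone in ten minutes
print(fraction_remaining(60))  # 0.015625 -> ~1.6% left after an hour
```

An hour after a dose, essentially none of the hormone is left, which is why you’d need the continuous infusion.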

This is where big pharma pulls its weight. All three medications feature minor edits to the chemical structure of the hormone that don’t affect its work as a key but do slow your body’s ability to digest it (which they can get away with because key fit is fuzzy, not precise). Tirzepatide and retatrutide are further modified to fit the extra receptor(s) they target. This is easier because all three of GLP-1, glucagon, and GIP are peptide hormones, meaning they’re made up of amino acids, and it’s easy to substitute one amino acid for another (well, easy compared to modifying other kinds of hormones).

Then chemists attach that altered peptide hormone molecule to a chain of fatty acids. The acids are slowly picked off over days: when the last one is removed the remaining molecule briefly fits into its locks/receptors, before being digested (but not as quickly as if it were the unmodified hormone). Because this removal happens at a slow, predictable pace, it spaces out the availability of the molecule, getting you the same effect as an IV drip with a lower dosage each day. And thus fat is the instrument of its own undoing. 

The Side Effects 

Reminder that I am some lady who reads stuff on the internet and writes it down and the fact that I couldn’t find a better version of this should make everyone involved feel bad. That said.

The common side effects of all three GLP-1s are digestive distress and injection site reactions. The former makes sense- GLP-1s screw with your digestion, so you’d expect the side effects to show up there. The latter might be a combination of the volume and pH level of the injection.

Fatigue is another common side effect (it’s reported at only 7%, compared to 3% for placebo, but anecdotally seems worse). It’s unclear if this stems directly from the medication or the body’s normal protective reaction to a calorie deficit. There’s no data yet, but retatrutide’s 3rd mechanism of action (imitating glucagon) may counteract fatigue or even give people more energy (trip report from one such lucky person).

There’s no data on this either, but if GLP-1s cause fatigue due to calorie deficit, I wonder what they do to the immune system, which is among the first of your systems to suffer from energetic budget cuts. 

People who lose weight often lose muscle as well as fat. This might happen at slightly higher rates for people losing weight through GLP-1s, or they might just be selected for not exercising much. Weight lifting and protein consumption help (note that this may require planning to fit into your new, lower calorie budget).

In rodent studies, semaglutide and tirzepatide were both found to increase the rate of thyroid tumors. There’s no data on retatrutide yet but no reason to expect it to be different. It’s even less clear than usual if this rat finding will transfer to humans, because the rodents have several factors making them much more susceptible to thyroid cancer. If you have a family history of thyroid cancer or something called MEN2, GLP-1s probably aren’t for you. 

Another concern is drug interactions. GLP-1s will obviously interact with other drugs that affect blood sugar, so be cautious around that. So far as we know they don’t affect the production of liver enzymes that digest medications, which precludes a major source of drug interactions. However, they will lead other oral medications to sit in your gut longer, which might increase their effective dose. And any drug that’s highly sensitive to body weight, like warfarin or lithium, will need monitoring as you lose weight. 

Conclusion

I don’t like the idea of everyone being on a compound to mitigate a problem that modernity caused, forever, any more than anyone else does. But I’m unexpectedly impressed with the elegance of this solution (in a way I’m not for antidepressants, which have great empirical results but give us only the vaguest idea of how they work). It’s not clear this should make me feel better, but it does. 

*Osmotically unstable means that there’s a semi-permeable barrier and for some reason water will cross the barrier more in one direction than the other. In this case, the inside and outside of the cell “want” to have the same sugar concentration, but if a cell is stuffed full of sugar that will attract too much water and the cell will burst. If the cell has less sugar than the environment, it will leak and potentially dehydrate to death; this is one reason bacteria struggle to live on honey.

I take antidepressants. You’re welcome

It’s amazing how much smarter everyone else gets when I take antidepressants. 

It makes sense that the drugs work on other people, because there’s nothing in me to fix. I am a perfect and wise arbiter of not only my own behavior but everyone else’s, which is a heavy burden because some of y’all are terrible at life. You date the wrong people. You take several seconds longer than necessary to order at the bagel place. And you continue to have terrible opinions even after I explain the right one to you. But only when I’m depressed. When I’m not, everyone gets better at merging from two lanes to one.

This effect is not limited by the laws of causality or time. Before I restarted Wellbutrin, my partner showed me this song. 

My immediate reaction was, “This is fine, but what if it were sung in the style of Johnny Cash singing Hurt?” My partner recorded that version on GarageBand for my birthday, and I loved it, which means I was capable of enjoying things and thus not suffering from distorted cognition, just in case you were wondering. But I restarted Wellbutrin just to see what would happen, and suddenly the original recording had become the kind of song you can’t describe because you sound too sappy, so all you can say is it brings you to tears. My partner couldn’t tell the difference, so my theory is that because I was the one who took the drug to make the song better, only I remember the old, mediocre version. 

The effect extends to physical objects. As previously mentioned, I spent the first half of 2024 laid up with mold poisoning. For about half of that time, I knew the problem was under the bed* (I’d recently bought a storage bed that was completely surrounded with drawers). In that time I bought dozens of air filters, spent $4k on getting my entire house scrubbed and set up a ventilation system under my bed. I did everything except replace the mattress. This was due to the mattress being too heavy for any human being to lift and everyone was too busy to help me. 

And even if I had found mold in the mattress, what could I have done about it? The websites for mattresses and bed frames are labyrinths that require feats of strength and skill to defeat. Nor was it possible to get the mattress out of my apartment, so it would just continue leaking the spores in a slightly different place. 

Then I restarted a second antidepressant (Abilify, 2mg). The mattress was still too heavy for me, but suddenly light enough that it wasn’t an unspeakable imposition to ask my partner to flip it against the wall. And at the exact same time, the manufacturer’s website simplified itself so I could not only order a copy of my current mattress, but ask for a discount because my old one was so new (it worked! They give half off if you waive return rights). Less than a week after I started Abilify I was sleeping on a new mattress on a new frame, the old mattress and frame were at the dump, and my mold symptoms began to ease. 

Given how well they work, taking antidepressants seems downright prosocial, so why are some people reluctant to try them? Sometimes they’re concerned that antidepressants work too well and turn everyone into a happy zombie. This is based on the fallacy that antidepressants work on you rather than on your environment. The fact that everyone is suddenly better at lane merges doesn’t make me incapable of being sad about medical setbacks. If having your world-is-easy meter set two steps higher seems like a bad thing, consider that that may itself be a symptom of your world-is-easy meter being set too low. 

Pills aren’t the only way to make the outside world bend to your will, of course. Diet and exercise have a great reputation in this arena, matched only by the complete lack of effect of wishing for good diet and exercise. Luckily, one of the ways antidepressants change the environment is making weights lighter, lung capacity higher, and food take fewer steps to prepare. So if you’ve spent a few years knowing you should improve your diet and exercise routine without managing to get over the hump to actually doing it, maybe it’s time to give the everything-is-easier pill a try. Especially because the benefits extend not only to you, but to everyone on the highway with you. 

Caveats

I’ve had an unusually good experience with antidepressants and psychiatrists. The first two antidepressants I tried worked very well for me (the second one is only for when things get really bad). I didn’t have to cycle through psychiatrists much either.

The most popular antidepressants are SSRIs, which I’ve never taken. My understanding is they are less likely (and slower) to work and have a worse side-effect profile than Wellbutrin, whose dominant side effects are weight loss and increased libido (but also insomnia and a slight increase in seizure risk). I’ve heard of good reasons not to start with Wellbutrin, like a family history of seizures or being underweight, but (I AM AN INTERNET WEIRDO NOT A DOCTOR) it seems underutilized to me. 

Acknowledgements

Thanks to Patrick LaVictoire and the Roots of Progress Blog Building Fellowship for comments and suggestions. Thanks to CoFoundation and my Patreon patrons for financial support. 

*Medicine being what it is, I’m still only 95% sure that this was the cause, and was even less certain before I got the mattress off the frame and examined it.

FTX, Golden Geese, and The Widow’s Mite

From 2019 to 2022, the cryptocurrency exchange FTX stole 8-10 billion dollars from customers. In summer 2022, FTX’s charitable arm gave me two grants totaling $33,000. By the time the theft was revealed in November 2022, I’d spent all but 20% of it. 

The remaining money isn’t mine, and I don’t want it. I would like to give this money to the FTX estate, but they are not returning my calls. If this post fails to get their attention in the next month, I will donate the money to Feeding America. In the meantime, I’d like to talk about why I made this decision, and why I think other people should do likewise.

Background

FTX was a crypto-and-derivatives exchange that billed itself as “the above board, by the book legitimate, exchange.” Several of its executives were members of Effective Altruism, a movement based on ruthlessly prioritizing donations to do the greatest good for the greatest number. EA’s presence in FTX was strong enough that FTX booked an ad campaign around CEO Sam Bankman-Fried’s intent to spend his wealth on good causes.

He is now serving a 25 year prison sentence for fraud. 

Starting in 2021, FTX began to firehose money. $93m to political causes (some of which was probably buying favorable regulation) and $190m to explicitly philanthropic ones. The donations included domains like AI safety (e.g. $5m for Ought, which aims to make humans wiser using AI), biosecurity ($10m for HelixNano, which develops vaccines and other anti-infection tech), and Effective Altruism (e.g. $15m for Longview Philanthropy, itself a fundraising org). Donations also probably went to animal welfare organizations and global development, but these were made by a different branch of the FTX Foundation and there’s no clear documentation.

Some of that philanthropic money was distributed through a regrantor program that authorized agents to make grants on their own initiative, with some but not much oversight from FTX. It funded things like the memoirs of someone who worked on Operation Warp Speed, many independent AI safety researchers, and in my case, a project to find or train new research analysts who could do work similar to mine, or assistants to help them.

After the bankruptcy, I waited to be contacted by the FTX estate asking for their money back. Under US bankruptcy law I was outside the 90-day lookback period in which clawbacks are easy, but within the two-year period where they were possible. I did receive one email claiming to be from the estate, but it had a couple of oddities suggesting “scam” so I ignored it, and never received any follow-up. In November 2024, the statute of limitations for clawbacks passed, and with it, any legal claim anyone else had on the money. 

For the next few months, I did nothing. Everyone I knew was keeping their money and seemed very confident that this was fine. And all things being equal, I like money as much as the next person.

But I couldn’t stop picturing myself trying to justify the choice to keep the money to a stranger, and those imaginary conversations left me feeling gross. None of my reasons seemed very good. When I finally entertained the world where I returned the money voluntarily, I felt hypothetically proud. So I decided to give it back, or at least away.

“Avoid tummy aches” isn’t exactly a moral principle. Avoiding my tummy aches is especially not a principle I can ask others to follow. But in the course of arguing with my friends who didn’t think I should give away the money, and trying to figure out where I should donate, I eventually figured out the rules I was implicitly following, and what I would ask of other people.

Protecting the Golden Goose

The modern, high-trust, free-market economy is a goose laying golden eggs. It has moved the subsistence poverty rate from 100% to 47%, and lowered urban infant mortality from 50% to less than 1% in developed countries. It brings a king’s ransom in embroidery floss directly to my house for a fraction of an hour’s wage.

This is $30, and I’m disgusted because it’s not pre-loaded onto bobbins.

The most important thing in the world after extinction-level threats is to keep this goose happy and productive, because if the goose stops laying, then we don’t have any gold to spend on things like vaccine cold chains or cellular data networks. Every theft gives a little bit of poison to the goose. A norm that you can steal if you have a good reason will kill the goose, and then we will be back to the nasty, brutish, and short lifespans of our agricultural ancestors. This is true even if your reason is really, really, really good. 

Given how damaging theft is to the goose, it’s important to keep the incentives to steal as low as possible. One obvious way is to not let thieves keep the money. For most thieves this is enough, because having the money themselves was the whole point. But in this weird case where the theft was at least partially to fund philanthropic projects, it’s important to not spend the money on those projects. 

That ship has mostly sailed, of course. Even if I gave back/away all the money FTX gave me, I still did a bunch of work they wanted. Giving the money away doesn’t erase the work, and would violate another principle in the care and feeding of golden geese, that people get to keep what they earned.

But I only spent 80% of the money (some on my own salary, some on researchers I was trialing). The other 20% wasn’t earned by me or anyone else. I could earn it now, with a new project- my old project had wrapped up but my regrantor had given permission to redirect to anything reasonable before the bankruptcy. I have a long backlog of projects; it wouldn’t be hard to just do one and conceptually bill it to FTX. But given that I had FTX’s (indirect) blessing on arbitrary projects, doing any of them would reward the theft. 

(If it seems crazy to you that FTX executives genuinely believed they stole for the greater good and all that altruism wasn’t just a PR stunt, keep in mind that they believed the world was at risk of total annihilation in 5-10 years due to artificial superintelligence. I also know some people who knew some people, and they’re really sure that at least some of the executives were sincere at least at the start.)

Having decided I can’t keep it, where should the money go? Obviously the best place would be the victims of FTX’s theft. The only way I know of to give to them is via the FTX estate. The estate has an email address for people who wish to voluntarily return money but I guess they’re not checking it, because I’ve been emailing them for months with no reply.

Some of you may argue that the FTX users are already going to be made whole, in fact 120% of whole, because FTX’s investments did well and the estate will be able to pay all the claims. This is technically true, but it uses the valuation of crypto assets at the time of bankruptcy. Since then bitcoin has 6xed; 20% doesn’t begin to cover the loss. It might not even cover inflation + compound interest.

My next choice was to donate to investigative journalism in crypto- if I couldn’t redress crypto theft, maybe I could prevent it. Unfortunately, there doesn’t seem to be anyone who’s good at this, still working, and willing to accept donations higher than $5/month. You might think, “Surely he would accept larger amounts if offered, even if he doesn’t list it on his website,” but no, my friend tried to give him money months ago, and he refused. And there was no second choice.

If I can’t give it to the victims or prevent future victims, my third choice was wherever it would do the most good, in an area FTX Foundation didn’t value (so as to not reward theft). This is hard because while FTX funded a lot of stupid things, they also covered a long list of good things. I, too, hate AI risk and deadly pandemics. After sampling a bunch of ideas, I eventually settled on Feeding America. Feeding hungry people may not have the highest possible impact, but it’s hard to argue that it’s not helping. FTX never hinted at caring much about American poverty. I don’t know anyone involved with Feeding America, so there’s no possibility of self-enrichment. And 10 years ago, I heard a great podcast on how they used free-market principles to make their food distribution vastly more efficient. 

I don’t feel amazing about this choice. I don’t think amazing was an option once the FTX estate declined my offer. But I feel good enough about this, and there’s no good way to optimize when you’re specifically trying to thwart optimizers like the FTX executives. All I can do is make sure I’m living up to my principles and make some people a little less hungry. 

The Widow’s Mite Isn’t Worth Very Much

Do I think other people are obligated to give away their FTX grants? The answer is closer to yes than no, but not without complications. 

I think people should give back/away FTX money they hadn’t already spent or earned. But I take a liberal definition of spending and earning. If I hadn’t paid my taxes on those grants at the time of the bankruptcy, I’d still consider the taxes already spent, because accepting the money committed me to paying them (although FTX told me I didn’t need to pay taxes on the grant. This is the clearest sign I received that Something Was Wrong with the FTX Foundation, and to my shame, I ignored it as standard Effective Altruism messiness). I know someone who quit her job and moved countries on the assumption that the FTX money would always be there, and while I think that was a stupid decision even absent the fraud, the cost of moving back home and reestablishing her life counts as “already spent.” She might have to give back something, but accepting the grant and assuming its good faith shouldn’t come with a bill.

But it was not random happenstance that it was easy for me to drop my FTX-funded project on a dime when scary rumors started. I work as a freelancer, sometimes balancing many projects from many clients and sometimes having none but my own (which necessitates a healthy cushion of savings). So when the word came down that FTX was at risk and the responsible thing to do was to stop spending their grants, it was just another Tuesday for me to stop their project.  To the extent that giving up this money is morally praiseworthy, I think the praise should accrue to the decisions that made giving up the money easy, rather than the actual donation.

This is not a popular belief. Most people’s view on charity is summed up by the biblical story of the widow’s mite, in which a poor widow giving up a small amount at great personal sacrifice is considered more virtuous than large donations from rich men. I can see the ways that’s appealing when trying to judge someone’s character. But even if we’re going to grade people on difficulty, we have to look further back than the last step. If the rich men worked hard and made sacrifices to achieve their wealth, and they chose to invest that money in helping others rather than yachts, that should be recognized (although of course this doesn’t justify hurting others to get that money; I’m talking only about personal sacrifice).

So I think people in my exact position have a strong obligation to give away leftover money from FTX. I think people in the related position of technically having unspent money but finding it too great a hardship to give back shouldn’t ruin their lives by doing so. But I encourage them to think about what they would need to change in their life to make ethical behavior easier.

Thanks

Thanks to the Progress Studies Blog Building Initiative and everyone who argued with me for feedback on this post.

UPDATE 2025-10-17: donation sent

Ketamine part 2: What do in vitro studies tell us about safety?

Ketamine is an anesthetic with growing popularity as an antidepressant. As an antidepressant, it’s quite impressive. When it works, it’s often within hours- a huge improvement over giving a suicidal person a pill that might work 6 weeks from now. And it has a decent success rate in people who have been failed by several other antidepressants and are thus most in need of a new option. 

The initial antidepressant protocols for ketamine called for a handful of uses over a few weeks. Scott Alexander judged this safe, and I’m going to take that as a given for this post. However, new protocols are popping up for indefinite, daily use. Lots of medications are safe or worth it for eight uses, but harmful and dangerous when used chronically. Are these new continuous use protocols safe?

That’s a complicated question that will require several blog posts to cover. For this post, I focused on what academic studies of test tube neurons could tell us about cognitive damage, because I know which organ my readers care about the most. 

Reasons to doubt my results

First off, my credentials are a BA in a different part of biology and a willingness to publish. In any sane world, I would not be a competitive source of analysis. 

My conclusions are based on 6 papers studying neurons in test tubes, 1.5 of which disagreed with the others. In vitro studies have a number of limitations. At best, they test the effect on one type of cell, in a bed of the same type of cell. If any part of the effect of ketamine routes through other cells (e.g. it might hypothetically activate immune cells that damage the focal cell type), in vitro studies will miss that. They will also miss positive interactions- e.g., it looks like ketamine does stress cells out somewhat, in ways your body has protocols to handle. If this effect is dominant, in vitro would make ketamine look more harmful than it is in practice.

And of course, there’s no way to translate in vitro effects directly into real world effects. If ketamine costs you 0.5% of your brain cells, what would that do to you? What would 5% do? In vitro studies don’t tell us that.

All studies involved a single exposure of cells to ketamine (lasting up to 72 hours). If there are problems that come from repeated use rather than total consumption, in vitro can’t show it. However, I consider it far more likely that repeated exposures are much safer than receiving the same dose all at once.

Lastly, in vitro spares the ketamine from any processing done by the liver, which means you are testing only* ketamine and not its byproducts (with the exception of one paper, which also looked at hydroxynorketamine and found positive results).

[*Processing in neurons might not be literally zero, but is small enough to treat it as such for our purposes]

Tl;dr

I will describe each of the papers in detail, but let me give you the highlights.

Of 6 papers (start of paper names in parentheses):

  • 2 found neutral to positive effects at doses higher than you will ever take
    • Highest dose with no ill effect:
      • 2000uM for 24 hours (Ketamine induces…)
      • 500uM for 24 hours (but 100uM had positive effects) (Ketamine causes…)
  • 2 found neutral to positive effects at dose you might achieve, but either didn’t test higher doses or found negative effects
    • 1 uM for 72 hours (Ketamine increases…) but 0.5uM for 24 hours was better
    • 1 uM for 24 h (nothing higher tested) ((2r,6r)…)
  • 1 found no cellular mortality from ketamine on its own, but that it mitigated the effect of certain inflammatory molecules that would otherwise kill cells (ketamine prevents…)
  • 1 found that ketamine killed cells at a dose you might take.
    • Lowest dose tested: 0.39uM for 24 hours (The effects of ketamine…). This is far less than where other papers found positive effects, and I’m not sure why. 
    • This paper goes out of its way to associate ketamine with date rape in the first paragraph, which is both irrelevant and unfounded, so maybe the authors have a negative bias.
    • On the other hand, it calls cell mortality of up to 30% “relatively low toxic outcomes”, which sounds excessively positive. 

For calibration: I previously estimated that a 100mg troche leads to a median peak concentration of less than 0.46uM, and a total dose of less than 2.8uM*h (to calculate total dose for each of the above papers, just multiply the concentration given by the time given. 1 uM for 24 hours = 24uM*h).  
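The multiplication described above is trivial, but spelling it out makes the units concrete (the helper name is mine, not from any paper):

```python
# Total exposure under a flat in-vitro curve = concentration x time, in uM*h.
def total_exposure_uM_h(concentration_uM, hours):
    return concentration_uM * hours

print(total_exposure_uM_h(1, 24))     # 24 uM*h, the example above
print(total_exposure_uM_h(2000, 24))  # 48000 uM*h: the highest no-effect dose
```

Compare either figure to the under-2.8 uM*h I estimated for a 100mg troche.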

By positive effects, I mean one of two things: ketamine-exposed cells grew bigger and grew more connections with other cells; or, ketamine-exposed test tubes had more cells than their controls, which could mean cells multiplied, or that ketamine slowed cell death (one paper examined these separately, and the answer seemed to be “both”). This appears to happen because ketamine stimulates multiple cell-growth triggers, such as upregulating BDNF and the mTOR pathway. 

The primary negative effect is cell death, which stems from an increase in reactive oxygen species (you know how you’re supposed to eat blueberries because they contain antioxidants? ROS is the problem antioxidants solve). Unclear if there are other pathways to damage.  

It doesn’t surprise me at all that two contradictory effects are in play at the same time. In fact, it wouldn’t surprise me if the positive effects were triggered by the negative- it’s not unusual in biology for small amounts of stress to trigger coping mechanisms that do more good than the stress did harm. For example, exercise also produces reactive oxygen species. Approximately everything real that helps with “detoxification” does so by annoying the liver into producing helpful enzymes.

The most actionable thing this post could do is give you the “safe” dose of ketamine. Unfortunately, it’s hard to translate the research to a practical dose. There are several factors that might matter for the damage done by a drug:

  • The peak concentration (generally measured in uM = µmol/L, or in ng/ml, which is uM*238 for ketamine)
  • The total exposure of cells to the drug, measured in concentration*time
  • How much your body is able to repair the damage from a drug. This will generally be higher when the total exposure is divvied up into many small exposures instead of one large one.
    • Alas, every study in existence uses a single large dose. But it probably doesn’t matter, because neurons in isolation are missing some antioxidative tools and the ability to replenish what they do have. So I’m left to guess how much safer divided doses are, relative to the single large dose used in every paper. 
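The uM-to-ng/ml conversion in the first bullet can be sketched as follows (the factor of 238 is just ketamine’s molar mass in g/mol, rounded; the function names are mine):

```python
KETAMINE_MW = 238  # g/mol, rounded; this is where the factor of 238 comes from

def uM_to_ng_per_ml(uM):
    # 1 umol/L x (g/mol) = ug/L = ng/ml
    return uM * KETAMINE_MW

def ng_per_ml_to_uM(ng_ml):
    return ng_ml / KETAMINE_MW

print(uM_to_ng_per_ml(0.46))  # ~109 ng/ml, my median peak estimate for a 100mg troche
```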

In an ideal world, the test tube neurons would be given ketamine in a way that mimics both the peak concentration and the total exposure. No one is even trying to make this happen. For most ways you’d receive ketamine for depression, you get an immediate burst followed by a slow decline. But for in vitro work, they dump a whole bunch of ketamine in the tube and let it sit there. Without the liver to break the ketamine down, the area under the curve is basically a rectangle. This makes it absolutely impossible for a test tube to replicate both the peak concentration and the total dose a human would be exposed to. The curves just look different. 

[Statistical analysis by MS Paint]
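The rectangle-vs-decline point can also be made numerically. A sketch with illustrative numbers (the 3-hour half-life is an assumption for demonstration, not a measured value):

```python
import math

# Compare total exposure (area under the curve) for two dosing shapes with
# the SAME peak concentration: a test-tube "rectangle" vs first-order decay.
peak_uM = 1.0
hours = 24
half_life_h = 3  # illustrative in-vivo half-life, not from any paper

# In vitro: concentration sits at the peak the whole time.
auc_rectangle = peak_uM * hours  # 24 uM*h

# In vivo: C(t) = peak * 0.5^(t / half_life); AUC ~= peak * half_life / ln(2)
auc_decay = peak_uM * half_life_h / math.log(2)  # ~4.3 uM*h

print(auc_rectangle, round(auc_decay, 1))  # same peak, ~5.5x less total exposure
```

Matching the peak forces the test tube’s total dose to be several times higher, and matching the total dose forces its peak lower; you can’t have both.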

In practice, and excluding the one negative paper, an antidepressant dose is rarely if ever going to reach the peak dose given to the test tube. You’re also not going to have anywhere near that long an exposure. My SUPER ASS PULLED, COMPLETELY UNCREDENTIALED, IN-VITRO ONLY, NOT MEDICAL ADVICE guess is that unless you are unlucky or weigh very little, a 100mg troche of ketamine will not do more oxidative damage than your body is able to repair in a few days (this excludes the risk of damage from byproducts).

There’s a separate concern about tolerance and acclimation, which none of these papers looked at, that I can’t speak to. 

Papers

Warning: most of these papers used bad statistics and they should feel bad. Generally, when you have multiple treatment arms in a study that receive the same substance at different doses, you do a dose-response analysis- in essence, looking for a function that takes the dose as an input and outputs an effect. This lets you detect trends and see significance earlier. 

What these papers did instead was treat each dose as if it were a separate substance and evaluate their effects relative to the control individually. This faces a higher burden to reach statistical significance. 

Because of this, if there’s an obvious trend in results, and the difference from control is significant at higher levels, I report each individual result as if it’s statistically significant. I haven’t redone the math and don’t know for sure what the statistical significance of the effect is. 

Ketamine Increases Proliferation of Human iPSC-Derived Neuronal Progenitor Cells via Insulin-Like Growth Factor 2 and Independent of the NMDA Receptor

Dose: 0.5 uM to 10 uM

Exposure time: 24-72 hours

Interval between end of exposure and collection of final metric: 72 hours

As you can see in the bar chart, ketamine was either neutral or positive. However, less ketamine was more beneficial- there was more growth at the lowest dose of ketamine than the highest (which was indistinguishable from control), and more growth after a 24 hour exposure than a 72 hour exposure. 

To achieve the same peak dosages, you’d need ~100-2000mg sublingual ketamine. To achieve the same total exposure, you’d need 430 doses of 100mg ketamine at the low end (0.5 uM for 24 hours) to 24,000 doses at the high end (10 uM for 72 hours) (assuming linearity of dose and concentration, which is probably false).

The following isn’t relevant, but it is interesting: researchers also applied ketamine to some standard laboratory cells (the famous HeLa line, which is descended from cervical cancer) and found it did not speed up cell proliferation- meaning ketamine isn’t an indiscriminate growth factor, but targets certain kinds of cells, including (some?) neurons.

Ketamine Induces Toxicity in Human Neurons Differentiated from Embryonic Stem Cells via Mitochondrial Apoptosis Pathway

Dose: 20 uM to 4000 uM

Exposure time: 6-24 hours

Interval between end of exposure and collection of final metric: 72 hours

Don’t let the title scare you- the toxicity was induced at truly superhuman doses (24 hours at 100x what they call the anesthetic dose, which is already 3x what another paper considered to be the anesthetic dose).

Calibration: the lowest dose of ketamine (20 uM) given corresponds to 40x my estimate for median peak concentration after a 100mg troche. 

Chart crimes: that X-axis is neither linear nor any other sensible function. 

Chart crimes aside, I’m in love with this graph. By testing a multitude of doses at three different lengths of exposure, it demonstrates 6 things:

  • Total exposure matters- a constant concentration of 300uM showed negative effects at 12 hours but not 6. 
  • After a threshold is reached, the negative effect of total dose is linear or mildly sublinear. 
  • Peak dosage matters separate from total dose. If that weren’t true, doubling the dose would halve the time it took to show toxicity. 
  • Doubling exposure time is roughly equivalent to increasing dosage by 500 uM.
  • At lower doses, longer exposure is correlated with greater cell survival/proliferation. But at higher doses, longer exposure is correlated with lower viability. 
  • The lowest total exposure required to see negative effects is 21,000 uM*h, which would require ~17,500 100mg troches- or one every day for 48 years- before accounting for any repair mechanisms.
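A quick back-of-envelope check on that last bullet, using only the numbers it states:

```python
# Back-of-envelope check of the last bullet, using only the post's own
# numbers: a 21,000 uM*h threshold and ~17,500 100mg troches.
threshold_uMh = 21_000
troches = 17_500

per_troche_uMh = threshold_uMh / troches   # exposure implied per troche
years_at_one_per_day = troches / 365

print(f"implied exposure per troche: {per_troche_uMh:.2f} uM*h")
print(f"one troche per day for {years_at_one_per_day:.0f} years")
# -> 1.20 uM*h per troche; ~48 years, matching the bullet.
```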

The paper spends a great deal of time asking why ketamine is toxic at high doses, focusing on reactive oxygen species (ROS) (the thing that blueberries fight). This suggests that your body’s antioxidant system likely reduces the damage compared to what we see in test tubes. Unfortunately I don’t know how to translate the dosage of Trolox, their antioxidant, to blueberries.

(2R,6R)-Hydroxynorketamine promotes dendrite outgrowth in human inducible pluripotent stem cell-derived neurons through AMPA receptor with timing and exposure compatible with ketamine infusion pharmacokinetics in humans

Dose: 1 uM

Exposure time: 1-6 hours

Interval between end of exposure and collection of final metric: 60 days (!)

Outcome: synaptic growth

Most of the papers I looked at created their neurons from a stem cell line and then briefly aged them. This paper stands out for aging the cells for a full 60 days before exposing them to 1 uM ketamine (they also tried hydroxynorketamine, a byproduct of ketamine metabolism. I’ll be ignoring this, but if you see “HNK” on the graph, that’s what it means).

Here we see that ketamine and its derivative significantly increased the number and length of dendrites (the connections between neurons). It’s possible to have too much of this, but in the moderate amounts shown this should lead to an easier time learning and updating patterns. 

Ketamine Causes Mitochondrial Dysfunction in Human Induced Pluripotent Stem Cell-Derived Neurons

Dose: 20-500 uM

Exposure time: 6-24 hours

Interval between end of exposure and collection of final metric: 0 hours?

This is another paper with a scary title that is not borne out by its graphs. It did find evidence of cellular stress (although this is only unequivocal at higher doses), but cell viability was actually higher at lower doses and unchanged at higher doses- and by “lower dose” I mean “still way more than you’re going to take for depression.”

20 uM = a higher peak than you will ever experience even under anesthesia. 20 uM for 6 hours has the same cumulative exposure as 42 100mg troches. 100 uM for 24 hours has the same cumulative exposure as 142 100mg troches (for the median person).

Caspase 3/7 and ROS luminescence are both measures of oxidative stress (the thing blueberries fight). Cell viability is what it sounds like. There was no statistically significant difference in viability, and eyeballing the graph it looks like viability increases with dosage until at least 100 uM.

Ketamine Prevents Inflammation-Induced Reduction of Human Hippocampal Neurogenesis via Inhibiting the Production of Neurotoxic Metabolites of the Kynurenine Pathway

Dose: 0.4 uM

Exposure time: 72 hours

Interval between end of exposure and collection of final metric: 7 days

This paper exposed cells to neurotoxic cytokines, two forms of ketamine, and two other antidepressants, alone and in combination. The dose of ketamine was 400nM or 0.4 uM, which is about one 100mg sublingual troche.

These graphs are not great. 

DCX is a signal of new cell growth (good), CC3 is a sign of cell death (bad), Map2 is a marker for mature neurons (good). In general, whatever the change is between the control and IL-* (the second entry on the X axis) is bad, and you want treatments to go the other way. And what we see is that the ketamine treatments are about equivalent to the control.

The effects of ketamine on viability, primary DNA damage, and oxidative stress parameters in HepG2 and SH-SY5Y cells (the negative one)

Dose: 0.39-100 uM

Exposure time: 24 hours

Interval between end of exposure and collection of final metric: 0 hours?

(Pink cells are neurons derived from a neurological cancer cell line, blue cells are liver cells derived from a liver cancer cell line. The red boxes correspond to their estimate of the painkilling, anesthetic, and drug abuse levels of concentration. All conditions were exposed for 24 hours. & means P<0.05; #, P<0.01; $, P<0.001; *, P<0.0001)

This paper found a 20% drop in neuron viability for anesthetic doses of ketamine, and 5% for a painkilling dose (and a milder loss to liver cells). This is compared to an untreated control that lost <1% of cells. They describe this result as “low cytotoxicity” for ketamine. I am confused by this and wonder if they had some pressure to come up with a positive result. On the other hand, the paper’s opening paragraph contains an out-of-left-field accusation that ketamine is a common ingredient in date rape pills, which is irrelevant, makes no sense, and is given no passable justification*, which makes me think at least one author thinks poorly of the chemical. So perhaps I’m merely showing my lack of subject matter expertise, and 20% losses in vitro don’t indicate anything worrisome. 

[*They do give a citation, but neither that paper nor the ones it cites offer any reason to believe ketamine is frequently used in sexual assault, just passing mentions. In the anti column we have the facts that ketamine tastes terrible and is poorly absorbed orally, requiring large doses to incapacitate someone. It’s a bad choice to give surreptitiously. Although I wouldn’t doubt that taking ketamine voluntarily makes one more vulnerable.]

Why In Vitro Studies?

Given all their limitations, why did I focus exclusively on in vitro studies? Well, when it comes to studying drug use, you have four choices:

  1. Humans given a controlled dose in a laboratory.
    1. All the studies I found were short-term, often only single use, and measured brain damage with cognitive tests. Combine that with a small sample size and you have a measurement apparatus that can only catch very large problems. It’s nice to know those don’t happen (quickly), but I also care about small problems that accumulate over time.
  2. Humans who took large amounts of a substance for a long time on their own initiative.
    1. This group starts out suffering from selection effects and then compounds the problem by applying their can-do attitude towards a wide variety of recreational pharmaceuticals, making it impossible to untangle what damage came from ketamine vs opiates. In the particular case of ketamine, they also do a lot of ketamine, as much as 1-4 grams per day (crudely equivalent to 11-44 uM*h if they take it nasally, and 2-4x that if injected)
    2. I initially thought that contamination of street drugs with harmful substances would also be a big deal. However, a friend directed me to DrugsData.org, a service that until 2024 accepted samples of street drugs and published their chemical makeup. Ketamine was occasionally used as an adulterant, but substances sold under the name ketamine rarely contained much beyond actual ketamine and its byproducts.
  3. Animal studies. I initially dismissed these after I learned that ketamine was not used in isolation in rats and mice (the subjects of almost every animal paper), only in combination with other drugs. However, while writing this up, I learned that this may be due to a difference in use case, rather than a difference in response to ketamine. But when I looked at the available literature there were 6 papers, every one of which gave the rats at least an order of magnitude more ketamine than a person would ever take for depression.
    1. For the curious, here’s why ketamine usage differs in animals and humans:
      1. Ketamine is cheap, both as a substance and because it doesn’t require an anesthesiologist. In some jurisdictions, it doesn’t even require a doctor. Animal work is more cost-conscious and less outcome-conscious, so it’s tilted towards the cheaper anesthetic. 
      2. New anesthetics for humans aren’t necessarily tested in animals, so veterinarians have fewer options. 
      3. Doctors are very concerned both that patients not hate their experience and that they not get addicted to medications, and ketamine can be enjoyable (although not physically addictive). Veterinarians are secure that even if your cat trips balls and spends the next six months jonesing, she will not have the power to do anything about it.

        1. Pictured: a cat whose dealer won’t return her texts.
      4. Ketamine is rare in that it acts as an anesthetic but not a muscle relaxant. For most surgeries, you want relaxed muscles, so you either combine the ketamine with a muscle relaxant or use another drug entirely. However, there are some patients for whom relaxing muscles is dangerous (generally those with impaired breathing or blood pressure), in which case ketamine is your best option.
      5. Ketamine is unusually well suited for emergency use, because it acts quickly, doesn’t require an anesthesiologist, and can be delivered via shot as well as IV. In those emergencies, you’re not worried about what it can’t do. 
      6. All this adds up to a very different usage profile for ketamine for animals vs. humans.

Conclusion

My goal was specifically to examine chronic, low-dose usage of ketamine. Instead, I followed the streetlight effect to one-off prolonged megadoses. I’m delighted at how safe ketamine looks in vitro, but of course, that’s not even in mice. 

Thanks

Thanks to the Progress Studies Blog Building Initiative and everyone I’ve talked to for the last three months for feedback on this post. Thanks to my Patreon supporters and the CoFoundation Fellowship for their financial support of my work.

This is time consuming work, so if you’d like to support it please check out my Patreon or donate tax-deductibly through Lightcone Infrastructure (mark “for Ketamine research”, minimum donation size $100). 

Church Planting: When Venture Capital Finds Jesus

I’m going to describe a Type Of Guy starting a business, and you’re going to guess the business:

  1. The founder is very young, often under 25. 
  2. He might work alone or with a founding team, but when he tells the story of the founding it will always have him at the center.
  3. He has no credentials for this business. 
  4. This business has a grand vision, which he thinks is the most important thing in the world.
  5. This business lives and dies by its growth metrics. 
  6. 90% of attempts in this business fail, but he would never consider that those odds apply to him.
  7. He funds this business via a mix of small contributors, large networks pooling their funds, and major investors.
  8. Disagreements between founders are one of the largest contributors to failure. 
  9. Funders invest for a mix of truly caring about the business’s goal, and wanting to receive the glamor of the work without the risk.
  10. Starts a podcast advising others even as he’s failing himself.
  11. Would rather start from scratch than reform an existing institution.
  12. Oversight is minimal and exerted mostly through funding.
  13. Generally unconcerned with negative externalities.
  14. Always uses the latest technology to get ahead.
  15. Both funding and the job itself heavily reward charisma and narcissism.

Hint for those outside the Bay Area and Twitter: at this point you’re supposed to guess “tech start-up.”

  1. Marries before he starts his business, and often has young children. 
  2. Growth metrics are the end in and of themselves, not a proxy for money.

Hint for those outside the Bay Area and Twitter: this is obviously not a tech start-up.

This guy is founding an evangelical church, and I find his ecosystem fascinating. First for its stunning similarities to venture-capital-funded tech start-ups, and then for its simplicity and open-heartedness. None of the dynamics in church planting are unique or even particularly rare, but they are unobfuscated, and that makes church planting the equivalent of a large print book for the social dynamics that favor charismatic narcissists. 

My qualifications to speak on church planting are having spent six weeks listening to podcasts by and for church planters, plus a smattering of reading. I expect this is about as informative as listening to venture podcasts is to actual venture capital, which is to say it’s a great way to get a sense of how small players want to be perceived, but so-so at communicating all of what is actually happening. Religion-wise, I was also raised in a mainline Protestant denomination, although I left as a teenager. My qualifications to speak on tech start-ups are living in the Bay Area and being on Twitter.*

[*I’ve also been an employee at two start-ups, have angel investor friends, and some of my favorite clients are founders looking for their next thing. But I assure you, going to parties in the bay is sufficient.]

What is Church Planting?

Evangelical Christians are in a bind: they believe that introducing heathens to Jesus is the most important thing they can possibly do, but are fundamentally opposed to the kind of structure that Catholics and mainline Protestant denominations use to support missionaries. Missionaries have a hierarchy they answer to, and one of the things I’ve come to respect about evangelicals is how little use they have for hierarchies and credentials. 

How do you spread the Word when you can’t order someone to do it? You decentralize. Church planting is a do-ocracy, where young men decide that God has called them to lead a church, and a decentralized network of financiers funds whomever it chooses. This man, and perhaps an assisting team, builds a church from the ground up, answering to no one but his funders.

The Planters

We’ve already covered many traits of a planter, but let me give a few more:

  • If under 30, he spontaneously came to the belief that God called him to plant a church.
    • A much smaller number of 30- and 40-somethings are pushed into church planting by their pastor
  • He does not have a seminary degree but if he was Saved by high school he probably went to a bible college (which may or may not be accredited by a secular assessor). Evangelicals do not have much use for credentials. 
  • He has probably trained as a youth pastor at an existing church, but it’s also possible he was an assistant pastor or is starting cold. 
  • He does not consider himself affiliated with any denomination, although in practice there are common theological strains among the “non-denominationals.” If you ask a church planter for his beliefs he will say “I teach the bible.” 
  • He may hold a traditional job as well as pastoring, especially early in the process.

Why do I say that church planters tend towards charismatic narcissism? 

First, charisma is a bona fide requirement for being a pastor, especially an evangelical pastor who needs to recruit a new flock. This is doubly true for “parachute plants,” in which a planter moves to a new-to-them area and starts a church, knowing no one beyond their support team. Funders would be stupid not to select for the ability to make people like you and support your goals, and if they somehow were that stupid, the uncharismatic ones would lose in the marketplace (another way in which planting culture embodies American virtues: their embrace of creative destruction). Similarly, VCs like founders who are the subject of positive pieces in trade journals, not because they think those articles have any factual content, but because the skills to get those articles written about oneself have other uses. 

Second, planters are selected for a lack of self-doubt. It takes a special kind of 24-year-old to think, “Hundreds of people should pay me for my advice on the most important topic in existence every week,” and those who do will trend towards narcissism. 

Third, the job is more rewarding and less taxing for narcissists (and extroverts). Lead pastor is an incredibly social job, requiring numerous 1:1 interactions and performing in front of a hopefully large group. 

When I first started this investigation, I expected this push towards charisma and narcissism to be countered by the demand that church planters have strong moral character. Surely planters were selected by wise elders, who’d known them for years and seen them be noble under difficult circumstances. And many places do at least pay lip service to that ideal. But the very first checklist I found for assessing church planters had a noticeable absence of demand for even self-assessed character, much less an appraiser with deep knowledge of the potential founder. 80% of the questions focused on ability to conceive a grand vision and get people to go along with it.

This deeply violates my sense of what organized religion should be, and my lack of participation in organized religion in no way lessens my feelings of entitlement to see it done the way I want. But the fact that anyone can be a pastor is another facet of evangelicalism that accords with the virtues of America. No one can tell you you can’t found a Bible-teaching church. They can decline to fund you or attend, or if they really hate you perhaps write some mean things in a newsletter. But if one funder declines, you can always try another, and another. You just have to believe in your grand vision hard enough (which will select for narcissists).

The Goals

The goal of a church planter (and their funders) is to introduce more people to Jesus. I use the word “introduce” quite literally here; it’s much more like trying to introduce two friends at a party and get them to shake hands than trying to get a friend to read a life-changing book, or introducing them to the ineffable presence of my childhood church. This is one of the biggest differences between evangelical and mainline Protestants- they both talk about both Jesus and God, but for evangelicals the emphasis is on the former, and for mainlines the latter.

Church planters take their goal of Jesus handshakes very, very seriously, considering it the most important biblical commandment. This makes a ton of sense if you accept their belief that the handshake is the difference between eternity in hell and eternity in heaven. Given the importance of saving souls, merely founding and growing a church isn’t enough; you need to grow large and plant churches that themselves grow large and plant more churches. You need to be disciplemaxxing at all times. Leaderboards track the churches that are the largest and fastest-growing (baptisms are another area of competition, although I didn’t find a leaderboard for them).

This philosophy bugged me a lot because why is a handshake (or as they would put it, knowing Jesus Christ and accepting him as your savior) sufficient? How do you know someone really accepted Christ and isn’t just saying it? What if they do it wrong? Protestants don’t believe in salvation through works, so you can’t even use their behavior as a check. And what if a bad Christian nonetheless recruits more people to the Jesus party? Their recruits will never even have a chance at doing it right.

An ex-evangelical friend explained the reasoning as follows: as long as someone is coming to the party and shaking hands with Jesus, there’s a chance for them to get a handshake firm enough to accept him into their heart. But if they’re not even attempting to live as a Christian, Jesus can’t make inroads. The role of a pastor is to enable Jesus to take as many shots on goal as possible. Which again, makes sense once you accept their premises.

The Funders

Lots of charismatic narcissists and young idiots have grand visions for themselves, but only certain ecosystems systematically support those dreams. If we want to understand church planting and environments like it, we need to look at the people who are actually making it happen, i.e., the funders. 

We’re talking about nondenominational churches, which leaves four sources of funding:

  1. Existing megachurches that devote a portion of their budget to funding church planting. These are the Saudis of the church planting world. They often require the planted church to tithe back to them. 
  2. Sending networks, which pool money from many churches to support planters. These are equivalent to VC firms or investment syndicates. They also often require planted churches to tithe back (same source)
    1. E.g. Acts29, Send Network, Grace Global Network, Grace Church Network, Grace Network (Canada).
  3. A friends-and-family round, where people you know donate significantly.
  4. The Patreon model, where hundreds of people or churches give small amounts. 

Like venture capital, funding from 1+2 is often pledged and released in stages based on meeting milestones. Milestones might be acquiring a new space, attendance, or finding additional funders. They will often have some ideological requirements, like complementarianism (men and women are spiritually equal but called to different roles) or cessationism (the belief that the Holy Spirit no longer enables humans to do miracles).

Churches and sending networks will often provide other support along with their funding, more like incubators than traditional VC. This can include classes, apprenticeships, support groups, and the same for the wife you definitely already have (being a planter’s wife sounds like all of the downside with none of the upside, more on this later).

Funding can be anything from six months of partial expenses to fully covering four years of expenses- but it very rarely goes beyond four years. At four years you are expected to be self-sufficient and ideally have started nurturing your daughter church plants (which every planter lists as their goal), because if you don’t do it by year five you never will.

Much like venture capital, church planting is a hits-based business. Funders expect most of their plants to fail, and of those that succeed, they expect most successes to be modest. You make your investment back on the 1 in 100 founding that becomes a unicorn (or megachurch). However, success rates vary by funder; one church claimed 14/14 successes for their high-touch spawning process, and people on the Patreon model are most likely to fail.

The Human Cost

The worst case scenario for a church plant is something like Mars Hill Church, where a pastor built a successful megachurch with a tightknit community, only to abuse his authority and destroy the church*. At best, this cost members their spiritual home and a community they had come to count on. At worst, they were so badly traumatized they could no longer have a relationship with God. This doesn’t seem that surprising when you’re led by a 25-year-old who (untruthfully) brags that he went from heathen to intended pastor with no stops in between.

[*Mars Hill was funded via a friends and family round but received substantial advice and encouragement from the Acts29 network, so I think it’s fair to use it to assess the judgement of the decentralized leadership]

Similarly, I hate how little Silicon Valley pays attention to externalities. I don’t mean the creative destruction via things like Waymo replacing drivers, I mean advice like “advertise two features and implement the one that more people click on,” or “build your fintech business on sex workers and then kick them out once you’re big enough.” Users’ time and energy are treated as free goods. The benefits to users might sometimes outweigh the costs, but I never get the sense anyone is doing that math.

The Life Cycle

Once a man has decided to plant a church, a common starting point is hosting a bible study in his home, but some plants skip this step and go straight to Sunday services. The first step to holding Sunday services is to find a location. My sense is that if you get a bunch of early-stage pastors together, this is what they complain about. You want somewhere that’s available at prime church time, has seating and A/C, feels like a church, and costs little. The dream is finding a Seventh-day Adventist church (they hold their services on Saturday). Movie theaters are not common, but pastors who use them seem happier than those in school gyms and hotel conference centers, because it spares them two hours setting up speakers and folding chairs.

The standard advice is to start with “preview services” to draw some interest locally and work out the bugs. Then you do an official launch service that will draw lots of people, mostly existing Christians and your supportive friends. You’re considered successful if your regular attendance reaches half of your launch attendance.

As your church grows you need additional rooms for nursery and Sunday School. If your existing space doesn’t have convenient small rooms, you’ll need to move. In fact you’ll often have to do this anyway as you gain followers. Churches go through several moves as they grow, hermit-crab style. 

Unless you started with a too-big space, you will probably hermit crab your way through larger and larger gymnasiums until a nearby church fails, at which point you merge with them or buy their building. Buying the building is preferable; mergers saddle you with a bunch of people who aren’t bought into your cult of personality. The most successful churches will go on to build their own giant buildings. 

Two hundred regular attendees is a big milestone for planted churches. I first heard it mentioned merely as a size few churches get past, but it’s also a financial threshold. At 200 people, the variation evens out so you can have a longer planning horizon, and you can probably afford a backup pastor.

Every church planter at least pays lip service to the goal of planting more churches. That requires rapid buildup. I’ve varyingly heard that if you don’t support a new church plant in the first 5, 3, or 1.5 years, you never will. 

Past 200 attendees I know less, because there aren’t that many megachurch planters going on these podcasts. However, you do eventually achieve the biggest crab shell possible, or just put down enough roots that you can’t transition again. If you attract any more people after that you start streaming your sermons to other rooms on the same property. Eventually (2,000 people?), you begin streaming to off-site locations, which is known as being a multi-site church (more recently, you’ll also start streaming online).

Multisite churches are something of a micro-denomination, where an existing church will create a new physical location that is still considered part of the original church, with the same lead pastor. Generally most of its sermons are piped in from the original church (Evangelicals are on the forefront of using new tech in service of God). It will have at least one site-lead pastor. Initially, I assumed this pastor did the work of local ministering- couples counseling, running food banks, etc. But these things aren’t emphasized much at evangelical churches, so I’m not quite sure what fills their days.

A minority of pastors are too entrepreneurial and will leave their settled church to plant a new one, but it seems far more common for the founders to stay on indefinitely.

The Theology

You’ll notice I didn’t mention theology beyond recruitment, or what people do after shaking Jesus’s hand. That’s because independent pastors and even many denominations rarely discuss this. The dominant attitude (going back to at least the 1850s) is that they don’t want to let petty disagreements about the nature of God and the Church disrupt the vital business of throwing parties where people can meet Jesus. 

The passphrase for this is “I teach the Bible.” That sounds neutral, but since everyone has a frame and everyone injects that frame into their teaching, what it actually means is “My interpretation of the Bible is so obvious it hasn’t occurred to me that people could draw other conclusions.” This annoys everyone (exact episode lost) who both teaches from the Bible and recognizes that neutrality is not a human possibility. But it successfully functions as a passphrase for people who have agreed they’re on the same team.

The Failures

This section is weaker because failed planters rarely go on podcasts. That said…

The goal of church planters is to bring people to Jesus. As a whole, evangelicalism is not growing faster than the population, so it seems like the system is failing by their lights.

Individual planters have a failure rate of somewhere between 30% and 90%, depending on their support levels and how you define “attempt”- right in the range of tech start-ups.

For systemic data on why churches fail I rely heavily on this survey by Dan Steel of struggling (not necessarily failed) church plants. The top issues he found:

  1. “No-one is fully rounded when it comes to gifting” aka “skill issue” (75%)
  2. “Not getting what we want”, which seems to be either skill issues in disguise (insufficient attendance) or external shocks like pastoral illness or suddenly losing a location (65%).
  3. “Disunity when you’re fragile is costly”, which is any conflict between the pastor and other people. This got a boost from covid-19, where fights about meetings and masks were common and costly (63%).
  4. Pastor character issues, which they define as getting the job done at the expense of other people (45%).
  5. “Naiveté and over-optimism regarding the speed of growth”, which is a mix of skill issues and bad expectation setting (23%).

A guest on New Churches Podcast gives the following reasons, in unquantified order of importance:

  1. Pastor isolation, which I think is code for discouragement or running out of money. I don’t get the sense planters are leaving thriving churches because they feel isolated; when successful planters feel bad it’s called burnout.
  2. Conflict between the founding team. This surprised me because it very rarely comes up in interviews. From what I’d heard, founding teams aren’t important enough for conflict with them to matter. So either I’m listening to a subset of people without this problem, or they’re hiding it.
  3. Skill issues and a lack of self-awareness around skill issues.

I didn’t bother looking up numbers for why start-ups fail but from party chatter the list is pretty similar. 

The Alternatives

Starting a church is a lot of work; why not just take over an existing one? The official reason is that God called them to, and that new churches are the best way to introduce more people to Jesus, which is the most important act of worship. But I can’t help but notice that for a certain personality type, planting your own church seems way more fun than stepping into an existing one, for the same reason he’d have more fun founding a start-up than being a middle manager at IBM.

When you found a church (or a company), you’re baked into it. Everyone who attends (works for) your church is there because they like you. That’s a great feeling (especially if you’re a narcissist). It also saves you a whole lot of problems with parishioners who remember how their last pastor organized Sunday School and will fight any change tooth and nail. If you join an existing church and it closes, you broke something that worked. If your church plant closes, well, planting is risky, and at least you were willing to try (a pastor’s second planting attempt is much scarier, because now if you fail it’s a pattern). 

Starting a new church is more work, of course, but lots of people would rather put the work in if they can be their own boss. 

The Attendees

Thus far, I’ve found church planting admirably consistent in its efforts to reach its stated goal (recruit people who are not in contact with Jesus and get them to shake his hand, thus saving them from eternal damnation). We’ve already looked at how the system as a whole is not growing, but there’s a subtler issue in who they aim at. 

The closer someone is to death, the closer they are to eternal damnation. So you’d think that if saving souls was your goal, you’d focus on saving the elderly. As a bonus, the old-but-not-decrepit have more money and more hours to volunteer. However, planters seem almost to sneer at the elderly, calling them “white hairs” and “bald heads” who are more trouble than they’re worth. In practice, attendees tend to be within 10 years of the age of the pastor, so if a 25-year-old founds your plant, it won’t attract 65-year-olds for 30 years. The sense I get is that churches and funders go after young families because they are sexy, the same way that the start-up ecosystem didn’t discover parents as a market until 10 years ago. 

Speaking of sexy: the sexiest recruits are those new to Jesus, or at least prodigal sons. If you listen to church planters talk, these people make up the majority of attendees. But given that attendance is not growing faster than the population, and that evangelicals have a higher-than-average fertility rate, they are net losing people. They could be shedding lots of people and then recruiting some back, but based on some survey data, it seems like they’re mostly not.*

I ultimately guess that 10-40% of attendees could in some sense be considered new recruits. My sources:

  • This unsourced slideshow says that the previously unchurched make up only 10-15% of attendees at newly planted churches in Canada. 
  • A survey of white American Evangelicals says that 18% were previously unaffiliated, and 4% came from non-Christian-Evangelical backgrounds. 
  • Pew estimates that 40% of evangelicals were not raised Christian. This group includes members of non-Christian churches, but I think does not include people who were raised Christian, walked away, and returned (who church plants would recognize as a win).

It’s good for pastors that most of their flock is already on board with Jesus, because it means they’re also on board with tithing. Conventional wisdom is that it takes 4 years for the previously unchurched to contribute financially. Given that only the most generous funders supply 4 years of expenses, and some only a few months, it is absolutely imperative for pastors to attract people with an existing tithing habit. 

If a new member was already Christian, your hope is that they’re new to the city as well. “Stolen sheep,” aka people who moved to your church because they were dissatisfied with their last one, are considered a mixed blessing. They will tithe and probably volunteer, but it’s unlikely they will be long term satisfied with your church. If they were the type of person to be satisfied, they’d have been so at their last church. If you let them, they’ll suck up a bunch of your time and emotional energy on their way out, which is why one pastor suggests ignoring them. 

The Supporters

Wives

I’ve yet to hear about a church planter who wasn’t married when he founded his church. They always describe their wives as also experiencing a call from God to be a pastor’s wife, which is extremely convenient. 

By default, wives end up with whatever church work their husband doesn’t want or is bad at. This is especially likely to be work that requires high conscientiousness, involves children, or involves other women. They also need to do all the work at home that their partner isn’t doing because pastoring is sucking up all their time, or perhaps provide income because the church can’t fully support the family. And they’re doing most of the parenting. 

Pastors’ wives are expected to make friends with the women of the church but also keep their problems private, because it would undermine their husband’s job if people knew he was unreliable about taking out the trash. 

Overall church wife-ing seems like at least as much work as pastoring, with fewer rewards. 

Support Teams

Many pastors mention launching with other families from their sending church. They frequently discuss how important support teams are, but almost never what their supporters did that was so valuable. Maybe music? Surely set up and tear down. And it’s useful to have people in the pews right from the beginning so nonbelievers don’t walk into an empty church. But overall this feels like a blank spot in my knowledge because the support team never goes on podcasts and for all that pastors sing their praises, they rarely give specifics. 

I posit that pastors are performing the equivalent of thanking the little people at their Oscar speech because they know they’re supposed to, but don’t believe in their heart of hearts that other people are very important. In contrast, you do tend to incidentally hear about the work their wives do.  

Mission Teams

You know how churches sometimes send teenagers to Mexico for a week to build houses? Well sometimes they instead send those teens to a recently planted domestic church, to ring doorbells, volunteer at vacation bible school, or do manual labor. These have only come up in one episode, which was spent complaining about how they were worse than useless (under the guise of acknowledging that the pastors didn’t know how to use them productively). The most useful function mentioned was mowing the pastor’s own lawn, to free up his time. 

Conclusion

Biology has a concept called convergent evolution– that if you put two distantly related animals in the same ecological niche, they will evolve to be more similar to each other than their respective recent ancestors. Think dolphins evolving the same fins and tail as sharks, despite having bones and needing to breathe air.  Silicon valley and church planting sure seem to me like they’ve gone through convergent evolution, but what is the ecological niche?

  1. Some people really like attention
  2. If you don’t have the energy to do the difficult, sexy thing, you can get some reflected glory by funding it
  3. Absence of traditional gatekeeping. 
  4. In the absence of a countervailing force, charismatic people will be more successful on the margin. That’s what charisma means.
  5. If you don’t track the eggs broken in your omelette making, there’s no drive to minimize them. 
  6. Youth-worship

And when you combine those, what you get are hits-based economies and a lot of negative externalities. 

Sources

Podcasts

Inside a CATHOLIC Megachurch (Protestant Perspective) 

The Rise and Fall of Mars Hill Church (all episodes)

Everything I Did Wrong as a Church Planter: A Million Part Series (all episodes, as of 2025-06-22)

The Lutheran Church Planter (all episodes, as of 2025-06-22)

New Churches Podcast (26/236 episodes)

Canadian Church Planting (10/41 episodes)

Terminal: The Dying Church Planter (all episodes, as of 2025-06-22)

CMN Church Planting Podcast (all episodes, as of 2025-06-22)

Revitalize and Replace (4/236 episodes)

Ministry Wives (1/201 episodes)

Pastors Wives Tell All (1/332 episodes)

Articles

The Priority & Practice of Evangelism: Canadian Church Leader Perspectives in 2021

Religious Change in America

Do We Really Need Another Church Plant?

Evangelism and “Nones and Dones” in Canada

Books

The Evangelicals, by Frances Fitzgerald.

Thanks

Thanks to Patrick LaVictoire and the Progress Studies Blog Building Initiative for feedback on this post. Thanks to my Patreon supporters and the CoFoundation Fellowship for their financial support of my work. 

Mirroring to Substack

I still love and treasure RSS, to the point that I pay money for inoreader. But newsletters are where it’s at right now, especially if you want to be ~~discoverable~~. To that end, I’m mirroring this blog on substack. The Substack will have a slightly more professional angle and lose the bottom ~10% of posts, but otherwise the content is identical. If you prefer email subscriptions or want to help me out with The Algorithm, that’s now available. If you prefer wordpress/RSS/Patreon, do absolutely nothing, things will continue like they always have.

Podcast: Lincoln Quirk from Wave

For two years I had the good fortune to work at Sendwave/Wave (they were one company at the time), a company that made remittances cheap and workable in certain African countries.  I am prouder of working at Wave than of the rest of my programming career combined, and a good bit of my research career as well. Every day I finished work knowing that I had helped save very poor people time and money, by making it cheaper and easier to send money home from abroad or domestically. So I was deeply excited to talk to my ex-boss and co-founder of the company, Lincoln Quirk, on this final episode of the podcast. We talk about the details of starting a company and the economics of how remittances and mobile money do enormous amounts of good while making a profit. 

This is the sixth episode of my podcast with Timothy Telleen-Lawton, and probably the last. This was a worthy experiment but ultimately not accomplishing what I’d hoped, and honestly I’m just tired of criticizing things. Timothy couldn’t make this recording, but he pops in at the end to say goodbye. 

Transcript

Thanks to the EA Infrastructure fund, our Manifund donors, and my Patreon patrons for supporting this work.

Ketamine Part 1: Dosing

I’m currently investigating ketamine, with the goal of assessing the risks of chronic use. For reasons I will get into in the real post, this is going to rely mostly on in vitro data, at least for neural damage, which means I need a way to translate real-world dosages into the concentration of ketamine in the brain. This post gets into those details so I can use it as a reference post for the one you actually want to read, and invite correction early from the three of you who read it all the way through.

If you’re excited for the main post (or for some inexplicable reason, this post), you can help me out by tax-deductibly donating to support it, or joining my (not tax deductible) Patreon

Cliff Notes

What I care about for purposes of the next post is the concentration of ketamine in the brain. Unfortunately, ethics committees really hate when you set up a tap in people’s skulls and draw a fresh sample every five minutes. The best you can do is a lumbar puncture, which draws cerebrospinal fluid (CSF) from the spine. Unfortunately, spinal fluid and cranial fluid concentrations are not interchangeable. Tentatively, brain concentrations of most substances in this class tend to be lower, so we can still use CSF concentrations as a rough upper bound. 

Most long-term, medically supervised ketamine users use a lozenge or nasal spray. However, the only paper that measures ketamine in the cerebrospinal fluid delivered the ketamine by IV. Therefore, to have any hope of contextualizing the in vitro data, I need to translate from ketamine dose (nasal or lozenge) (measured in either straight milligrams, or milligrams per kilogram of body weight) -> plasma concentration -> CSF concentration (also measured in nano- or micro-grams of ketamine per milliliter of fluid). 

There are two metrics we might consider when comparing doses of drugs. The first is peak concentration, which is the highest concentration of ketamine your brain experiences at any point. The second is the total cumulative exposure (AKA area under the curve, or AUC). These unfortunately have pretty different translations from IV to sublingual doses, and I saw no evidence about which of these was more important. I report both just in case. 

Assuming (incorrectly) a linear response, 1 mg of ketamine taken sublingually (under the tongue) or buccally (in the cheek) leads to a mean peak concentration in plasma (blood) of 0.83 – 2.8 ng/ml and a mean total dose (area under curve, AUC) of 1.8-7.4 ng*h/ml.

1 mg of ketamine taken via a nasal spray leads to a mean peak in plasma of 1.2 ng/ml and a mean total dose of 3 ng*h/ml (based on a single study).

Peak CSF concentration was 37% of plasma concentration, and came 80 minutes later. Total CSF dose was 92% of total plasma dose, indicating almost total diffusion into CSF, eventually. Both findings are from a single study.  

Combining these two results, 1mg sublingual ketamine leads to a median measured peak brain concentration of < 1.1 ng/ml and a total brain dose of < 6.7ng*h/ml. 

On the other hand, 1mg nasal ketamine leads to a median peak concentration of < 0.45 ng/ml and median measured total brain dose of < 2.7ng*h/ml.
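The chained conversion above (dose -> plasma -> CSF) can be sketched in a few lines of Python. This is a sketch under the post’s own assumptions: the per-mg plasma figures are the ranges quoted above, the CSF fractions come from the single plasma-vs-CSF study, and everything inherits the (incorrect) linearity assumption. The rounding differs by a hair from the < 1.1 and < 6.7 figures above.

```python
# Rough upper-bound conversion: dose (mg) -> plasma -> CSF, assuming
# (incorrectly) linear scaling. CSF fractions are from the single study
# that measured both plasma and CSF after an IV dose.
CSF_PEAK_FRACTION = 0.37   # peak CSF concentration / peak plasma concentration
CSF_AUC_FRACTION = 0.92    # total CSF dose (AUC) / total plasma dose (AUC)

def csf_upper_bound(dose_mg, peak_per_mg, auc_per_mg):
    """Return (peak ng/ml, AUC ng*h/ml) upper bounds in CSF for a given dose."""
    peak_plasma = dose_mg * peak_per_mg
    auc_plasma = dose_mg * auc_per_mg
    return peak_plasma * CSF_PEAK_FRACTION, auc_plasma * CSF_AUC_FRACTION

# 1 mg sublingual, using the high end of the plasma ranges quoted above
# (2.8 ng/ml peak per mg, 7.4 ng*h/ml AUC per mg)
peak, auc = csf_upper_bound(1, 2.8, 7.4)
print(f"sublingual: peak < {peak:.2f} ng/ml, AUC < {auc:.2f} ng*h/ml")

# 1 mg nasal (single study: 1.2 ng/ml peak per mg, 3 ng*h/ml AUC per mg)
peak, auc = csf_upper_bound(1, 1.2, 3.0)
print(f"nasal: peak < {peak:.2f} ng/ml, AUC < {auc:.2f} ng*h/ml")
```

Swapping in the low end of the plasma ranges gives the lower bound of each estimate; the spread is the honest answer here, not any single number.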

The rest of the post goes more deeply into the findings and methodology of individual papers. 

Context and Caveats

Like most 3D molecules, ketamine exists in two forms that are mirror images of each other (called enantiomers). One version is sold under the name esketamine; the other is not commercially available on its own. Some papers administered only esketamine, some both separately, and some both together (a racemic mix). The pharmacokinetics of these aren’t different enough to be worth distinguishing for my purposes. 

My list of papers is cribbed from Undermind.AI. I occasionally found papers via references, but when I checked those papers were always on Undermind’s list as well. I also looked on Perplexity, but it found only a subset of the papers on Undermind (Perplexity has tragically enshittified over the last few months). 

I treat the translation of dose to concentration in the body as linear. This is almost certainly false, but more likely to be an overestimate. 

I did not even attempt to combine results in some sort of weighted fashion, which would have incorrectly combined subtly different mechanisms of delivery. The numbers you see above are the range of values I saw. 

“Peak” concentration always means “among times samples were taken”, not actual peak.

I mentioned that I assume a linear dose-concentration curve to ketamine (meaning that if you double the amount of ketamine you take, you get double the plasma or CSF concentration). Linear sounds like a nice safe assumption, but it can go wrong in both directions. Your body may absorb a substance less efficiently as you take more, leading to an asymptotic curve. Or your body may only be able to clear so much of a substance at a time, so an increased dose has an outsized impact on concentration. In the case of ketamine there’s very mild evidence that the curve is sublinear, which makes treating it as linear an overestimate. That’s the direction I want to err on, so I went with linear. 

Translating nasal/sublingual doses to plasma concentration

To find out what dose translates to what plasma concentration, we need to give subjects ketamine through whatever route of administration, take repeated blood samples, and measure the concentration of ketamine in that blood. My ideal paper had the following traits:

  1. Studied adult humans.
  2. Delivered ketamine nasally, sublingually, or buccally.
  3. Sampled plasma at least every 10 minutes for the first hour, ideally more often. Only a handful of papers met this criterion, so I had to give a little here. 
  4. Tracked concentration and total dose received. 

Combining the results below, and assuming (incorrectly) a linear response I get the following results:

  •  1 mg of ketamine taken sublingually or buccally leads to a mean peak concentration in plasma (blood) of 0.83 – 2.8 ng/ml and a mean total dose (area under curve, AUC) of 1.8-7.4 ng*h/ml
  • 1 mg of ketamine taken nasally leads to a mean peak in plasma of 1.2 ng/ml and a mean total dose of 3 ng*h/ml (based on a single study).

S-Ketamine Oral Thin Film—Part 1: Population Pharmacokinetics of S-Ketamine, S-Norketamine and S-Hydroxynorketamine

This paper was definitely selling something and that thing is an “oral thin film” delivery mechanism. 

This study design is a little complicated. N=15 people were given one and two sublingual films (50mg S-ketamine each) on two separate occasions (so everyone received 150mg total, over two doses), and ordered not to swallow for 10 minutes (I have my doubts). Another 5 were given the same doses, but buccally (in the cheek). Buccal and sublingual had indistinguishable pharmacokinetics (at their tiny sample sizes) so we’ll treat them as interchangeable from now on. Subjects had blood samples taken at t = 0 (= oral thin film placement), 5, 10, 20, 40, 60, 90, 120, 180, 240, 300, and 360 minutes, after which they were given 20mg IV ketamine over 20 minutes, with new samples taken at 2, 4, 10, 15, 20, 30, 40, 60, 75, 90, and 120 min.

Figure 1. Mean measured plasma concentrations following application of the 50 and 100 mg S-ketamine oral thin film (OTF): (A) S-ketamine, (B) S-norketamine, and (C) S-hydroxynorketamine. Individual concentrations are given in panels (D–F) for the 50 mg oral thin film and (G–I) for the 100 mg oral thin film. In black the results of placement below the tongue, in red buccal placement. The OTF was administered at t = 0 min for 10 min (green bars); at t = 360 min, an intravenous dose of 20 mg S-ketamine was administered over 20 min (light orange bars).

There are a few key points to take from this graph. First, sublingual (under the tongue) and buccal (between cheek and gum) are indistinguishable, at least at this sample size. Second, the 100mg sublingual dose doesn’t have nearly double the peak concentration or AUC of the 50mg dose, although the difference is not statistically significant. You can see the exact numbers in table 1.

(CMAX = peak concentration, Tmax = time to peak concentration, S-norketamine = a psychoactive metabolite of ketamine, S-hydroxynorketamine=an inactive metabolite of ketamine)

Given that, we have, for peak concentration

 50 mg = 96ng/ml -> 1mg = 1.9ng/ml

100 mg = 144ng/ml -> 1mg = 1.4ng/ml

Which makes a linear dose-concentration relationship look unlikely, although at these sample sizes the difference isn’t significant. 

For AUC:

50mg = 8362 ng*min/ml     -> 1mg = 170 ng*min/ml

100mg = 13,347 ng*min/ml -> 1mg =  130ng*min/ml
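The per-mg normalization above is just division, which makes the sublinearity easy to see. A quick sanity check, using the numbers quoted above (the unrounded ratios differ slightly from the post’s rounded figures):

```python
# Per-mg normalization of the oral-thin-film results quoted above.
doses_mg = [50, 100]
peak_ng_ml = [96, 144]          # Cmax for each dose
auc_ng_min_ml = [8362, 13347]   # AUC for each dose

for dose, peak, auc in zip(doses_mg, peak_ng_ml, auc_ng_min_ml):
    print(f"{dose} mg: {peak / dose:.2f} ng/ml per mg, "
          f"{auc / dose:.0f} ng*min/ml per mg")
# If the response were linear, the per-mg figures would match across doses;
# instead the 100 mg dose yields roughly 20-25% less per mg, hinting at a
# sublinear curve (though not significantly so at this sample size).
```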

Plasma concentration profiles of ketamine and norketamine after administration of various ketamine preparations to healthy Japanese volunteers

This is my favorite paper, looking at no fewer than 5 different methods of delivery. 

This isn’t the primary take-home, but because they injected racemic ketamine but measured the two enantiomers (S- and R- ketamine) we can see that their pharmacokinetics are close enough that I can ignore the difference between them, and use S-ketamine data to inform estimates of racemic ketamine. For the results below I averaged the S- and R- results together.

The only routes of administration I care about from this list are sublingual tablet and nasal spray.

For peak concentration, we see:

50 mg sublingual = (42.6+40.4)/2 ng/ml -> 1 mg = 0.83ng/ml

25mg nasal spray = (29.4+29.3)/2 ng/ml  -> 1 mg = 1.2 ng/ml

For area under the curve, we see:

50 mg sublingual = (108.8+110.5)/2 ng*h/ml -> 1 mg = 2.2ng*h/ml = 130 ng*min/ml

25mg nasal spray = (76.8+72.7)/2 ng*h/ml -> 1 mg = 3 ng*h/ml = 180 ng*min/ml

The absolute bioavailability of racemic ketamine from a novel sublingual formulation

You know the drill: 8 subjects were given 25mg sublingual or 10mg IV ketamine.

This paper uses geometric mean (the nth root of n numbers multiplied together) rather than arithmetic (sum of numbers divided by their count), so is not directly comparable to the other studies. But roughly, for peak concentration (Cmax) of the sublingual dose: 

25 mg ketamine  = 71.1 ng/ml -> 1mg = 2.8 ng/ml

And for total dose (AUC)

25 mg ketamine = 184.6 ng*h/ml -> 1 mg ketamine = 7.4 ng*h/ml = 443 ng*min/ml
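For a sense of why the geometric mean isn’t directly comparable: on right-skewed data like plasma concentrations, the geometric mean sits below the arithmetic mean, so this paper’s figures likely understate the arithmetic-mean equivalents. A toy illustration with made-up, hypothetical concentrations:

```python
import math

# Hypothetical, right-skewed peak concentrations (ng/ml) across subjects;
# these numbers are invented for illustration, not from any paper.
concentrations = [30, 45, 50, 60, 170]

arithmetic = sum(concentrations) / len(concentrations)
geometric = math.prod(concentrations) ** (1 / len(concentrations))

print(f"arithmetic mean: {arithmetic:.1f} ng/ml")
print(f"geometric mean: {geometric:.1f} ng/ml")
# The geometric mean is always <= the arithmetic mean (the AM-GM inequality),
# with the gap growing as the data get more skewed. One high-concentration
# outlier drags the arithmetic mean up much more than the geometric mean.
```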

Why I’m ignoring ketamine’s chirality

Combined Recirculatory-compartmental Population Pharmacokinetic Modeling of Arterial and Venous Plasma S(+) and R(–) Ketamine Concentrations

10 healthy male subjects aged 24 to 62 yr, weighing 68 to 92 kg, were administered approximately 7 mg of S(+) or R(–) ketamine via a 30-min constant rate IV infusion on two occasions at least 3 days apart. Radial artery and arm vein samples were drawn at 0, 5, 10, 15, 20, 25, 30, 40, 50, 60, 120, 180, and 300 min after the start of the S(+) ketamine infusion and at 0, 5, 10, 15, 20, 25, 30, 36, 43, 50, 180, 300, and 420 min after the start of the R(–) ketamine infusion.

Red = arterial blood, Blue = venous blood

As you can see, arterial and venous blood are quite different, but S- and R- ketamine are close enough for government work. 

Translating plasma concentrations to cerebrospinal fluid concentrations

Cerebrospinal fluid exploratory proteomics and ketamine metabolite pharmacokinetics in human volunteers after ketamine infusion

These heroes gave a dose of ketamine via IV, then monitored both plasma concentration and cerebrospinal fluid. 

Peak CSF concentration was 37% of plasma concentration, and came 80 minutes later.

Total CSF dose was 92% of total plasma dose, indicating almost total diffusion into CSF, eventually. 

Papers I Want to Complain About

Bioavailability, Pharmacokinetics, and Analgesic Activity of Ketamine in Humans

I mention this paper only to explain why I am mad at it. This 1981 study took beautiful measurements of pain sensitivity as well as plasma concentration of ketamine, and then didn’t publish any of them. They used only intramuscular injection and oral solution, which doesn’t allow me to translate to the more standard IV concentration. Also, oral is a terrible route for ketamine; your body processes most of it before it hits your system. 

Population pharmacokinetics of S-ketamine and norketamine in healthy volunteers after intravenous and oral dosing

For reasons I don’t understand, this paper studies IV alone versus IV + oral ketamine together. Also not useful for our purposes, except for establishing that ketamine taken orally (as in, a pill you swallow and absorb through the digestive tract) isn’t very good.

Development of a sublingual/oral formulation of ketamine for use in neuropathic pain

This paper measured concentration in arterial blood, where every other paper used venous blood. One paper that measured both showed that they were shockingly different. I could attempt to translate from arterial to venous concentrations, but the paper also uses an unpopular delivery mechanism so I haven’t bothered. 

Acknowledgements

Thanks to R. Craig Van Nostrand for statistical and paper-reading help, Anonymous Weirdo for many discussions on pharmacokinetics, Ozy Brennan and Justis Mills for editing, and my Patreon patrons and Timothy Telleen-Lawton for financial support. 

What do you Want out of Literature Reviews?

Tl;dr how can I improve my literature-review based posts?

I write a fair number of blog posts that present the data from scientific papers. There’s a balancing act to this- too much detail and people bounce off, too little and I’m misleading people. I don’t even think I’m on the Pareto frontier of this- probably I could get better at which details I share and how I share them, to improve readability and rigor at the same time. This post is a little bit my thoughts on the matter and a lot of requests for input from readers- what do you actually want to see? What are examples of doing this well? Any requests for me personally?

I ask for audience feedback explicitly at a few points, but please don’t limit yourself to those. I’m interested in all suggestions and examples.

Context

If you’re just tuning in, here’s a few examples of posts I mean:

These are all posts where the bulk of the text is describing individual papers, but I have some conclusion I would like the reader to consider.

My motivating example is my project on the risks of long term ketamine use. Right now I’m working on a technical post on how to translate doses consumed by humans into concentrations in the cerebrospinal fluid (draft), which is reference material for a post people might actually read.

Principles

Epistemic Legibility

My goal is always to present information to people they can interpret for themselves, rather than rely on my summaries. My proudest moment as a researcher was when I was hired by a couple to investigate a particular risk during pregnancy, and due to different risk tolerances they came to opposite conclusions from the same model. To accomplish this, I need to give people the relevant details, in as digestible a format as possible. 

What helps you connect with scientific posts? Some ideas:

  1. My search process
  2. My selection criteria
  3. My conclusions
  4. Motivation
  5. Your ideas here

Then there’s the papers themselves. For the ketamine dosing post, there’s  <20, maybe <10 papers in the world that meet my criteria for inclusion, so it’s feasible to include details on each of them. But which details help people understand, and which aren’t worth the attention they cost? 

Some paper details I could include:

  1. Sample size. 
  2. Experimental set up
  3. Key graphs
  4. Description of results
    1. Averages, or with confidence intervals?
  5. My criticisms
  6. Your ideas here

Readability

All else equal, it’s better for a post to take less energy to read than more. Actually that’s not quite true- for posts that would be especially costly if I’m wrong or I expect to be misinterpreted, I will often bury the conclusion, like I did in this post on binge drinking. But we’ll ignore that for now and focus on the much more common case of wanting posts to be as accessible as possible. 

Detail and readability often trade off against each other, but what I’m looking for here is ways to improve readability while holding detail constant. Some ideas I have:

  1. Formatting, probably? Seems like it should help but I don’t know what specifically.
  2. Humor
    1. Unfortunately the easiest way to do this is to make fun of bad studies, which gets repetitive. 
  3. Explaining relevance to the main question
  4. Make the goal/main question clear
  5. Pictures? I’m unconvinced of this
  6. Your ideas here

Audience

Everyone says to have an audience in mind. There are two major audiences and two minor.

People who are Interested in the Opinions of Uncredentialed Internet Weirdos

This is a tautology, but refers to something much more specific than it looks at first. People who are interested in hearing uncredentialed randos describe and interpret academic papers have a lot more in common than just their willingness to do that. 

Some other traits they share: 

  1. Statistical literacy
  2. Desire for interpretations to be quantified
  3. Higher risk tolerance
  4. Interested in the specific topic- such as ketamine use, or long covid risk.
    1. It’s rare I want to convince people that they should be interested in a topic when they weren’t before. 
  5. Your ideas here

Fishing for Corrections

Some posts aren’t meant to be read widely. They’re meant to be a reference in other, more readable posts, and to invite corrections from the three people who will read them. This is my intention for the ketamine dosage translation post– it’ll be lucky if it’s read by 10 people when it’s first published, but one of those might be quite useful. 

The primary benefit to me is catching mistakes before I write an entire 10,000 word post that depends on the false conclusion and could hurt people. It also feels virtuous to explain my reasoning in detail, even if nothing specifically good comes from it.  

Myself

Writing lets me think through things. I always budget at least as much time for the “writing” phase of a project as research, because there are gaps I don’t notice until I start writing things down. 

I’m interested in how this works for other people- have you found ways to improve your writing for yourself?

Potential clients

I make my living as a freelance researcher, with my blog being the major evidence I am good at this. I’d like clients who read my posts to be able to assess my skill level, even if they’re not interested in the topic and have no context. 

Conclusion, such as it is

I would like to get better at writing the kind of posts I write. In particular, I’d like to get better at conveying relevant information, in ways that take as little work from the reader as possible, but no less than that. I will be very grateful for feedback that helps me improve or that helps me create a framework by which I can improve. I expect that to mostly be critical, but compliments are helpful too- I’d hate to throw out the baby with the bathwater.