Bandwidth Rules Everything Around Me: Oliver Habryka on OpenPhil and GoodVentures

In this episode of our podcast, Timothy Telleen-Lawton and I talk to Oliver Habryka of Lightcone Infrastructure about his thoughts on the Open Philanthropy Project, which he believes has become stifled by the PR demands of its primary funder, Good Ventures.

Oliver’s main claim is that around mid-2023 or early 2024, Good Ventures founder Dustin Moskovitz became more concerned about his reputation, and this put a straitjacket on what Open Phil could fund. Moreover, it was not enough for a project to be good and pose low reputational risk; it had to be obviously low reputational risk, because OP employees didn’t have enough communication with Good Ventures to pitch exceptions. According to Habryka.

That’s a big caveat. This podcast is pretty one-sided, which none of us are happy about (Habryka included). We of course invited OpenPhil to send a representative to record their own episode, but they declined (they did send a written response to this episode, which is linked below and read at the end of the episode). If anyone out there wants to asynchronously argue with Habryka on a separate episode, we’d love to hear from you.

Transcript available here.

Links from the episode:

An Update From Good Ventures (note: Dustin has deleted his account and his comments are listed as anonymous, but his are not the only anonymous comments)

CEA announcing the sale of Wytham Abbey

OpenPhil career page

Job reporting to Amy WL

Zach’s “this is false”

Luke Muelhauser on GV not funding right of center work

Will MacAskill on decentralization and EA

Alexander Berger regrets the Wytham Abbey grant

Single Chan Zuckerberg Initiative employee demanding resignation over failure to moderate Trump posts on Facebook

Letter from 70+ CZ employees asking for more DEI within Chan Zuckerberg Initiative.

OpenPhil’s response

Austin Chen on Winning, Risk-Taking, and FTX

Timothy and I have recorded a new episode of our podcast with Austin Chen of Manifund (formerly of Manifold, behind the scenes at Manifest).

The start of the conversation was contrasting each of our North Stars: Winning (Austin), Truthseeking (me), and Flow (Timothy), but I think the actual theme might be “what is an acceptable amount of risk-taking?” We eventually got into a discussion of Sam Bankman-Fried, where Austin very bravely shared his position that SBF has been unwisely demonized and should be “freed and put back to work”. He by no means convinced me or Timothy of this, but I deeply appreciate the chance for a public debate.

Episode:

Transcript (this time with filler words removed by AI)

Editing policy: we allow guests (and hosts) to redact things they said, on the theory that this is no worse than not saying them in the first place. We aspire but don’t guarantee to note serious redactions in the recording. I also edit for interest and time. 

Feedback loops for exercise (VO2Max)

The perfect exercise doesn’t exist. The good-enough exercise is anything you do regularly without injuring yourself. But maybe you want more than good enough. One place you could look for insight is studies on how 20 college sophomores responded to a particular 4-week exercise program, but you will be looking for a long time. What you really need are metrics that help you fine-tune your own exercise program.

VO2max (a measure of how hard you are capable of performing cardio) is a promising metric for fine-tuning your workout plan. It is meaningful (1 additional point in VO2max, which is 20 to 35% of a standard deviation in the unathletic, is correlated with 10% lower annual all-cause mortality), responsive (studies find exercise newbies can see gains in 6 weeks), and easy to approximate (using two numbers from your Fitbit).

In this post I’m going to cover the basics of VO2max, why I estimate such a high return to improvements, and what kind of exercise can raise it the fastest.

What is VO2max?

A person’s VO2max is the maximum volume of oxygen they can consume in one minute. Higher VO2max lets you cardio more intensely, and is correlated with better health and longer lifespan (we’ll quantify this later). This is 100% of what you need to know; the rest is thrown in for fun.

VO2max is measured in ml O2/kg bodyweight/minute. It is sometimes given in Metabolic Equivalents (METs). 1 MET = 3.5ml O2/kg of bodyweight/minute. This is approximately your metabolic expenditure while sitting still. 

What physically causes increase in VO2max? It’s a mix of many factors:

  1. Strengthened heart allows you to pump blood faster
  2. Improved lung capacity, which breaks down to
    1. Expansion of the chest cavity, in part due to strengthening of the diaphragm and rib muscles.
    2. Recruitment of new alveoli (the features in your lungs that exchange carbon dioxide and oxygen) (source)
    3. Improved lung elasticity
    4. Production of a surfactant that maintains alveoli in fighting form
  3. Increased mitochondrial activity allows cells (especially muscle cells) to use more oxygen
  4. More blood to carry the oxygen
  5. New capillaries grow to deliver more blood to your muscles

What can induce these changes? Exercise, especially high intensity interval training. We’ll talk more about that in a bit. 

Why do I care about VO2Max?

Obligatory boring part: VO2max is a crude measurement whose impact depends on many factors blah blah blah blah

Shocking headline: 1 MET (aka 3.5 points VO2max) = 10% reduction in relative risk of all-cause mortality. So if your normal risk of death is 1%, gaining one MET would lower it to 0.9%.

The catch: that meta-analysis averaged together results from multiple studies of very different durations. “That’s okay, they could correct for that, at least crudely” you might be saying to yourself, in which case, congratulations on being better at meta-analysis than these authors, who AFAICT dumped every study into a bag and shook it. 

More realistic, yet more shocking headline: an increase of 1 point in VO2max is correlated with 10% lower annual all-cause mortality. 

This is based on the largest study in the meta-analysis, Kokkinos et al. Important facts from this study include: 

  1. In male veterans, going from low fitness to moderate fitness (defined below) lowered risk of dying by 40%. This was shockingly consistent across age groups, and whether you considered a 5 year or 10 year period. Getting to a high fitness level dropped their mortality rate by another 30%.
  2. I too am wondering why the % change in risk of death doesn’t get larger when you consider a longer period of time. 
    1. The middle (“threshold”) range for 4 age categories were 8 to 9, 7 to 8, 6 to 7, and 5 to 6 METs for <50, 50 to 59, 60 to 69, and ≥70 years, respectively. Another source gives average MET for those categories (substituting 40-50 for <50 and 70-80 for >70)  as 10, 8.6, 7.3, and 6.1, so the threshold starts 1-2 points lower than average and they converge by your 70s.
  3. Low fitness is the range between the floor of the threshold and 2 METs lower than that; moderate fitness is the ceiling of the threshold plus 2 METs. If distribution within buckets were uniform, we could treat moving from low fitness to moderate fitness as an increase of 2 METs. If you assume a normal distribution centered around the threshold, it’s somewhat smaller than that. I went with the latter assumption, but not very rigorously.

Caveats

VO2max is measured per kg of total body weight, not lean weight. That means that if you lost 10% of your bodyweight via liposuction but otherwise stayed exactly the same, your VO2max would rise by a factor of 1/0.9 (about 11%). This makes VO2max a partial proxy for weight. However the relationship between weight and health, and weight and exercise, is much more complicated than is typically acknowledged.
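The arithmetic behind that factor is easy to check with a toy calculation (the absolute oxygen consumption below is hypothetical, chosen only to make the units visible):

```python
# The liposuction thought experiment: VO2max is O2 per kg of *total*
# bodyweight, so losing mass with unchanged oxygen consumption scales
# the number up. All figures here are hypothetical.
o2_ml_per_min = 2800.0   # hypothetical absolute O2 consumption at max effort
weight_kg = 80.0

vo2max_before = o2_ml_per_min / weight_kg          # 35.0 ml/kg/min
vo2max_after = o2_ml_per_min / (weight_kg * 0.9)   # same person, 10% lighter
print(round(vo2max_after / vo2max_before, 3))      # factor of 1/0.9
```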

VO2max is also a proxy for exercise. Right now we don’t have enough information to say whether increased VO2max, or increased alveoli surfactant, increases lifespan, or is merely downstream of exercise that does some other helpful thing.

I’m going to ignore both of these for now, but when you’re doing your own math you should not add effects from potential weight loss, because that might be double counting.

Exercise science sucks. Lifespan is affected by 1000 different factors, none of which scientists can properly control. Lots of researchers have their bottom line already written.

While we’re at it, I should note that I haven’t done deep investigations on any other metrics. Very early in the process I considered others, and VO2max won due to a combination of being promising and easy to measure at home. I don’t have the information to say if VO2max is more or less accurate than other metrics.

How can I measure my VO2Max?

(note: this section is based primarily off of my client’s research, not mine)

The official way involves a mask and measuring equipment and 20 minutes of excruciatingly intense exercise. This is technically the most accurate, but only if it’s set up properly, and is expensive. If you’d like to trade accuracy for ease, use this formula:

VO2max ≈ (HRmax/HRrest) ∗ magic_constant

If you would like to get a number without understanding it, you can enter your heart rate in this spreadsheet. If you would like to learn about the magic constant, I’ve defined the terms below. 

  • HRrest is your lowest heart rate when measured first thing in the morning, or ask your friendly neighborhood wearable. 
  • HRmax is your heart rate after exercising at ever increasing intensity until you cannot stand it. If you don’t know this, you can use 208 − 0.7 ∗ age. However if you do so you’ll miss any gains that come from increasing your maximum heart rate, which I’d expect to be at least half. 
  • magic_constant = 17.27 − 0.08 ∗ age − 0.59 ∗ BMI_category − 0.40 ∗ smoking_status + 0.14 ∗ TPA
  • BMI_category =
    • normal: 0
    • overweight: 1
    • obese: 2
  • smoking_status:
    • never: 0
    • former: 1
    • current: 2
  • TPA (total physical activity) =
    • moderate: 2 (< 43 MET hours / day)
    • active: 1 (43 – 50 MET hours / day)
    • highly active: 0 (> 50 MET hours / day)
    • This definition is circular, because MET hours is a function of hours exercised * exertion level. A decent level of physical fitness will burn 10 MET per hour of very intense exercise. 
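For the formula-inclined, the whole estimate fits in a small script. This is just a sketch of the formula and coefficients above (function and variable names are mine, and the spreadsheet remains the authoritative version):

```python
# Sketch of the post's VO2max estimation formula. Coefficients are
# copied from the text; everything else is my own naming.
def magic_constant(age, bmi_category, smoking_status, tpa):
    # bmi_category: normal=0, overweight=1, obese=2
    # smoking_status: never=0, former=1, current=2
    # tpa: highly active=0, active=1, moderate=2
    return 17.27 - 0.08 * age - 0.59 * bmi_category - 0.40 * smoking_status + 0.14 * tpa

def estimated_hr_max(age):
    # Fallback when you haven't measured a true maximum heart rate.
    return 208 - 0.7 * age

def vo2max(hr_max, hr_rest, age, bmi_category=0, smoking_status=0, tpa=2):
    return (hr_max / hr_rest) * magic_constant(age, bmi_category, smoking_status, tpa)

# Hypothetical example: 40-year-old, resting HR 60, no measured max.
est = vo2max(estimated_hr_max(40), 60, 40)
print(est, "ml O2/kg/min =", est / 3.5, "METs")
```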

You may be tempted to use wearable-calculated VO2max. This is a bad idea because your device has no way to separate how hard you are working from how hard your heart is beating (Apple Watch attempts this, but simplifies things by assuming all exercise is running on a flat surface).

What are you aiming for?  Here is a convenient chart (source). This is measured in ml/kg/min, not METs.

How can I raise my VO2Max?

The best exercise is still the one you do consistently without injuring yourself. Optimization within that is for people who have many choices they enjoy, or who don’t enjoy any but can nonetheless force themselves to work out reliably. 

The next best exercises appear to be rich people sports (lifespan-wise, you’re better off being an amateur racquetballer than an Olympic marathoner, despite racquetball’s barely-above-average VO2max). I didn’t find numbers for polo players but I assume they’re stunning. We’re going to ignore these findings even though the papers claim to have controlled for income.

After that, you have two choices: high intensity interval training (HIIT), and using cross country skiing as your regular mode of transportation. 

Why those two? No one has proven an answer, but my wild ass speculation is that you raise VO2max by proving to your body that your existing VO2max is insufficient. You do this by operating at capacity. Since it’s impossible to operate at peak capacity for very long, this can be done in the form of interval training, or by working at near-peak capacity for so long that it uses up your reserves. Or so I surmise.

Back to the literature: within interval training, the number one most important property is still that you do it at all, followed by how much you do it, with one possible cheat. According to this meta-analysis even short interval, low volume, low calendar-time was beneficial, but in order to beat moderate-intensity exercise you need to work a little harder: intervals of >2 minutes, total time of >15 minutes, and at least 4 weeks (number of times per week was not specified, but in other papers it was 2-3).

What’s the cheat? Repeated Sprint Training (RST), in which you go absolutely balls out for 10 seconds and then take a nice 2-4 minute gentle stroll. I love RST because there’s a little bit of lag between working very hard and being miserable, and that lag is longer than the interval. By the time the misery catches up with me I’ve already stopped trying. So I’d really like to believe this, but ShortIT (10-30 second intervals) scored poorly relative to longer intervals, so there’s either some sort of horseshoe effect or the success of RST is a mirage. 

Here is the full chart from that paper, which is beautiful except for its absolutely incomprehensible labels. Translations below. 

Within Training Periods (how many weeks people exercised according to the plan), the options are short (<= 4 weeks), medium, and long term (>=12 weeks).

Within Session Volume, the options are low (<=4 cumulative minutes under load), medium, and high (>=16 minutes of work). Please join me in a moment of annoyance that “L” sometimes means smallest (low) and sometimes biggest (long).

Within Work Intervals (duration of a single intense bout), the options are short, medium, long, very very short (SIT) (10-30s) and itty bitty (RST) (10s).

MICT stands for “moderate-intensity continuous training”, aka non-HIIT exercise. CON stands for control. The longer you go (in calendar time) the less of an advantage HIIT has over MICT, which suggests they are both approaching the same asymptote; HIIT just gets there faster.

SMD stands for “standardized mean difference”, which is the difference of the means of the treatment and control groups, divided by the standard deviation. The size of SMD differs between the treatment groups, but you can round it to 3 ml O2/kg body weight/minute.
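For concreteness, here is roughly what that calculation looks like. This uses a Cohen's-d-style pooled standard deviation; the paper's exact pooling may differ, and the numbers are made up for illustration:

```python
# Standardized mean difference, Cohen's-d style (pooled SD).
from statistics import mean, stdev

def smd(treatment, control):
    sd_t, sd_c = stdev(treatment), stdev(control)
    n_t, n_c = len(treatment), len(control)
    pooled_sd = (((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                 / (n_t + n_c - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Toy post-intervention VO2max values (ml O2/kg/min), purely illustrative:
print(round(smd([48, 52, 50, 54], [45, 47, 46, 48]), 2))
```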

What if I already exercise?

In one study, even Olympic athletes were able to raise VO2max via HIIT training (albeit slower than couch potatoes). If you’re not specifically targeting peak capacity, you can probably improve it. However I believe this asymptotes, so if you’re already doing HIIT in particular there may not be many gains left on the table. The client who commissioned this research was a hard-core pilates practitioner and he did not find HIIT to increase his VO2max.

Next Steps

I am not a doctor, nor do I hold any other relevant qualifications. But if you’re full of inspiration to follow up on this, here is my suggested plan:

  1. Estimate your VO2Max as described above, or use the spreadsheet.
  2. Identify a form of exercise that is highly accessible to you, that can be done safely at very high intensity.
    1. The more of your body it uses the better, but prioritize lowering obstacles. If your office only has an exercise bike, that’s better than needing to travel to an elliptical, even though the elliptical uses your arms and the bike doesn’t.
  3. If you’re new to exercise, spend a few sessions playing around on your activity of choice, to get a sense of where your limits are.
  4. If you believe the research on RST (10 seconds of peak exertion followed by 3 minutes of barely moving. If your environment is cold enough you shouldn’t even sweat), do that. 
  5. If you don’t believe the research on RST, gradually increase your time under intensity until you reach 4 non-continuous minutes under intense load.
    1. If your intense intervals are longer than 2 minutes they’re probably not actually peak intensity, so you should have at least 2. 
    2. Especially at first, aim for sustainability rather than peak achievement. If going 20% slower is the difference between quitting or sticking through it, slower is obviously the correct choice. You can build up over time. 
  6. After 6 weeks, estimate VO2max again. The meta-analysis described above suggests you can expect at least 1 MET (3.5 ml O2/kg body weight/min) over 6 weeks. 

Thanks to anonymous client and my Patreon patrons for supporting this post.

Can we rescue Effective Altruism?

Last year Timothy Telleen-Lawton and I recorded a podcast episode talking about why I quit Effective Altruism and thought he should too. This week we have a new episode, talking about what he sees in Effective Altruism and the start of a road map for rescuing it. 

Audio recording

Transcript

Thanks to everyone who listened to the last one, and especially our Manifund donors, my Patreon patrons, and the EAIF for funding our work.

Luck Based Medicine: No Good Very Bad Winter Cured My Hypothyroidism

I’ve previously written about Luck Based Medicine: the idea that, having exhausted all the reasonable cures for some issue, you are better off just trying shit rather than trying to reason more cures into existence. I share LBM success stories primarily as propaganda for the concept: the chance any one cure works for anyone else is <10% (sometimes much less), but a culture where people try things and share their results is how the rare wins get found.

I’ve also previously written about my Very Unlucky Winter. My mattress developed mold, and in the course of three months I had four distinct respiratory infections, to devastating effect. A year later I am still working my way through side effects like asthma and foot pain. 

But, uh, I also appear to have cured my hypothyroidism, and the best hypothesis as to why is all the povidone iodine I gargled for all those respiratory infections.

Usually when I discuss fringe medicine I like to say “anything with a real effect can hurt you”, because it’s a nice catchall for potential danger. In this case, I can be more direct: anything that cures hypothyroidism has a risk of causing hyperthyroidism. The symptoms for this start with “very annoying” and end at “permanent disability or death”, so if you’re going to try iodine, it absolutely needs to be under medical supervision with regular testing. 

All that said…

I was first diagnosed with hypothyroidism 15 years ago, and 10 years ago tried titrating off medication but was forced back on. My thyroid numbers were in the range where mainstream MDs would think about treating and every ND, NP, or integrative MD would treat immediately. 

Low iodine can contribute to hypothyroidism, and my serum iodine tested at low normal for years, so we had of course tried supplementing iodine via pills, repeatedly, to no result. No change in thyroid and no change in serum iodine levels.

In January of the Very Unlucky Winter, I caught covid. I take covid hard under the best of circumstances and was still suffering aftereffects from RSV the previous month, so I was quite scared. Reddit suggested gargling povidone iodine and after irresponsibly little research, I tried it. My irresponsibility paid off in that the covid case was short and didn’t reach my lungs. I stopped taking iodine when I recovered but between all the illnesses, potential illnesses, and prophylactic use I ended up using it for quite a long period.

My memories of this time are very fuzzy and there were a lot of things going on, but the important bits are: I developed terrible insomnia, hand tremors, and temperature regulation issues. These had multiple potential explanations, but one of them was hyperthyroidism so my doctor had me tested. Sure enough, I had healed my thyroid to the point my once-necessary medication was giving me hyperthyroidism. 

Over the next few months I continued gargling with iodine and titrating my medication down. After ~6 months I was off it entirely. I’ve since been retested twice (6 weeks and 20 weeks after ceasing medication) and it looks like I’m clean. 

Could this have been caused by something besides iodine? I suppose, and I was on a fantastic number of pills, but I can’t figure out what else it could be. Hypothyroidism has a very short list of curable underlying causes, and none of them are treated by anything I was taking. 

So why did gargling iodine work when pills didn’t? It could be the formulation, but given my digestive system’s deep-seated issues, I’m suspicious that the key was letting the iodine be absorbed through the mucous membrane of the throat, rather than attempting absorption through the gut. If that’s true, maybe I can work around my other unresponsive vitamin deficiencies by using sublingual multivitamins. I started them in June and am waiting to take the relevant test.

Thank you to my Patreon patrons for their support of this work. 

There is a $500 bounty for reporting errors that cause me to change my beliefs, and an at-my-discretion bounty for smaller errors. 

(Salt) Water Gargling as an Antiviral

Summary

Over the past year I’ve investigated potential interventions against respiratory illnesses. Previous results include “Enovid nasal spray is promising but understudied”, “Povidone iodine is promising but understudied” and “Humming will solve all your problems no wait it’s useless”. Two of the iodine papers showed salt water doing as well or almost as well as iodine. I assume salt water has lower side effects, so that seemed like a promising thing to check. I still believe that, but that’s about all I believe, because papers studying gargling salt water (without nasal irrigation) are few and far between. 

I ended up finding only one new paper I thought valuable that wasn’t already included in my original review of iodine, and it focused on tap water, not salt water. It found a 30% drop in illness when gargling increased in frequency from 1 time per day to 3.6 times, which is fantastic. But having so few relevant papers with such small sample sizes has a little alarm going off in my head screaming publication BIAS publication BIAS. So this is going in the books as another intervention that is promising but understudied, with no larger conclusions drawn. 

Papers

Estimating salivary carriage of severe acute respiratory syndrome coronavirus 2 in nonsymptomatic people and efficacy of mouthrinse in reducing viral load: A randomized controlled trial

Note that despite the title, they only gave mouthwashes to participants with symptoms.

This study had 40 participants collect saliva, rinse their mouth with one of four mouthwashes, and then collect more saliva 15 and 45 minutes later. Researchers then compared the viral load in the initial collection with the viral load 15 and 45 minutes later. The overall effect was very strong: 3 of the washes had a 90% total reduction in viral load, and the loser of the bunch (chlorhexidine) still had a 70% reduction (error bars fairly large). So taken at face value, salt water was at least as good as the antiseptic washes.

(Normal saline is 0.9% salt by weight, or roughly 0.1 teaspoons salt per 4 tablespoons water)

[ETA 11/19: an earlier version of this post incorrectly stated 1 teaspon per 4 tablespoons. Thank you anonymous]
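If you want to check the saline arithmetic yourself, here is the back-of-envelope version. The kitchen conversions are rough assumptions of mine (1 tablespoon of water as 15 g, 1 teaspoon of table salt as 6 g):

```python
# Sanity check on "0.9% saline is roughly 0.1 tsp salt per 4 tbsp water",
# using assumed kitchen conversions: 1 tbsp water ~ 15 g, 1 tsp salt ~ 6 g.
water_g = 4 * 15            # 4 tablespoons of water
salt_g = 0.009 * water_g    # 0.9% by weight, approximating solution weight by water weight
tsp_salt = salt_g / 6
print(round(tsp_salt, 2))   # roughly 0.1 teaspoons
```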

This graph is a little confusing: both the blue and green bars represent a reduction in viral load relative to the initial collection. Taken at face value, this means chlorhexidine lost ground between minutes 15 and 45, peroxide and saline did all their work in 15 minutes, and iodine took longer to reach its full effect.  However, all had a fairly large effect.

My guess is this is an overestimate of the true impact, because I expect an oral rinse to have a greater effect on virions in saliva than in cells (where the cell membrane protects them from many dangers). Saline may also inflate its impact by breaking down dead RNA that was detectable via PCR but never dangerous.

The short-term effect of different chlorhexidine forms versus povidone iodine mouth rinse in minimizing the oral SARS-CoV-2 viral load: An open label randomized controlled clinical trial study

This study had a fairly similar experimental setup to the previous: 12 people per group tried one of three mouthwashes, or a lozenge. Participants collected saliva samples immediately before and after the treatments, and researchers compared (a proxy for) viral loads between them.

Well, kind of. The previous study calculated the actual viral load and compared before and after. This study calculated the number of PCR cycles they needed to run before reaching detectable levels of covid in the sample. This value is known as cycle threshold, or Ct. It is negatively correlated with viral load (a smaller load means you need more cycles before it becomes detectable), but the relationship is not straightforward: it depends on the specific virus, the machine setup, and the existing cycle count. So you can count on a higher Ct count representing an improvement, but a change of 4 is not necessarily twice as good as a change of 2, and a change from 30->35 is not necessarily the same as a change from 20->25. The graph below doesn’t preclude the authors correcting for this, but doesn’t prove they did so either. My statistician (hi Dad) says they confirmed a normal distribution of differences in means before the analysis, which is somewhat comforting.
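To see why Ct changes don't add up linearly: even in the idealized case where every PCR cycle exactly doubles the target, the implied viral-load fold change grows exponentially in the Ct difference (real assays deviate from perfect doubling, which is the further caveat above):

```python
# Idealized PCR: each cycle multiplies the target by (1 + efficiency),
# so a Ct shift of delta_ct implies a (1 + efficiency)**delta_ct fold change.
def fold_change(delta_ct, efficiency=1.0):
    return (1 + efficiency) ** delta_ct

print(fold_change(2))   # 4-fold reduction in implied viral load
print(fold_change(4))   # 16-fold, not merely twice the delta_ct=2 effect
```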

This study found a significant effect for iodine and chlorhexidine lozenges, but not saline or chlorhexidine mouthwash. This could be accurate, an anomaly from a small sample size, or an artifact of the saline group having a higher starting Ct value (=lower viral load) to start from.

Prevention of upper respiratory tract infections by gargling: a randomized trial

This study started with 387 healthy volunteers and instructed them to gargle tap (not salt) water or iodine at least three times a day (the control and iodine group also gargled water once per day). For 60 days volunteers recorded a daily symptom diary. This setup is almost everything I could ask for: it looked at real illness over time rather than a short-term proxy like viral load, and adherence was excellent. Unfortunately, the design had some flaws.

Most notably, the study functionally only counted someone as sick if they had both nose and throat symptoms (technically other symptoms counted, but in practice these were rare). For a while I was convinced this was disqualifying, because water gargling could treat the pain of a sore throat without reducing viral load. However the iodine group was gargling as often as the frequent water garglers, without their success. Iodine does irritate the throat, but gargling iodine 3 times per day produced about as much illness as water once per day. It seems very unlikely that iodine’s antiviral and throat-irritant properties would exactly cancel out.

Taking the results at face value, iodine 3x/day + water 1x/day was no better than water 1x/day on its own. Water 3.6x/day led to a 30% reduction in illness (with illness implicitly defined as requiring throat symptoms).

The paper speculates that iodine failed because it harmed the microbiome of the throat, causing short-term benefits but long-term costs. I liked this explanation because I hypothesized that problem in my previous post. Alas, it doesn’t match the data. If iodine traded a short-term benefit for long-term cost, you’d expect illness to be suppressed at first and catch up later. This is the opposite of what you see in the graph for iodine. However it’s not a bad description of what we see for frequent water gargling: at 15 days, 10% more of the low-frequency water garglers have gotten sick. At 50 days it’s 20% more, fully double the proportion of sick people in the frequent water gargler group. Between days 50 and 60, the control group stays almost flat, while the frequent water garglers go up 10 percentage points.

What does this mean? Could be noise, could be gargling altering the microbiome or irritating the throat, could be that the control group ran out of people to get sick. Or perhaps some secret fourth thing.

None of the differences in symptoms-once-ill were significant to p<0.05, possibly as a result of their poor definition of illness, or the fact that the symptom assessment was made a full 7 days after symptom onset.

Assuming arguendo that gargling water works, why? There’s an unlikely but interesting idea in another paper from the same authors, based on the same data. They point to a third paper that demonstrated dust mite proteins worsen colds and flus, and suggest that gargling helps by removing those dust mite proteins. Alas, their explanation of why this would help for colds but not flus makes absolutely no goddamn sense, which makes it hard to trust an already shaky idea. 

A boring but more reasonable explanation is that Japanese tapwater contains chlorine, and this acts as a disinfectant. 

Dishonorable Mention: Vitamin D3 and gargling for the prevention of upper respiratory tract infections: a randomized controlled trial

I silently discarded several papers I read for this project but this one was so bad I needed to name and shame.

The study used a 2×2 design examining vitamin D and gargling with tap water. However it was “definitively” underpowered to detect interactions, so they pooled the arms into gargling (with and without vitamin D) vs. no gargling (with and without vitamin D), without looking for any interaction between vitamin D and gargling. This design is bad and they should feel bad.

Conclusion

Water (salted or not) seems at least as promising an antiviral as other liquids you could gargle, with a lower risk of side effects. So if you’re going to gargle, it seems like water is the best choice. However I still have concerns about the effect of long-term gargling on the microbiome, so I am restricting myself to high-risk situations or known illness. However the data is sparse, and ignoring all of this is a pretty solid move.

Thank you to Lightspeed Grants and my Patreon patrons for their support of this work. Thanks to Craig Van Nostrand for statistical consults.

There is a $500 bounty for reporting errors that cause me to change my beliefs, and an at-my-discretion bounty for smaller errors. 

Chaos Theory in Ecology

One of the reasons I got into chaos theory as a model paradigm shift was the famous Gleick book on chaos. One of the reasons I believed the Gleick book was trustworthy was that its description of chaos in ecology and population biology matched what I learned in college, 25 years later. Recently I learned that the professor who taught me was one of maybe 3 theoretical ecologists in the country who taught or believed in chaos having applications to ecology at the time. Perhaps I should have been more suspicious that he was writing his own textbook.

However chaos is back in vogue in ecology, and attempts are in progress to make it pay rent. In this latest podcast episode I talk with Drs. Stephen Munch and Tanya Rogers (both work at NOAA, but were speaking as private citizens) about their application of chaos theory to ecology and fisheries management.

Most interesting takeaways:

  • You can translate some physics techniques into ecology, despite the smallest dataset in physics being 100x larger than the largest ecological dataset. 
  • The work discussed in this episode, and perhaps all of chaos in ecology, is downstream of one physicist turned mathematician and biologist (Robert May).
    • Doyne Farmer (a founding chaotician) talks about physics colonizing finance and economics due to a bad job market, which has me thinking scientific progress comes from hyping a field so the smartest people get deep into it, and then denying them jobs so they’re forced to colonize other fields.
  • Empirical Dynamical Modeling allows you to substitute past observations of known variables for current observations of unknown variables. This gets you a longer prediction horizon than you could otherwise get with only the known variables.
  • There is a salmon forecasting prize and it pays $2000-$5000 cash
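The lagged-coordinates idea in the Empirical Dynamical Modeling bullet can be sketched in a few lines. This is a toy nearest-neighbor forecast on chaotic logistic-map data, in the spirit of methods like simplex projection; it is not the guests' actual method, and all names are mine:

```python
# Delay embedding: rebuild a state from lagged copies of one observed
# variable, then forecast the next value from the nearest embedded neighbor.
def logistic_series(n, r=3.9, x0=0.4):
    # Chaotic logistic map as a stand-in for an ecological time series.
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

def embed(xs, dim=2):
    # Each state is (x[t-dim+1], ..., x[t]); its label is x[t+1].
    pts, labels = [], []
    for t in range(dim - 1, len(xs) - 1):
        pts.append(tuple(xs[t - dim + 1:t + 1]))
        labels.append(xs[t + 1])
    return pts, labels

def nn_forecast(pts, labels, query):
    # Predict with the successor of the nearest embedded neighbor.
    _, pred = min(
        (sum((a - b) ** 2 for a, b in zip(p, query)), y)
        for p, y in zip(pts, labels)
    )
    return pred

xs = logistic_series(2000)
pts, labels = embed(xs[:-1])               # train on history
pred = nn_forecast(pts, labels, (xs[-3], xs[-2]))
print(abs(pred - xs[-1]))                  # small one-step error despite chaos
```

With a dense enough attractor, the nearest past state behaves almost like the current one, which is why past observations can substitute for unmeasured variables.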

I’ve had some requests to include transcripts in the body of the post rather than a separate document. I’ll try that this time, and if you don’t like it, please complain.

Thank you to my Patreon Patrons for their support.

Chaos in Theoretical Ecology

[00:00:00] Elizabeth: Hey, this is Elizabeth Van Nostrand. Today I’m going to talk to two guests about the influence and applications of chaos theory on population biology and ecology. 

[00:00:10] Stephen Munch: I’m Steve Munch. I am an evolutionary ecologist, a mathematical ecologist. I work at NOAA Fisheries, and I’m an adjunct in Applied Math at UC Santa Cruz. I have an abiding interest in applying math to ecological and evolutionary problems. And for the past decade or so, I’ve been thinking a lot about chaos and nonlinear forecasting and its potential role as a tool in ecosystem management.

[00:00:38] Tanya Rogers: I’m Tanya Rogers. I’m a research fish biologist here at NOAA Fisheries. My background is in ecology, and I got more interested in population dynamics and modeling in graduate school and in how that can be applied to solving ecological problems. Steve was my postdoctoral advisor and we continue to collaborate on projects. 

[00:01:02] Elizabeth: You guys co-wrote several papers on chaos and empirical dynamic modeling in biology, and especially in conservation and wildlife management.

[00:01:12] Stephen Munch: Primarily fisheries, but, the math is the same, whether it’s a bird or a fish. 

[00:01:16] Elizabeth: My recollection from college was that fisheries was the place one made money with population biology.

[00:01:24] Tanya Rogers: Well, I think fisheries certainly makes itself a lot of money and there’s a lot of interest in ensuring that fisheries are sustainable and profitable. And so there’s a lot of interest in making sure that our management is as good as it can be and that we’re, using the best models possible for fisheries. 

[00:01:45] Stephen Munch: My Ph.D. advisor once said that, uh, you know, a lot of people in the oceanography program look down on fisheries, but fisheries employs more marine ecologists than any other subdiscipline. So it’s not a bad bet if you would like to have a job after grad school. 

[00:02:01] Elizabeth: And you’re applying chaos theory right now to fisheries management, right?

[00:02:05] Stephen Munch: Well, I’m applying it right now to fisheries data in the hopes of getting this stuff used in management. There’s the fishery for shrimp in the Gulf of Mexico, which is a federally managed fishery where they’re exploring using EDM to set harvest policy and next year’s, landings targets.

[00:02:28] Elizabeth: Uh, could you explain EDM before we go further?

[00:02:32] Tanya Rogers: Empirical dynamic modeling, or EDM, is a way of modeling a dynamical system when we have incomplete knowledge and incomplete data about that system, as is often the case in ecosystems. It does so in a way that preserves the dynamical properties of that system, including chaos, and allows us to make better short-term predictions in chaotic systems without making a lot of assumptions.

[00:02:55] Tanya Rogers: So EDM has two main features. The first is that it’s a nonparametric approach that makes few assumptions about the functional forms of relationships. And the second is that it uses time lags to account for unobserved variables. To explain this further: the relationship between some value, say fish abundance, and its past values is going to follow the relationship in the data rather than some predefined functional form.

[00:03:22] Tanya Rogers: It can also happen that some of the apparent noise around this relationship can be explained by adding additional dimensions. For example, the abundance of a prey species. So perhaps we can predict fish abundance better using past fish abundance and past prey abundance. Now it may be the case that we don’t have data on prey abundance, in which case you can actually substitute an additional lag of fish abundance.

[00:03:45] Tanya Rogers: So not just fish last year, but also fish two years ago. And you’ll get a different-looking relationship, but it will still do a pretty good job at predicting abundance. Why this works has to do with Takens’ delay embedding theorem, but the point is that missing variables create memory in the system.

[00:04:04] Tanya Rogers: And so you can use lags of observed variables as substitutes for unobserved variables. What this means practically in population biology is that we model population size as some function of lags of past population sizes, and this function is fit using nonparametric methods.
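A minimal sketch of what Tanya is describing, in Python. This is a toy for intuition, not the software the guests use: the logistic map stands in for a population time series, lag vectors serve as surrogate state variables, and a simple nearest-neighbor average plays the role of the nonparametric predictor (all names here are illustrative).

```python
import numpy as np

def logistic_series(n, r=3.9, x0=0.4):
    """Toy chaotic 'population' time series (the logistic map)."""
    x = np.empty(n)
    x[0] = x0
    for t in range(n - 1):
        x[t + 1] = r * x[t] * (1 - x[t])
    return x

def delay_embed(x, E):
    """Lag-coordinate vectors: row t is (x[t], x[t-1], ..., x[t-E+1])."""
    return np.column_stack([x[E - 1 - i: len(x) - i] for i in range(E)])

def knn_forecast(train, E=2, k=5, steps=30):
    """Iterated forecast: predict the next value as the average of the
    successors of the k nearest lag-space neighbours (a crude stand-in
    for EDM's nonparametric predictor)."""
    emb = delay_embed(train, E)
    X, y = emb[:-1], train[E:]          # lag vectors and their next values
    state = emb[-1]
    preds = []
    for _ in range(steps):
        d = np.linalg.norm(X - state, axis=1)
        nxt = y[np.argsort(d)[:k]].mean()   # average of neighbours' futures
        preds.append(nxt)
        state = np.concatenate(([nxt], state[:-1]))
    return np.array(preds)

x = logistic_series(630)
train, held_out = x[:600], x[600:]
pred = knn_forecast(train, E=2, k=5, steps=30)
err_short = np.abs(pred[:3] - held_out[:3]).mean()     # near-term error
err_long = np.abs(pred[-10:] - held_out[-10:]).mean()  # horizon error
```

Run on a few hundred points, the first few forecasts track the held-out data closely while later ones drift, which is exactly the longer-but-still-finite prediction horizon under discussion.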

[00:04:38] Tanya Rogers: So chaos, and using methods that can accommodate chaos, like EDM, matters for ecological forecasting because it affects how far you can realistically predict into the future. Chaotic dynamics, unlike random dynamics, are deterministic and predictable in the short term. And so if chaos is mischaracterized as noise around an equilibrium, you’re going to miss out on that short-term predictability and make worse forecasts than you otherwise could.

[00:05:07] Tanya Rogers: Long-term forecasts will also be inaccurate and overconfident if you assume the system is just going to converge to an equilibrium. In terms of ecological inference, the sensitivity to initial conditions that results from chaos might also help explain why some experimental replicates with seemingly identical starting conditions sometimes end up in totally different places.

[00:05:28] Elizabeth: How do you determine sensitivity to initial conditions or whether it’s just random?

[00:05:37] Tanya Rogers: Well, part of it is determining whether or not it’s chaotic. 

[00:05:40] Tanya Rogers: There are a variety of methods for detecting chaos, which we explore in our paper. Many of them use EDM or a similar form of time delay embedding to reconstruct the dynamics in a flexible way. And from that, estimate some quantities such as the Lyapunov exponent, which quantifies the deterministic divergence rate of nearby trajectories.
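To make the Lyapunov exponent concrete, here is a toy calculation for a system whose equations we do know, the logistic map at r = 4, where the exponent is exactly ln 2 ≈ 0.693. (The methods in the paper are harder: they have to estimate this from short, noisy data by reconstructing the dynamics first.)

```python
import numpy as np

def lyapunov_logistic(r=4.0, x0=0.3, n=100_000, burn=1_000):
    """Average log|f'(x)| along an orbit of x -> r*x*(1-x).
    A positive value means nearby trajectories diverge
    exponentially on average, i.e. the map is chaotic."""
    x = x0
    total = 0.0
    for i in range(n + burn):
        x = r * x * (1 - x)
        if i >= burn:                     # skip transient
            total += np.log(abs(r * (1 - 2.0 * x)))
    return total / n

lam = lyapunov_logistic()  # exact value at r=4 is ln(2), about 0.693
```

A divergence rate of ln 2 per step means two trajectories starting a hair apart double their separation, on average, every iteration, which is why the forecast horizon grows only logarithmically with measurement precision.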

[00:06:02] Speaker 4: The idea is actually really simple: you take two points, two states of the system, that are initially close together. In day-to-day experience, things are predictable, deterministic: you do the same thing twice in a row, you expect to get the same answer. You do slightly different things twice in a row, you expect to get slightly different answers, right? And that’s where chaos is really different.

[00:06:30] Speaker 4: You do slightly different things and you get really different answers, if you wait long enough. That’s the important part. That’s the difference between something that’s random and something that’s chaotic. If something’s random, you do two slightly different things and you get two different answers immediately.

[00:06:47] Speaker 4: Whereas in chaos, things become effectively random if you wait long enough. But there’s the period between then and now where you can see the dynamics unfolding, and that makes things predictable, at least over the short term.

[00:07:03] Tanya Rogers: So in that paper we explored several different methods that are used to detect chaos. Many of them were developed in physics but had not been tested on time series of ecologically relevant lengths, which is to say short ones, and with ecologically relevant levels of observation error.

[00:07:26] Stephen Munch: To give you some context for that, a lot of those papers test things on “short” time series, which have 5,000 observations. Ecological time series that are long are 50 years, which is one data point per year.

[00:07:42] Elizabeth: That would be an astonishing data set.

[00:07:45] Stephen Munch: Right. Yeah, so, you know, two people’s careers is 50 years of ecological data collection.

[00:07:55] Stephen Munch: So, very different standards in terms of time series length. So it was an open question whether any of these things would work on ecologically relevant timescales. And there are definitely things you would miss with only 50 time points that you would love to see if you had 5,000.

[00:08:15] Tanya Rogers: We found that three of the six methods we tried did not work very well at all, but three performed reasonably well, and in the presence of observation error they were more likely to not detect chaos when it’s present than to detect chaos when it’s absent. 

[00:08:31] Elizabeth: One of the things that attracted me to chaos initially was that techniques developed in one field could be applied to a seemingly completely unrelated field. So I would love it if you could get into the details of how you chose what to port over from physics and what you had to change.

[00:08:51] Tanya Rogers: I think whether we’re talking about complex physical systems or complex ecological systems, the concepts are very much the same. The main differences, I think, are in terms of data availability, observation error, the time scales on which the dynamics occur, and also how well we understand the underlying dynamics.

[00:09:10] Stephen Munch: The biggest hurdle to having chaos come over to biology is all of the mathematical jargon. 

[00:09:16] Elizabeth: So what you guys discovered is that maybe there are many more chaotic, not random, ecosystems or species than we thought. And this has implications for managing populations in the short run.

[00:09:29] Tanya Rogers: In our study, we found that chaos wasn’t rare in a database of ecological population time series. It wasn’t the majority of time series, but chaos wasn’t rare enough to be ignorable, particularly for short-lived species.

[00:09:42] Stephen Munch: So since chaos theory reached its heyday in the late 90s and early 2000s, people have arrived at the conclusion that chaos is rare in ecology, and “rare” is hardly ever defined quantitatively, right? People frequently say, well, chaos is rare, therefore we are safe in assuming equilibrium. Chaos is rare, therefore we are safe in using linear models to approximate dynamics. 

[00:10:14] Elizabeth: Am I correct that that doesn’t make sense regardless of chaos? You can have non-chaotic, nonlinear dynamics.

[00:10:22] Stephen Munch: Right. There is that. But most of the time, the context is that we like to imagine that ecological systems are stable, and given that they are stable, they will recover from some small perturbation.

[00:10:37] Elizabeth: That there’s some equilibrium point that is definitely self-reinforcing and may be an attractor.

[00:10:43] Stephen Munch: Yeah. And so a lot of the time the math comes after you assume stability. If stability is the foundation from which we’re going, then you can approximate things with linear dynamics reasonably well.

[00:10:58] Stephen Munch: You can have some hope of assuming an equilibrium and not being terribly wrong, but if things are chaotic and not stable, then that’s not true. 

[00:11:10] Elizabeth: So if you have a fish population that you are trying to manage for maximum yield, and you think there’s some equilibrium, what you try to do is not disturb things away from the equilibrium too much. But what happens if it’s chaotic?

[00:11:24] Stephen Munch: So I think probably, in terms of management, the biggest change in perspective is that a state-dependent policy can do a lot better than one that just does the same thing all the time. If you imagine that things are at equilibrium and stable, then you can set a harvest policy and let it go.

[00:11:50] Stephen Munch: And sometimes you’ll be over, sometimes you’ll be under, but all in all it’ll come out in the wash, and the average return will be more or less what you predict for the steady state. If things are chaotic, when you’re over or under, you’ll just keep going off in some direction that you hadn’t really been expecting.

[00:12:09] Stephen Munch: And so a better policy would be one where you say, okay, when we’re in this state, you can harvest this much. When fish abundance is low, or has been low for several years, you need to change to a different harvest strategy. When things have been high for several years, you need to change to a different strategy.

[00:12:26] Stephen Munch: And you can do a lot better that way than by trying to stick with exactly the same thing.

[00:12:31] Elizabeth: Is anyone doing that?

[00:12:33] Stephen Munch: That is what we’re trying to do with the shrimp fishery in the Gulf of Mexico, and what we’re trying to do with the squid fishery in California. Importantly, both of these are short-lived species that have huge fluctuations in abundance, where the typical mental model is that dynamics are being driven by unpredictable environmental variation. That’s in contrast to long-lived species, things like the rockfish on this coast, which typically live for decades and whose dynamics are much smoother, so a lot of the standard fisheries approaches work out okay because the dynamics are so slow to change. But in these short-lived species, the dynamics are much faster and they fluctuate much more dramatically, which is why I think there’s a reason to try applying EDM or chaos theory to managing them. And it turns out that in these species, where we typically say, oh, most of that fluctuation is due to environmental variation, we actually have reasonably good predictability: two or three times the predictability that we have with our standard steady-state models.

[00:13:48] Elizabeth: Oh, so there was a rapid change that was put down to change in the environment, and you can predict that it would have happened using a deterministic model.

[00:13:57] Stephen Munch: Yeah, this is where we’re using the empirical dynamic modeling stuff, where the idea is you use past values of the abundance, or whatever the system variable is. In this case it’s the past abundances of shrimp or squid that tell us where the abundance is likely to go next. It’s not just this year’s.

[00:14:20] Elizabeth: Tanya, are you working on that too?

[00:14:25] Tanya Rogers: I’ve been helping support the applications that Steve is discussing, exploring how we can use EDM to improve predictability and manage species, and also when, where, and in which species we expect to see chaos in ecology more broadly.

[00:14:44] Tanya Rogers: So the idea is that if using time lags of our observed variables as additional coordinates gets us better predictions, that tells us there are state variables missing, and we could potentially do better if we had additional data on those variables, if we can figure out what they are.

[00:14:58] Elizabeth: So the idea is, if you have a 50-variable system and you could fill in every variable, that would be enough. You could just use your model to predict the next state. But if you only have five of those variables, and you use just those five, predictions are bad; whereas if you track those five variables through the past, that gives you some insight into the missing 45.

[00:15:23] Stephen Munch: Right. Yeah. 

[00:15:24] Stephen Munch: There’s a mathematical subtlety there, and I don’t know if this is of interest, but if you start with a system that’s 50 variables, most of the time in things that have chaos, the dynamics are contracting. They actually don’t fill that 50-dimensional space.

[00:15:44] Stephen Munch: They’re actually constrained to an often much lower-dimensional shape called the attractor for the system. And it’s the dimension of that that tells you how many lags you need, or how far into the past you need to go, to reconstruct the dynamics. 

[00:16:01] Elizabeth: So, if the attractor has five dimensions, you need to go five steps into the past?

[00:16:06] Stephen Munch: It’s twice the attractor dimension plus one. So, 11.

[00:16:11] Elizabeth: Interesting. Is it possible to figure out the dimension of the attractor? How do you do that in practice?

[00:16:22] Tanya Rogers: The simplest way to go about that is to forecast with one, then two, and then three dimensions, and then continue until you get to a point where the prediction accuracy saturates.
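That procedure can be sketched with a hypothetical toy example (not the actual EDM workflow): observe only the x-coordinate of the two-dimensional Hénon map, as if we were counting a single species embedded in a larger system, and watch one-step forecast error as the embedding dimension E grows.

```python
import numpy as np

def henon_x(n, a=1.4, b=0.3):
    """x-coordinate of the Henon map: a 2-D chaotic system observed
    through a single variable."""
    x, y = 0.1, 0.1
    out = np.empty(n)
    for t in range(n):
        x, y = 1 - a * x * x + y, b * x
        out[t] = x
    return out

def one_step_error(series, E, k=4):
    """Mean one-step forecast error using k nearest neighbours in an
    E-dimensional lag embedding (toy version of the sweep Tanya describes)."""
    emb = np.column_stack([series[E - 1 - i: len(series) - i] for i in range(E)])
    X, y = emb[:-1], series[E:]
    n_train = int(0.8 * len(X))          # fit on the first 80%, test on the rest
    errs = []
    for i in range(n_train, len(X)):
        d = np.linalg.norm(X[:n_train] - X[i], axis=1)
        nn = np.argsort(d)[:k]
        errs.append(abs(y[nn].mean() - y[i]))
    return float(np.mean(errs))

s = henon_x(1200)
errors = {E: one_step_error(s, E) for E in (1, 2, 3, 4)}
```

Error drops sharply from E = 1 to E = 2, because the next x happens to be a deterministic function of the last two x values for this map, and further lags add little: that plateau is the saturation point at which you stop.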

[00:16:31] Elizabeth: So you’re applying EDM to fisheries in particular. You’ve got the shrimp, you’ve got the squid, when will you know if it’s working? How will you know if it’s working?

[00:16:41] Stephen Munch: Well, there’s working and there’s working, right? I mean, so in terms of being able to make better predictions than we can with the models we’ve been using so far, that is working now. In terms of knowing whether our revised harvest policy is going to work better than our historical harvesting policy, that’s going to take some time.

[00:17:06] Stephen Munch: You can really only get there by doing it. And so it’s a hard sell, right? To move to a whole new way of doing things in real life when you can’t really demonstrate that you’re sure it’s going to work.

[00:17:20] Elizabeth: I’m gonna ask you both to speculate wildly. Assuming you waved a magic wand and some fishery management of your choice started using your system, what would the improvement be?

[00:17:34] Stephen Munch: Well, I have no idea about that, but when we’ve simulated things, we typically do somewhere between 10 to 50 percent better harvest, depending on how the system really works.

[00:17:49] Elizabeth: When you say better, you mean more accurate.

[00:17:52] Stephen Munch: I mean 10 to 50 percent more in terms of sustainable harvest. And we almost always do somewhere between 20 to 50 percent better in terms of prediction accuracy.

[00:18:06] Elizabeth: That sounds really impressive. 

[00:18:08] Tanya Rogers: So the idea is that if populations are fluctuating a lot, we can predict those fluctuations and then harvest accordingly. This way fishers aren’t over-harvesting when abundances are low and won’t miss out on fish they could harvest sustainably when abundances are high. For instance, I work a bit on salmon forecasting, and there’s a lot of interest in making accurate predictions of salmon runs, or salmon returns, so managers can determine how much people can safely harvest versus allow to return to the spawning grounds to start the next generation.

[00:18:44] Tanya Rogers: For my work, at least, I developed a new forecast model for the Sacramento River winter-run Chinook salmon, which is an endangered run. Managers here want to know how many of these fish are going to be returning in order to set harvest rates so that the endangered run isn’t overly impacted by ocean fishing on non-endangered runs, since Chinook salmon are hard to tell apart when they’re out in the ocean.

[00:19:10] Tanya Rogers: And this is one where the time series are a little too short to do EDM with time lags. There’s only about 20 years of data, but we’ve been able to use nonparametric regressions and other methods to try to get better forecasts. And that model is currently in use for management, and it appears to be doing a much better job than the population model they’d been using previously.

[00:19:31] Elizabeth: So you did make substantial improvements in harvest.

[00:19:34] Tanya Rogers: Well, we’ve made improvements, at least in prediction. Salmon in California face a lot of challenges, not just fishing, and the fishery right now in California is closed due to low abundances of all stocks, so we’ll have to wait and see. 

[00:19:48] Tanya Rogers: Recently the salmon forecasting prize competition started. It’s something I participate in on the side, for fun, outside of work. They’ve been looking for people to develop models and submit forecasts for different salmon runs. This year’s was for sockeye in three different systems, with the hope of finding better prediction models than the ones currently in use.

[00:20:12] Elizabeth: Going back to some of the earlier work we were discussing: Steve, you mentioned you were bringing over a lot of stuff from physics, but it needed to be adapted. 

[00:20:23] Elizabeth: One of the reasons I got interested in chaos in particular was that it seemed like it should give you the ability to do work in one field and port it to five different fields. I’m really curious about every step of this, starting with: how did you find the tools you ended up porting over?

[00:20:43] Stephen Munch: Um, so the main tool is the empirical dynamic modeling stuff, which had its origins in the physics literature in the, um…

[00:20:59] Elizabeth: Oh, so EDM came over from physics? Do you know how it made that leap?

[00:21:04] Stephen Munch: Yeah, so there are a couple of seminal papers in the late 80s and early 90s. Sugihara and May in 1990 showed that you could do this nonlinear forecasting stuff in an ecological setting, and that you could make predictions of ecological dynamics without having to have a specific model formulation.

[00:21:37] Stephen Munch: A little bit prior to that, Bill Schaffer and Mark Kot had a paper on using time delays to try to reconstruct a low-dimensional projection of the dynamics. So their idea was very similar in spirit, using time lags to reconstruct things, but it didn’t quite take off as a tool for making practical forecasts.

[00:22:05] Stephen Munch: So that’s what Sugihara and May managed to do. But the idea of using time delays in lieu of a complete set of state variables comes initially from a paper by Packard et al. in 1980, and then a rigorous proof of the idea in 1981 by Takens.

[00:22:35] Elizabeth: So there were specific scientists who found it somehow.

[00:22:38] Elizabeth: I am very curious about the step before the papers get written. What drove people to find something outside their field, or why was someone already working in an interdisciplinary way and porting over these tools?

[00:22:54] Stephen Munch: So the really early people, right, like in the 70s, were Bob May, John Beddington, Bill Schaffer. They were all working on chaos in ecological dynamics from a theoretical point of view, and they were showing that with really low-dimensional models you can get effectively random-looking dynamics, and maybe that’s why ecological dynamics look as messy as they do. But there wasn’t any easy way to connect that to ecological time series.

[00:23:28] Stephen Munch: There were a couple of attempts to do that by fitting low-dimensional models to some time series data. Those generally concluded that things were not chaotic. Bob May actually has a really super quote in one of those papers that says fitting what is likely to be the high-dimensional dynamics of an ecological system to a low-dimensional model does great violence to the reality of ecology.

[00:23:53] Stephen Munch: That didn’t work. It was a reasonable thing to try when you don’t have too much data, but the idea just doesn’t really work. And then the time delay embedding stuff got invented, and those guys were part of the chaos community.

[00:24:08] Stephen Munch: It wasn’t like Bob May just saw that and said, oh yeah, I can grab that and bring it over, without any prep. He was already actively participating in theoretical chaos stuff.

[00:24:26] Elizabeth: When was he doing that?

[00:24:28] Stephen Munch: So his early work on chaos and ecological dynamics happens in the early 1970s.

[00:24:35] Stephen Munch: And when Takens’ delay embedding theorem happens, it does take a little while for people to pick it up and turn it into a practically useful tool. The ecologists and the physicists are so completely separate that it’s a miracle it makes it over. 

[00:24:50] Elizabeth: There were people who were already straddling the borders?

[00:24:53] Stephen Munch: Yeah,

[00:24:54] Elizabeth: Yeah. That’s hard to do in academia, isn’t it?

[00:24:57] Stephen Munch: Well, Bob May started in physics and came over to ecology from physics. And there have been a lot of people who’ve gone that way. He’s arguably the most successful physicist-turned-ecologist by a lot, but there are surprisingly few people who go the other way.

[00:25:19] Elizabeth: Yeah, I do notice physics seems to send out a lot more immigrants.

[00:25:26] S+T: I don’t know. Maybe the physics job market is just really tight.

[00:25:30] Elizabeth: I was just reading Doyne Farmer’s book on chaos and finance, and what he says is, “Well, there weren’t any more physics jobs, but they would pay us so much money to do finance.” 

[00:25:40] Stephen Munch: Yeah.

[00:25:42] Stephen Munch: Those were good times. 

[00:25:44] Stephen Munch: One of the really interesting things about theoretical ecology, then applied theoretical ecology, and then real boots-on-the-ground ecology, is that the level of math involved differs by like an order of magnitude between each one. So the theoreticians are real mathematicians. 

[00:26:06] Stephen Munch: And then the folks who do quantitative fisheries management are jacks of all trades. They know just enough math to do one thing, just enough math to do another thing, trying to put it all together with some statistics. And then there are the people who collect data, and those people often know very little math. If there was a physics-like revolution in theoretical ecology, I’m not sure, as one of the mid-level guys, I’d be aware of it.

[00:26:34] Elizabeth: interesting. 

[00:26:37] Elizabeth: In weather, which is so incredibly complicated, the big breakthrough was ensemble forecasting: you make a bunch of different forecasts, jiggling your assumptions a little bit, and that’s how you get a 30 percent chance of rain, because 30 percent of nearby worlds produced rain.
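The recipe Elizabeth describes is easy to sketch on a toy chaotic “population” model (purely hypothetical, not an actual weather or fisheries code): perturb the initial condition many times, run the model forward, and read the probabilistic forecast off the ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x, r=3.9):
    """One step of a toy chaotic population model (logistic map)."""
    return r * x * (1 - x)

def ensemble_probability(x0, horizon, n_members=1000, jiggle=0.01, thresh=0.5):
    """Run many copies of the model from slightly jiggled initial
    conditions; return the fraction of members below `thresh` at the
    horizon -- the analogue of '30% of nearby worlds produced rain'."""
    x = x0 + jiggle * rng.standard_normal(n_members)
    x = np.clip(x, 1e-6, 1 - 1e-6)
    for _ in range(horizon):
        x = step(x)
    return float(np.mean(x < thresh))

p_short = ensemble_probability(0.4, horizon=1)   # members still agree
p_long = ensemble_probability(0.4, horizon=50)   # chaos has spread them out
```

At a short horizon the ensemble members all agree, so the probability is near 0 or 1; at a long horizon chaos has spread them over the whole range of possible states and the forecast degenerates toward climatology.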

[00:26:55] Elizabeth: Has ensemble forecasting been tried in ecology or in wildlife management? 

[00:26:59] Stephen Munch: I’ve definitely run across papers where people have talked about ensemble forecasts for ecological dynamics or even super ensemble forecasts. But, I’m not aware that it’s made an enormous difference in terms of the predictions.

[00:27:14] Stephen Munch: I think maybe the biggest reason for that is that there aren’t too many people I’m aware of who argue that the Navier-Stokes equations, the things that govern the fluid dynamics that governs the weather, are wrong, right? We all kind of accept that the equations are the equations of fluid dynamics.

[00:27:35] Stephen Munch: And so the real uncertainties are in how you handle the boundaries. How do you model the mountains? How do you model the clouds? Those are the parts where we’re not certain. And so if we vary those, and we average over some amount of uncertainty in those boundary conditions and the initial conditions, we can take care of some of that and push a little farther into the future in terms of how far we can make reasonable predictions.

[00:28:01] Stephen Munch: In ecology, on the other hand, there isn’t the equivalent of the Navier-Stokes equations. There isn’t some first-principles model of how an ecosystem works that’s sufficient to make the kinds of predictions you might want to make.

[00:28:14] Elizabeth: That’s why you end up with something like EDM where you don’t need to know what you don’t know.

[00:28:19] Stephen Munch: There are two pillars to EDM. The first is what we talked about: you can accommodate the fact that you have incomplete observations.

[00:28:27] Stephen Munch: That is, you haven’t seen all of the state variables of the system, so you use lags of the observables. That’s one pillar. The second pillar of EDM is that we’re not going to try to write down equations for how that collection of variables turns into the future states of those variables. We’re instead going to try to infer that directly from what we’ve seen in the past.

[00:28:48] Stephen Munch: And so it’s the combination of using lags as surrogate variables, and using nonparametric, super-flexible, data-driven approaches to model how the past states of the system turn into the future. That’s the second part that’s really important. 

[00:29:05] Elizabeth: What got you interested in that in grad school?

[00:29:08] Tanya Rogers: I guess it was meeting Steve. I came to do an internship here at NOAA for a few months when I was in graduate school. I started working with Steve and discovered that he invented and created new methods for analyzing data, which I did not realize was a thing; I thought you just used existing methods. I thought that was really cool, and he has a really cool way of analyzing data.

[00:29:37] Tanya Rogers: He thinks about ecosystems and how species interact from a mathematical perspective that I think brings a lot of insight, and he made population dynamics interesting. I previously did a lot of community ecology work; I collected a lot of data myself, mostly counting things, and did experiments in the lab. This was just kind of a different approach that I thought was valuable.

[00:29:59] Tanya Rogers: And I think that’s part of why I got this job at NOAA: I can kind of merge the mathematical approaches and the field approaches.

[00:30:10] Elizabeth: So that’s like the central tier Steve was talking about. You have some people who are doing what my dad, an applied mathematician, would call recreational mathematics; you have the boots-on-the-ground people; and then you’ve got the sort of interfacers.

[00:30:27] Tanya Rogers: Yeah, that’s us. Steve is a very good interfacer. I definitely started as boots on the ground, and I still do some of that data collection work myself. I think that brings a valuable perspective in terms of understanding the complexity of ecosystems, where the data come from, and sources of error.

[00:30:44] Tanya Rogers: And even just the natural history of some of these systems and what would make sense in terms of modeling them. I try to bring that perspective to my job as a fisheries ecologist and someone who helps with forecasting, management, and stock assessments. And then in terms of research, I continue to collaborate with Steve on a bunch of different projects related to chaos, population dynamics, predictability, and ecology.

[00:31:08] Stephen Munch: And Tanya provides an irreplaceable anchor to reality. Like, I will go off on some cool thing in statistical mechanics and I’ll be like, oh, what do you think about this? And she’s like, why, Steve? How will we use that? What is that for?

[00:31:30] Stephen Munch: That sounding board for what the practical value of a thing is, how we are going to do it, whether it’s going to work on the data we have available, is just incredible. Plus, I think Tanya’s selling herself a bit short. She’s also incredibly talented as a scientist, is great at getting things done, and is a great writer.

[00:31:44] Elizabeth: You guys found each other more or less at random? Like, this wasn’t a purposeful pairing? 

[00:31:50] Stephen Munch: So actually, it was a guy named Brian Wells, who Tanya was actually here to work with. He said, oh, you know, you might get something out of talking to Steve, and so he introduced us. And then I found out that Tanya had data that was actually really great for applying a method that I’d cooked up.

[00:32:09] Stephen Munch: And after that, we really hit it off. 

[00:32:12] Tanya Rogers: Yes, that was my third dissertation chapter. And then Steve offered me a postdoc, and I came out and worked with him a bit. And then I got the job that I currently have at NOAA, in the same office, working for a different group, but we continue working together.

[00:32:29] Elizabeth: That's all I had. Thank you so much for your time. Have a great day guys.

[00:32:32] Stephen Munch: thanks. You too. 

Why I quit effective altruism, and why Timothy Telleen-Lawton is staying (for now)

~5 months ago I formally quit EA (formally here means “I made an announcement on Facebook”). My friend Timothy was very curious as to why; I felt my reasons applied to him as well. This disagreement eventually led to a podcast episode, where he and I try to convince each other to change sides on Effective Altruism: he tries to convince me to rejoin, and I try to convince him to quit.

Audio recording

Transcript

Some highlights:

Spoilers: Timothy agrees leaving EA was right for me, but he wants to invest more in fixing it.

Thanks to my Patreon patrons for supporting my part of this work. You can support future work by joining Patreon, or contribute directly (and tax-deductible-y) to this project on Manifund.

AI research assistants competition 2024Q3: Tie between Elicit and You.com

Summary

I make a large part of my living performing literature reviews to answer scientific questions. For years AI was unable to do anything to lower my research workload, but back in August I tried Perplexity, and it immediately provided value far beyond what I’d gotten from other tools. That wasn’t a fair comparison, because I hadn’t tried any other AI research assistant in months, which is decades in AI time. In this post I right that wrong by running two test questions through every major tool, plus a smaller tool recommended in the comments of the last post.

Spoilers: the result was a rough tie between You.com and Elicit. Each placed first on one task and was among top-3 in the other.

Tasks + Results

Tl;dr:

  • You.com had a small edge in searching for papers, followed by Elicit and Google Scholar. ChatGPT was absolute garbage.
  • Elicit, Perplexity, and You.com all surfaced the key piece of information when asked for analysis, with Elicit’s answer being the most concise. None of the other tools managed this.
  • You.com and Perplexity were tied for favorite UI, but I haven’t played with You.com very much.
  • You.com boasts a larger list of uses than Perplexity (which is narrowly focused on research), but I haven’t tried them out.

Finding papers on water gargling as an antiviral

I’m investigating gargling with water (salt or tap) as a potential antiviral. I asked each of the tools to find relevant papers for me.

ChatGPT was asked several versions of the question as I homed in on the right one to ask. Every other tool was asked “Please list 10 scientific papers examining gargling with water as a prophylactic for upper respiratory infections. Exclude nasal rinsing”. This is tricky because almost all studies on gargling salt water include nasal rinsing, and because saline is used as a control in many gargling studies.

Every tool returned exactly 10 results except Elicit and Google Scholar, which by design let you load papers indefinitely. In those cases I used the first 10 results.

| Tool | Real, relevant results | Probably hallucinations | Notes |
|---|---|---|---|
| Perplexity (initial) | ? | ? | The formatting was bad so I asked Perplexity to fix it |
| Perplexity (asked to reformat the above) | 4 | 2 | |
| ChatGPT 4o, asking for “papers” without specifying “scientific” | 0 | | Unusable |
| ChatGPT 4o, specifying “scientific papers” about gargling as a treatment | 2 | 8 | |
| ChatGPT 4o, specifying scientific papers about gargling as a prophylactic | 0 | | Unusable |
| ChatGPT o1 | 1 | 7 | Citation links went to completely unrelated papers |
| Claude 3.5 Sonnet | 2 | 2 | |
| Elicit | 3 | 1 | |
| You.com | 4 + 2 partial credits | 0 | |
| Google Scholar | 4 | 0 | Not AI |

You can see every response in full in this google doc.

I did not ask You.com for a picture but it gave me one anyway. It did not receive even partial credit for this.

Hepcidin

My serum iron levels went down after a series of respiratory illnesses, and on a lark I asked Perplexity if this could be related. Perplexity pointed me towards the hormone hepcidin and this paper, suggesting that respiratory illness could durably raise hepcidin and thus lower blood iron. Knowledge of hepcidin pointed me in the right direction to find a way to lower my hepcidin and thus raise my iron (this appears to be working, although I don’t want to count chickens before the second set of test results), so I was very impressed. This was one of two initial successes that made me fall in love with Perplexity.

I asked the other AI tools the same question. Elicit gave a crisp answer highlighting exactly the information I wanted and nothing else. Perplexity gave a long meandering answer but included hepcidin in its first bullet point. You.com gave an even longer answer in which hepcidin was included but hard to find. Everyone else gave long meandering answers that did not include hepcidin and so were worthless.

You can see the full results in the same google doc.

(Lack of) Conflict of interest

I received no compensation from any of the companies involved. I have social ties to the Elicit team and have occasionally focus grouped for them (unpaid). Months or possibly years ago I mentioned my desire to do a multitool comparison to an Elicit team member. At the time they offered me a free month to do the comparison, but their pricing structure has since made this unnecessary, so they’ll find out about this post when it comes out. I have Perplexity Pro via a promotion from Uber.

Conclusions

After seeing these results I plan on playing with You.com more. If the UI and expanded uses turn out like I hope I might be loyal to it for as many as three months before it’s been surpassed.

There are two major features I’m looking for before I could consider giving up reading papers myself (or sending them to my statistician): determining if a statistical tool was appropriate for the data, and if an experimental design was appropriate for the question. I didn’t even bother to formally test these this round, but it wouldn’t shock me if we got there soon.

Applications of Chaos: Saying No (with Hastings Greer)

Previously Alex Altair and I published a post on the applications of chaos theory, which found a few successes but mostly overhyped dead ends. Luckily the comments came through, providing me with an entirely different type of application: knowing you can’t, and explaining to your boss that you can’t.

Knowing you can’t

Calling a system chaotic rules out many solutions and tools, which can save you time and money in dead ends not traveled. I knew this, but also knew that you could never be 100% certain a physical system was chaotic, as opposed to misunderstood.

However, you can know the equations behind proposed solutions, and trust that reality is unlikely to be simpler[1] than the idealized math. This means that if the equations necessary for your proposed solution could be used to solve the 3-body problem, you don’t have a solution. 

[1] I’m hedging a little because sometimes reality’s complications make the math harder but the ultimate solution easier. E.g. friction makes movement harder to predict but gives you terminal velocity.

I had a great conversation with trebuchet and math enthusiast Hastings Greer about how this dynamic plays out with trebuchets.

Transcript

Note that this was recorded in Skype with standard headphones, so the recording leaves something to be desired. I think it’s worth it for the trebuchet software visuals starting at 07:00.

My favorite parts:

  • If a trebuchet requires you to solve the double pendulum problem (a classic example of a chaotic system) in order to aim, it is not a competition-winning trebuchet.  ETA 9/22: Hastings corrects this to “If simulating a trebuchet requires solving the double pendulum problem over many error-doublings, it is not a competition-winning trebuchet”
  • Trebuchet design was solved 15-20 years ago; it’s all implementation details now. This did not require modern levels of tech, just modern nerds with free time. 
  • The winning design was used by the Syrians during Arab Spring, which everyone involved feels ambivalent about. 
  • The national pumpkin throwing competition has been snuffed out by insurance issues, but local competitions remain. 
  • Learning about trebuchet modeling software. 
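Hastings’ “error-doublings” point can be made concrete in a few lines of code. The sketch below uses the logistic map as a stand-in for a chaotic system like the double pendulum (a standard textbook example; the parameter values here are my own illustration, not from the episode): two starting conditions that agree to ten decimal places become completely decorrelated within a few dozen steps, which is why aiming a design that depends on such a simulation is hopeless.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r*x*(1-x) at r = 4, a standard chaotic example.
# Errors roughly double each step, so a 1e-10 perturbation swamps
# the trajectory after a few dozen iterations.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # perturb the 10th decimal place

print(abs(a[5] - b[5]))    # still tiny: short-horizon prediction is fine
print(abs(a[50] - b[50]))  # many orders of magnitude larger: prediction has broken down
```

The same doubling argument is why knowing a system is chaotic rules tools out: any aiming scheme that needs the late part of such a trajectory is betting against exponential error growth.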

Explaining you can’t

One reason to doubt chaos theory’s usefulness is that we don’t need fancy theories to tell us something is impossible. Impossibility tends to make itself obvious.

But some people refuse to accept an impossibility, and some of those people are managers. Might those people accept “it’s impossible because of chaos theory” where they wouldn’t accept “it’s impossible because look at it”?

As a test of this hypothesis, I made a Twitter poll asking engineers-as-in-builds-things whether they had ever invoked chaos theory to explain a project’s impossibility, and if so, whether it had worked. The final results were:

  • 36 respondents who were engineers of the relevant type
    • This is probably an overestimate. One respondee replied later that he selected this option incorrectly, and I suspect that was a common mistake. I haven’t attempted to correct for it as the exact percentage is not a crux for me.
  • 6 engineers who’d used chaos theory to explain to their boss why something was impossible.
  • 5 engineers who’d tried this explanation and succeeded.
  • 1 engineer who tried this explanation and failed.

5/36 is by no means common, but it’s not zero either, and it seems like it usually works. My guess is that usage is concentrated in a few subfields, making chaos even more useful than it looks. My sample size isn’t high enough to trust the specific percentages, but as an existence proof I’m quite satisfied. 

Conclusion

Chaos provides value both by telling certain engineers where not to look for solutions to their problems, and by getting their bosses off their backs about it. That’s a significant value add, but short of what I was hoping for when I started looking into chaos.