Reading The Remedy, or really anything about the time after formalized western medicine but before the germ theory of disease, is an exercise in terror or frustration. How could anyone think attending a childbirth with autopsy gunk on your hands was a good idea? Or leeches. Who looked at those and said “I’ll bet those will make people healthier”?
My first reaction reading The Colony, about a Hawaiian leper colony founded shortly after the germ theory became entrenched, was “oh no doctors, you overapplied the lesson.” Leprosy has an epidemiology a lot like tuberculosis: long periods between infection and symptoms, and an ease of spreading that means everyone is constantly exposed to it. This makes it look like an inborn condition, not a contagion. Leprosy and TB are actually pretty closely related too. I assumed that doctors looked at their failure with TB and overcorrected. Quarantine didn’t work because only a small fraction of people are susceptible, and (it’s implied although never stated outright) they will be exposed to it whether symptomatic patients are quarantined or not.
Then I remembered that shunning lepers* predates germ theory by a couple of thousand years. Ancient and medieval people were completely capable of identifying disease as contagious and instituting a separation. So why didn’t industrial-age doctors?
Then I remembered that while the peasantry considered it obvious that disease was contagious and should be shunned, they considered it equally obvious that leprosy was punishment from God for sin and the black plague could be avoided by killing Satan’s minions, the cats. Nobody talks about all the things everyone knew that doctors correctly disbelieved in.
Without a lot of proof, I strongly suspect that doctors signaled intellectual rigor and membership in the medical class by disbelieving things the peasantry believed. Believing things the peasantry also believes doesn’t signal either of those things, even if the belief is correct. No one gets credit for believing that eating food is good and eating belladonna is bad. If you’re not very careful in that environment, it’s easy for peasants’ belief in something to become evidence against it.
This is similar to the process of the toxoplasma of rage, in which people signal membership in an ingroup by loudly believing its most dubious claims. I also strongly suspect it’s what’s going on with dietary constraints and toxins. It is obviously true that what you eat matters, that some things you put in your body will damage your cells, that getting rid of them is good, and that there are things you can take to get rid of them. It’s called heavy metal poisoning and chelation. Or if you’re Huey the dopamine dog, chocolate and activated charcoal. But dietary constraints and belief that specific things were bad for you got associated with special snowflakeness, so you can signal intellectual rigor by dismissing them. This despite the fact that nutrition obviously makes a difference in your health, and that humans vary across many dimensions, so there’s no reason to assume they wouldn’t vary in digestion and nutritional needs. Likewise, things we put in our mouths obviously have the capacity to hurt us, and there’s no reason to assume we have an exhaustive list of those, or that the list is identical across all humans.
In D&D terms: people are advertising their will save bonus by how credible an idea they can disbelieve. No one wants to be this guy:
[Thor rushes Loki, only to run through the illusion and trap himself in the cage]
Disbelieving everything is an easy way to be right the vast majority of the time. For every correct idea there’s an almost infinite number of wrong ones, and even the ideas that are true are incomplete (see: physics, Newtonian). But if everyone disbelieves everything, we will never discover anything new.
I’m not in a position to criticize anyone for being frustrated at people for being wrong. I lived that life for a long time. But I try to counter it now by remembering that humans aren’t really capable of distinguishing “laughably wrong” from “correct, and world changing” without investing a lot of energy. If there aren’t negative externalities and they’re not asking anything from me, their investment in their crackpot idea is something like an insurance policy for me, or a lottery ticket. Most won’t pay off, but when they do I’ll be glad they were there.
“Minimal negative externalities” and “at no cost to me” are important caveats. Children need vaccinations, and I don’t want the government paying for medicinal prayer. But if a functional, taxpaying citizen wants to spend their own money to get their chakras realigned every six months? Yelling at them seems like a waste of energy. Hell, they may have a genetic variation that enhances the placebo effect to the point it is medically significant. The human brain is weird and we don’t even know what all the pieces are, much less how they work. If someone investigates something, that’s a positive for me, even if all they do is conclusively prove it doesn’t work.
You can believe people are wrong; you don’t have to accept all ideas as equally valid. But what I would suggest, and what I’m attempting to do myself, is to make the amount of energy you put into your disbelief proportional to the harm the idea causes, not its wrongness. To let wrong ideas drop out of sight, resurfacing only if they cause problems or turn out to be a winning lottery ticket. I think that on net this leads to a better world, and in the meantime I’m calmer and less annoyed.
*Which really means shunning anyone with skin discoloration, ancient people not being entirely up on their bacteriology.