Things I Say a Lot in Crisis Chat: You Are Worthy of Help Too

I talk to a lot of people in crisis chat who feel bad taking up my time, or are reluctant to seek treatment from a professional, or would pay for help but are reluctant to accept free help, because there are so many people out there with more serious problems.  How serious their problem is varies: sometimes it really is a mild problem, sometimes it is years of horrendous abuse that is still technically not the worst thing a human being has ever experienced in the history of time.

The most useful response I’ve found is: “We treat people with sprained ankles even though there are other people with broken bones.  Be honest about your situation and trust the doctor/therapist/charity to prioritize their resources appropriately.”  Nothing works all the time, but I can’t think of a time it didn’t at least help.

Anticholinergic agents and dementia

A new study came out this week suggesting use of a particular class of drug after age 65 was associated with dementia.  Here’s what you need to know.*

The study is retrospective, meaning it took people who developed the disease of interest and then looked backwards at their medications.  Retrospective studies are prone to a number of problems, the biggest one being that even young people with healthy memories are crap at giving you their drug history over the past 10 years, and this is a study of people with dementia.  The researchers dodged this by using an HMO database of the subjects’ complete medical histories, which is a neat trick.  The second problem is that retrospective studies can easily end up painting the bulls-eye after they’ve fired the arrow.  Mere chance dictates that if you track enough traits, any random subset of a population is likely to have something more in common with each other than with the rest of the population.  If you use the traditional bar of statistical significance (5% chance of results arising by chance), checking 20 traits gives you an expected value of 1 false positive.  To be fair, this study has a much higher significance level, and the effect was dose dependent, which is a very good sign that it’s legit.  The authors heavily imply they deliberately studied anticholinergics rather than shotgunning it, but without preregistration there’s no way to be sure.
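To make the arithmetic behind that expected false positive concrete, here’s a toy sketch in Python.  The only inputs are the numbers already mentioned above (a 5% significance bar and 20 traits); nothing here comes from the study itself:

```python
# Multiple-comparisons arithmetic: how many false positives should you
# expect when checking many traits at the traditional p < 0.05 bar?
# Assumes the 20 tests are independent, which is a simplification.
alpha = 0.05      # traditional significance threshold
n_traits = 20     # number of traits checked

# Linearity of expectation: expected false positives = alpha * n
expected_false_positives = alpha * n_traits    # = 1.0

# Chance of at least one false positive somewhere in the 20 tests
p_at_least_one = 1 - (1 - alpha) ** n_traits   # ≈ 0.64
```

So even a "clean" study that checks 20 things has roughly a coin-flip-or-worse chance of at least one spurious hit, which is why the dose dependence and the stricter significance level matter here.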

Anticholinergics come in two forms: antimuscarinics and antinicotinics.  Short version: these work on different types of neuroreceptors, which live in different parts of the body and do different things.  Every example drug they give is an antimuscarinic, and of the classes of drugs they list, many have no antinicotinic members.  Even if they technically included antinicotinics in the analysis, they would be such a small portion of the sample that their effect could be overwhelmed.  So I don’t think you can apply this study to drugs like bupropion, which is an antinicotinic.

I don’t like the way they calculated total exposure at all.  Essentially they counted the normally recommended dose of any medication as One Standardized Daily Dose.  But those dosages vary wildly (even the examples they give span an order of magnitude), as do the particular drugs’ ability to cross the blood-brain barrier.  The drugs are prescribed for a huge variety of causes, and what’s sufficient to stop incontinence has nothing to do with what’s sufficient to slow Parkinson’s.  This oversight may cancel out with the fact that they created buckets of dosages rather than do a proper linear regression, in the sense that a low-def picture cancels out bad skin.

The obvious question is “but maybe the same thing that drove people to need anticholinergics increases the likelihood of dementia?”  This study has a much better retort for that than most, which is that anticholinergics were prescribed for a variety of causes, and it’s unlikely they all correlate with dementia.  I find that explanation extremely satisfying, except that they only evaluated the drugs as a single unit.  Antidepressants make up over 60% of the total SDDs taken.  The next most common is antihistamines at 17%.  But since more than 60% of the population took at least one SDD, it seems likely that those were taken intermittently, as opposed to the constant drip of antidepressants.  This leaves open the possibility that the entirety of the effect they attributed to anticholinergics was in fact caused by tricyclic antidepressants alone- and that the real culprit was depression.  The obvious controls were to evaluate the anticholinergics separately, and to compare rates of dementia among TCA treated patients with those treated with other antidepressants.

The subtler version of this question is “what if anticholinergics prolong life, giving you more time to develop dementia?”  I don’t see anything where they checked for that either way.  They did ask for people’s perception of their own health, and that was negatively correlated with TSDD, but if TSDD is correlated with depression it’s hard to know how to interpret that.

For all those criticisms, this is an amazingly strong result for a medical study**.   No one study can prove anything (even if I think they had the data to do more than they did).  It definitely merits further investigation (ideally some with animal models, so we can do the causality experiments that would be super unethical in humans), and maybe even behavior change in the meantime, although a lot of the drugs studied are already obsolete or second line.  Plus it’s another piece of data that will help us figure out how to fight dementia, and that makes me really hopeful.

*Read: here’s what I learned.

**Yes, this should worry you.

Loratadine for Allergies?

The Decision Tree casually describes loratadine (brand name: Claritin) as barely better than placebo for treating allergies.  This is news to me because Claritin was absolutely critical to me graduating middle school.  If I forgot to take it in the morning my mom had to drop it off at school by lunch.  Without it I slept 16 hours a day,* woken only by hives that itched so intensely they burned.  This isn’t actually relevant to me now because my allergies were taken care of by unprocessed honey and moving, but I couldn’t believe something once so important was essentially a sugar pill. So I investigated.

First stop, Wikipedia, which definitely backed my claim that Claritin treated sneezing, runny nose, itchy or burning eyes, hives, and other skin allergies.  But of 19 citations, 5 were unavailable to me (either they were books or in languages I don’t read), 13 were on topics other than clinical efficacy (e.g. side effects or mechanism), and 1 had a sample size of 192 and was a comparison against another anti-histamine, with no placebo or no-treatment group.

So I checked google scholar, where I found numerous minuscule studies (n = 14, 7 treatment groups) in which loratadine was better than placebo but worse than other drugs in the same class.**  If that’s true, why did loratadine get so much more attention?  I looked up the other drugs, and it turns out that some of them (cetirizine/Zyrtec) had similar efficacy but came out later, and went over the counter later as well.  Others (Terfenadine/Seldane) had much uglier side effect profiles (e.g. cardiac arrhythmia if you eat a grapefruit).  So Claritin’s advantage seems to be being the first drug to market that treated the problem with minimal side effects.  I also wonder if Decision Tree‘s author (Thomas Goetz) was looking at a particular symptom set?  For example, loratadine appears to do well as a treatment for hives but there are better options for hay fever.

Some people suggest that having multiple drugs with similar response rates in the same class on the market is some sort of failure.  They are wrong and they should feel wrong.  First, these drugs were developed in parallel by different companies. While all the ones we heard of worked out, very few chemicals that pharma companies research become prescribable drugs, and they can’t predict which ones will do so ahead of time.  What if McNeil stopped researching Zyrtec because Bayer was researching Claritin, and Claritin made you grow arms out of your face?  We’d have lost years of allergy relief.  Second, the fact that they had similar average efficacy and side effects doesn’t mean they have the same effect in every person.  People are squishy and they don’t make sense, and differing reactions to drugs is one of the milder ways this manifests.

*No, fatigue is not a normal symptom of allergies, but I got it most springs and it went away with anti-histamines, which is good enough for a field diagnosis of allergies.

**I also found a lot of studies detailing the effects of loratadine in conjunction with another drug, mostly montelukast, and abstracts that reported loratadine’s efficacy relative to older antihistamines but without absolute numbers.

Crisis Chat Observations: “You’re Very Aware”

One of the frustrating things about depression (and other mental illnesses, but I spend the most time talking to depressives) is that… well, actually there’s a lot of frustrating things.  One is that finding good medical professionals is hard, finding good mental health professionals is harder because personality fit is more important, and depression takes out exactly the systems you would use to seek and evaluate treatment.   Even if you have no other obstacles (financial, social, transportation…), it is still really hard to find a medic taking new patients, make an appointment, keep an appointment, and follow up on what the provider tells you to do.

But the frustrating thing about depression I was thinking of when I started this post is that even when you do all of those things, treatment can take a while to work.  Typical protocol is to give an anti-depressant six weeks to work, and the first one may do nothing, or have intolerable side effects.  The STAR*D protocol study, which tested an algorithm for finding anti-depressants that worked for individual patients, found a 70% success rate over four months- excluding the 42% who dropped out.  Therapy can take years, and there’s often a painful period before it starts to help.*  Some people I talk to at crisis chat need help getting into treatment.  Others are doing everything I could possibly recommend to them- psychiatrist, therapy, social support, a list of self-care activities of which crisis chat is neither the first nor the last on the list- and are still miserable.
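For scale, here is what those two STAR*D numbers look like combined.  This is my own back-of-envelope arithmetic, not a figure the study reports, and it makes the pessimistic assumption that everyone who dropped out did not succeed:

```python
# Back-of-envelope only: combining the two STAR*D figures quoted above.
# Assumption (mine, not the study's): dropouts are counted as non-successes.
dropout_rate = 0.42              # fraction who dropped out
success_among_completers = 0.70  # success rate among those who stayed

# Success rate counting everyone who started treatment
success_counting_dropouts = (1 - dropout_rate) * success_among_completers
# ≈ 0.41, i.e. about 41% of people who started
```

Which is a long way from 70%, and part of why “treatment works, eventually, for many people” can feel so hollow from inside month two.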

At least for the teenagers**, the most helpful thing I have found to say in this situation is the truth: you are doing everything right, and it is deeply unfair that it takes so much time to bear fruit.  Crisis chat is deliberately not an affirmation on demand service because generic cheerleading is emotionally draining for volunteers and even if they specifically request it, visitors tend to reject it as insincere- but if I see something someone is doing that will be long term helpful to them, or that they are especially good at, I will tell them.  I don’t give the same ones every time and I don’t make up things to make people feel better, I only say something if I see a genuine skill. This isn’t cheerleading or attempting to logic them out of depression so much as it is giving an objective, informed eye to people who know their brain is an unreliable reporter but don’t know what specifically it is lying about.

I thought of this while reading Brute Reason’s Case for Strength Based Diagnosis.  Mental health treatment right now is all about the things you are bad at.  The strongest counter force is the pop culture romanticization of depression and bipolar disorder, which is not helping. But I could see it as very helpful to hear “the same gene that contributes to your depression also contributes to your high intelligence, and you can use that intelligence to fight depression”.  This seems like another problem caused by trying to use the same system for “accurately describe patient state to patient”, “accurately describe patient state to another practitioner” and “tell insurance why they should give you money.”

*When I’m detailing treatment options to crisis chat visitors I often make a point of mentioning CBT as something that isn’t a drug, works fairly quickly and doesn’t involve dwelling on pain.

**Adults tend to be more pessimistic, and I have no way of knowing if it’s because they’ve actually been depressed for 20 years straight or because their liar brain is telling them so.

Screen Bedtime Follow Up

A month ago I decided to start turning off all my screens (TV, computer, phone) at midnight.  It was a smashing success.  Within a week I’d moved to 11 PM, and I’m toying with 10.  Some of it is undoubtedly the red/yellow light effect and cutting down on stimulation before bed.  Coincidentally finding books that were interesting enough to read but not upsetting enough to disrupt bed time was also helpful.  But the single biggest effect I’m consciously aware of is giving me something to succeed at late at night.

Setting a real bed time never worked for me because if I wasn’t asleep by then I was failing.  Failing is no fun, and a sense of it inhibits sleep.  But not using screens is an action I choose.  And then I am succeeding at my goals, which is an excellent feeling to get to sleep with.  Plus apparently what I do when I can’t use screens but am not tired enough to get into bed is clean, and I am slowly undoing the damage done by six months of post-surgical fatigue.  Now when I wake up my apartment is slightly cleaner than I remember it.  This is an excellent way to wake up.

Downer Superbowl Post

The Seahawks made the Superbowl again, which means half the windows in town have a green and blue 12 in them.*  I have two instinctive reactions to this: reflexive nerd disgust feeding an urge to signal how little I care about sportsbowl, and happiness that people that live near me are winning a thing.  The reflexive nerd disgust is not coming from a healthy place and happiness is nice, so I start to go with that.  And then I remember the actual problem with football.

Football is hell on the body and the brain.  Watching people hurt themselves for our entertainment isn’t great, but they are at least adults that made the choice that this was worth the compensation.  What really bothers me is the tens of thousands of children (many poor or otherwise underprivileged) who are destroying themselves in the hope of financial reward.  Even very young children can damage each other.  I think anything that encourages this is immoral.  At the same time, I know people who enjoy football and I’m extremely uncomfortable calling them immoral.  They’re not watching with an eye towards torturing 8 year olds, they just don’t know.  And my feelings on the matter are so strong there’s no polite way for me to tell them.

2000 years ago people fed slaves to lions for entertainment.  100 years ago they watched men punch each other in the face.  Football needs to go the same way.

*It also means green and blue cupcakes in the crisis chat break room.

Review: The Decision Tree (by Thomas Goetz)

My trail of discovery to The Decision Tree was as follows:

  • Discover Iodine’s in-browser medical translator, become fan for life.
  • Watch Iodine CEO’s (Thomas Goetz) TED talk on the problems with how medical information is currently presented, and his solution.  Become very impressed.
  • Discover Goetz has a whole book on this stuff.  Order from library.

This may not have been the best order to do it in.  Decision Tree is really, really good, but it lacks the specificity of the TED talk or Iodine’s recent work.  If I’d read it first, the other work (which was produced later) would have been fulfilling the promise of the book.  But reading it last, I kept waiting for the other shoe to drop.  It is an amazing launching pad, but I went in expecting to see what had landed.

That may not even be fair.  Goetz points to a lot of specific things, like the Quantified Self movement, PatientsLikeMe.com, and actual research on how Dr. Internet affects people.  It’s just that none of these so singularly improve the signal to noise ratio the way Goetz’s work on presentation of test metrics did.  I guess what I’m saying is you should watch that TED talk.

Now that I’m over the fact that Decision Tree is not a 250 page TED talk, I can appreciate it for what it is, which is a reasonable 101 text on the concept of individuals monitoring and improving their own health.  It doesn’t give many specifics for either of those because the answers are so individual and so personal, but it does leave the reader better prepared to evaluate possible solutions they find.   That’s actually pretty hard to do, and really useful.  I could also see it as useful for medical professionals who are on the fence about patient-driven care.*  It is extremely helpful in explaining why over-testing is so dangerous, while respecting individuals’ right to data.  And if you’re not reading it during or immediately after a painful, stressful medical procedure, it’s actually a pretty light read. So if this book looks interesting to you I’d recommend it.

*Goetz is unreservedly pro- patient led decision making and research.  I am too, until I remember a lot of the anti-vaxxers have put an enormous amount of research into their idiotic, dangerous, anti-social position.  I don’t know how to preserve the rights of me + my friends to know our own data and correct our doctors’ mistakes while preserving the rights of children to not die of entirely preventable diseases.

Animal Rights Deep Dive Pre-Check

I haven’t written a ton about animal rights/animal suffering because any position I have is guaranteed to get me yelled at by two sides, possibly more.  I will only write things like that when I am absolutely certain of my grasp of the facts and the rigor of my thought process.  That does not describe me and animal rights at all.  My opinions on balancing animal rights with human needs/desires can best be described as “intuitions attempting to balance several different gut feelings.”  But that is hopefully about to change.  John Salvatier, some other people, and I are going to dig in to Animal Charity Evaluators’ research on the best way to alleviate animal suffering.  This doesn’t actually require me to investigate my beliefs about the health impact of eating animal products, but I probably will anyway.   In the spirit of science and accountability, I’m going to share my starting beliefs (like I did with HAES), so you can see if research changed them.

A note on comments: this is a pretty scary thing to write, because I’ve seen so many personal attacks in animal rights threads in many different Effective Altruism forums.  If you have a pointer to information I would benefit from, please send it along, I would really appreciate it.  If you think my beliefs are immoral, please hold off commenting until the Post-Check, which will contain only opinions I am willing to defend.  If you believe that there are no trade offs or your trade off is the only moral trade off, please go share this opinion with people who agree with you.

Okay, that said, here is my existing knowledge: I watched Earthlings and Farm to Fridge with my EA group.  John has already read a few studies and passed links and comments on to me, and I skimmed some of the studies he linked to while I was tired.  I have read a few EA facebook threads on animal rights that had minimal informational content, relative to the emotional vitriol.  Without further ado, here are my current opinions:

Animal death for the purpose of food is okay, animal suffering is not.

Everyone dies eventually.  A good life and a clean death is more than animals get in the wild.  Ecosystems without predators are very unhealthy for the remaining prey animals.  So while unnecessary suffering bothers me a great deal, death seems not to.  This is pretty close to my attitude with humans; I’m frequently angry at how the medical system focuses on postponing death rather than improving health/quality of life.

Modern factory farming produces unacceptable levels of suffering

Even if everything I saw in Farm To Fridge was an outlier, the implied bell curve is unacceptable.

Animal death or suffering for the purpose of clothing is not okay

I didn’t so much reason this out as found myself in a shoe store trying to talk myself into leather being okay, and realized it would be much easier to just not buy leather.  I am not entirely convinced I will stick to this if I find something amazing that can only be had in leather, but I am definitely willing to put a great deal of energy into finding vegan alternatives.  This leads me to believe…

My position that animal death for the purpose of food is morally okay is dependent on my belief that eating animals is essential to human health

This is a weird position for me because I didn’t eat meat until I was 28, because I couldn’t digest it, which 4 year old me translated to “it’s gross”.  I was the least bothered of anyone when we watched Earthlings and Farm to Fridge, and I believe that’s in part because everyone else was learning something horrible about something they enjoyed.  My thought process was more along the lines of “Of course meat is disgusting, but you have to grit through it for your health.  Gastric acid pills will solve a lot of this problem.”  My forebrain knows HCl does not actually have anything to do with pigs eating necrotic flesh off of other pigs, but the hindbrain worked so hard to overcome its visceral disgust that the new reason to find meat disgusting just bounced off.

I’m not claiming people will literally die without meat.  I do think that the healthiest diets involve small amounts of meat, and any deviation from that platonic diet is a blow to your health.  If you are otherwise healthy and health has thresholds, that blow may not make a perceptible difference in your life.  If you are me, it does.  To the extent healthy vegan diets are possible, they will generally be some combination of less delicious, more expensive, or more work than the omnivore alternative.

This doesn’t mean meat is some sort of magic salve.  My gut feeling is that even a really bad vegan diet is probably better for you than a really bad American-style meat-based diet, although this will depend somewhat on genetics.

Not all meats have equal moral density

I have almost-but-not-quite given up pig (which was the first meat I was able to stomach, because bacon) because pigs are smarter and I think that makes them more capable of suffering.  Meanwhile crickets barely rank above plants (and may end up being more humane, depending on how many bugs and rodents die to produce those plants).  All this is strictly from a suffering perspective: if you want to consider environmental impact things get even more complicated.

I prefer Mercy for Animals’s approach (lessening the amount of suffering in meat production) to The Humane League’s approach (convincing people to go veg*n)

Some of this is because I was coming at it from the framing of meat-offsets (donating to a charity to balance out meat consumption).  Originally I framed it as “paying someone not to do something you just did is stupid”, like I do with carbon offsets.  It also galls me that what you’re paying for is not making it easier for someone to go veg*n, via cooking classes or covering the difference in cost, you’re paying to convince them that veg*nism is a good idea.  Being inspired to convince people to do something by doing the exact opposite feels incredibly broken and toxic to me, but I could never articulate it more than that.

As I’m writing this I see that this is actually tied in with my justification that meat (or at least animal products) are necessary for health.  “This is necessary for my health so I’ll pay someone else to sabotage their health” is sick and immoral.  “This is necessary for my health but I’m going to work to make others suffer for it as little as possible” seems much more reasonable.

I do think that convincing people to eat much less animal protein is a good idea, and I’d support efforts to change norms around meat and lessen the cost/effort/taste differential between vegan and meat meals.

Also leafletting is dumb

Seriously, I just don’t see it helping.  They say leafletting but according to John they actually mean canvassing with leaflets.  My understanding from PIRG is that the vast majority of money raised by canvassers goes to paying the salaries of the canvassers.*  Humane League isn’t trying to raise money, but “convincing people to do a lot of work to avoid something they see as a staple” seems like a strictly harder pitch than “give me $10 and I will go away.”

But if it’s going to work anywhere, it will be at colleges

College students are much more open to new ideas, and cafeterias lessen or even eliminate the work to avoid meat.

But I don’t think we’ll ever know the absolute effectiveness because it’s really hard to measure

Unless they’re actually following people (without telling them) and charting what they eat, how could they possibly know?  And spying on people is expensive and possibly illegal.

Wait, I just thought of a way to measure it.  College students (especially freshmen, who are often segregated from other students) eat at college cafeterias.  You could measure total consumption of meat vs. vegan items and see if it changes after leafletting.

*Whether or not particular canvassers are paid or are volunteers is mostly irrelevant, because their time still has value.

The Kitten Pain Scale

I very briefly flirted with Quantified Self and then jumped off the bandwagon because it was making my personal signal:noise ratio worse.  But my neuroendodontist* has given me several drugs, and he wants to know how they work.  Allow me to give you a brief list of things that make measuring this difficult:

  • Treatments are all on varying schedules- some daily, some daily with a build up in blood stream leading to cumulative effects, some as needed to treat acute pain, some on my own schedule but hopefully having longer running effects.  Some are topical and some are systemic.
  • I have several home treatments like tea and castor oil.  I’m not going to not take them in order to get more accurate assessments of the drugs, both because ow and because pain begets pain.
  • Taking treatments as needed + regression to the mean = overestimate of efficacy.
  • Pain is affected by a lot of non drug things: sleep, stress, temperature, how ambitious I got with food, amount of talking, number of times cat stepped on my face in the night, etc.
  • We are hoping some of these drugs will work by disrupting negative feedback loops (e.g. pain -> muscle tension -> pain), which means the effect could last days past when I take it.  In the particular case of doxepin it might have semi-permanent effects.
  • Or I could develop a tolerance to a drug and my response to a particular drug will attenuate.  That is in fact one reason I was given so many choices as to medication: to let me rotate them.
  • We have no idea how these drugs will interact with each other in me.  We barely have an idea how they interact in people in general.
  • If I believe something will help my pain will lessen as soon as I take it, long before it could actually be effective.  Not because I’m irrational, but because my brain reinforces the self-care with endorphins, which lessen pain.
  • At the same time, having more pain than I expected to feels worse than the exact same pain level if it was anticipated.
  • Side effects: also a thing.

“I think I feel better when I take this one” was not going to cut it.

Then there was the question of how to measure pain.  Ignoring the inherent subjectivity of pain, neuralgia is a weird beast.  I already hate the 1-10 pain scale because pain has threshold effects and is exponential.  I could create a single pain number at the end of the day, but my pain is not constant: it spikes and recedes, sometimes for reasons, sometimes not.  What I would ideally like to track is area under the curve of pain**, but that requires polling, which would create horrible observer effects.  If I ask myself if I’m in pain every 15 minutes, I will increase my total pain level.  I could poll less often, but the spikes are random and short enough that this was not going to be accurate enough to evaluate the treatments.  I could count pain spikes, but that ignores duration.  Determining duration requires polling, so we’re back where we started.  I could deliberately poke a sore spot and see how bad the resulting pain is, but

  1. Ow
  2. A treatment that doesn’t affect sensitivity but does keep me from spontaneously feeling pain because the nerve is bored is a success.  If we wanted me to be numb we would do that.

It’s just really hard to measure something when your goal is for it to be unnoticeable, and measuring it creates it.
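For the curious, “area under the curve of pain” is just numerical integration over pain samples.  A toy sketch (every number here is invented for illustration) shows both the calculation and why sparse polling misses short spikes:

```python
# Toy illustration: trapezoidal area under a pain curve, and how
# sparse sampling misses short spikes.  All numbers are invented.

def pain_auc(times_hours, pain_levels):
    """Trapezoidal area under the pain curve, in pain-hours."""
    area = 0.0
    for i in range(1, len(times_hours)):
        dt = times_hours[i] - times_hours[i - 1]
        area += dt * (pain_levels[i] + pain_levels[i - 1]) / 2
    return area

# A baseline pain of 1 with a brief spike to 8 around hour 6.
times = [0, 3, 5.9, 6.0, 6.1, 9, 12]
pain  = [1, 1, 1,   8,   1,   1, 1]
full_day = pain_auc(times, pain)      # ≈ 12.7: spike adds ~0.7 pain-hours

# Polling only every few hours happens to miss the spike entirely.
sparse_times = [0, 3, 6.5, 9, 12]
sparse_pain  = [1, 1, 1,   1, 1]
sparse_day = pain_auc(sparse_times, sparse_pain)  # flat 12 pain-hours
```

Which is the polling dilemma in miniature: sample often enough to catch the spikes and you create pain; sample rarely and the spikes vanish from the record.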

So I came at it from the other side.  What happens when pain is unnoticeable?  I enjoy life more and I get more things done.  Could I measure that?  Probably.  They have the bonus of being what I actually care about- if something left me technically in pain but it no longer affected my ability to enjoy or accomplish things, that would be a huge success.  If something took away the pain but left me miserable or asleep, it is not solving my actual problem.**

So one metric is “how much I get done in a day”.  Initially this will be the first number between 1 and 10 that I think of when I ask the question at the end of the day, but I’m hoping to develop a more rigorous metric later.  You’d think enjoyment of life couldn’t ever be rigorously measured, since it’s so heavily influenced by what is available to me in a given day, but I say that brave men can make it so.  And so I introduce to you: the kitten pain scale.  Kitten videos vary a little in quality, but I think my enjoyment of any single video reflects my internal state more than it does the video. Three times a day (shortly after waking up, shortly before screen bed time, and sometime mid-day that can vary with my schedule but must be selected ahead of time to avoid biasing the data), I will watch a cute kitten video and record how much I enjoy it.  The less pain I am in the more I should enjoy the video.  This will give me a (relatively) standardized measure of pain without risking inducing it.
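For anyone who wants to replicate the bookkeeping, it’s trivial.  A hypothetical sketch follows (the slot names and the 1-10 enjoyment score are my own stand-ins, not any standardized instrument):

```python
# Hypothetical kitten-pain-scale log: three fixed observation slots per
# day, each recording enjoyment of a kitten video on a 1-10 scale.
# Lower average enjoyment is read as a higher-pain day.
from statistics import mean

log = {}  # date string -> {slot name: enjoyment score}

def record(date, slot, enjoyment):
    """Record one observation; slots are fixed ahead of time."""
    assert slot in ("wake", "midday", "pre-bed")
    assert 1 <= enjoyment <= 10
    log.setdefault(date, {})[slot] = enjoyment

def daily_score(date):
    """Average enjoyment for the day; None until all three slots are in."""
    slots = log.get(date, {})
    if len(slots) < 3:
        return None
    return mean(slots.values())

record("2015-02-01", "wake", 4)
record("2015-02-01", "midday", 6)
record("2015-02-01", "pre-bed", 5)
# daily_score("2015-02-01") averages the three observations to 5
```

The fixed slots are doing the real work here: deciding the observation times ahead of schedule is what keeps the measurement from becoming its own polling problem.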

This is still not what you would call a rigorous study.  An individual choosing what to take among known options never will be.  But I seriously think the kitten pain scale could be a contender to replace the stupid frowny faces.  My first draft is available here.  Right now it’s set to measure over the course of a day, because that’s the scale I expect from these meds, but you can add bonus measurements at set times after taking meds if you like.

Possible additions: cups of tea drunk in day.  Right now that seems like too much work to measure, but when tea is available it’s a pretty good indicator of how much pain I’m in.

*I am still angry that I know what that is, much less refer to one in the possessive.  But given that, I am extremely grateful I live within biking distance of a world class research facility in the discipline.  Even if the physical facility could be a case study in how economic insulation leads to bad user experience.

**This is why none of my treatment options are opioids.  Strong ones technically reduce pain, but they also leave me miserable.  The fact that some people take them for fun is all the proof of human variability I could ever need.