Intro to EA/Giving What I Should

Update 11/19/14: I had the format of the pledge wrong.  Read Jonathan’s correction here, more comments on the bottom of this post.

People often ask me what EA is.  I tried describing it as “trying to make charity as effective as possible”, but that kind of implies that everyone not in EA is not doing that.  Like evidence based medicine, it’s either obviously correct or horribly mislabeled.  I can say “we believe in randomized controlled trials”, but a lot of what I do in the local group is push for everything except RCTs.  And my favorite part of GiveWell is not their research into existing charities, although that is excellent for specific problems, but their deliberate seed funding of projects to find the best way to approach unsolved problems.  That they picked something I’m passionate about (criminal justice) is a bonus, but the principle would stand either way.  So I think that I will describe EA, or at least my interest in EA, as “generating and advertising the evidence for evidence based charity.”

Recently my EA group talked to Jonathan Courtney from Giving What We Can.  Giving What We Can has two functions: assessing charities, and taking and monitoring pledges individuals make to give 10% of their income.  On charity assessment, they’re basically Pepsi to GiveWell’s Coke.  They tend to agree with each other’s research but make slightly different recommendations based on differences in their beliefs about the future.*  GWWC also encourages people to register a pledge to donate 10% of their lifetime pre-tax income to what they (the pledger) believe to be the most effective charities for helping developing countries.  The pledge is not legally binding, and deliberately refers to lifetime income and not income in a given year (so you can consumption smooth), but they do ask people to log their giving, and perform audits of pledgers at the end of the year.

My EA group had a really great discussion about this, and my tentative opinion is:  it’s hard to fault them for what they’re doing, but I sure hope they’re an incremental step. GWWC’s main selling point, simplicity, is also an enormous limitation.

GWWC’s main goal is to head off decision paralysis by giving you a simple number.  A subset of this is helping people who feel equally guilty/anxious about retaining 2% or 45% of their earnings (because keeping even 2% is still better than living in the Democratic Republic of the Congo), but who really don’t want to live on 2% of their income and so default to giving nothing.  Solving that problem is not insubstantial, and I give them credit for it.

The downside is that 10% is unlikely to be the best number for everyone.  If you’re childless, in perfect health, and earn $5 million a year for 40 years and have no extenuating circumstances, I think you should give more than 10%.  If you take a 50% pay cut to work for a good cause**, I think you get to count all of it.  How does volunteering count?  How is that changed by whether it’s Effective Volunteering or Personal Satisfaction Volunteering?  What if you’re receiving a ton of charitable and government aid for your disabled child?

On the other side of it, I worry about the emphasis on money.  Lots of things require mass action that can’t be bought- like the Ferguson protests, or lobbying for net neutrality.  Western society has a personal connection deficit, and one of my big concerns with EA as a whole is that it commodifies altruism and in doing so worsens the connection deficit.

Lastly, there is fear.  I have been out of work for five months due to dental work, and it could easily be another two months before I can start even part time work.  I was originally told my (astonishing) disability insurance (that I’m incredibly lucky to have) would cover at most a week of that time, because “seriously, no one gets that much time for that small a problem”.  I eventually prevailed***- last week.  That’s 4.5 months without a paycheck, plus the immense cost of the dental and medical care I’ve received.  If I hadn’t had the money to wait that out- and to know I’d survive even if I was never paid- I would have had to handle it much differently, and I honestly don’t know how.  Beg from my parents (an option very few people have)?  Drug myself up to the gills so I could show up at the office, at the cost of, at best, a much longer recovery, and at worst never truly getting better?  Debt?  Forgo the physical therapy and IV nutrition, at the cost of, at best, a much longer recovery, and at worst never truly getting better?  Even if I never actually had to do these things, just worrying about them would have been a huge tax on me when I had very little to spare.  At a gut level, I see this pledge as a threat to the sense of safety my savings gave me.

Proponents frequently counter with “It’s not legally binding, you can always withdraw.”  But I don’t want to take a pledge on the condition I don’t have to uphold it.  That seems wrong.

What I find a lot more appealing is a private consumption tax.  For every dollar I spend on things, or things excluding certain expenses, or all things after a certain amount of money, I have to donate.  This fits really well with how I donate now, which is often based on a need to restore balance.  I use the library a lot, so I give them some money.  When I got my shiny new job, I found a family on Modest Needs that needed money to move to a better job.  When I got expensive designer antibiotics for SIBO, for which even a diagnosis is a sign of privilege, I donated to a food bank.  After a lot of dental care I donated to families needing dental care on Modest Needs****.  When I’m feeling especially privileged about how my parents supported my education I donate to Treehouse, which is dedicated to giving foster kids the same support I had.  And when I just generally feel rich or need to use up my remaining employer match, I give to GiveDirectly*****.  These sound a lot like indulgences, but indulgences buy off the guilt from things you shouldn’t have done.  I don’t think anyone thinks I shouldn’t have access to the medical care or library books I do, the problem is that other people don’t have them.

These aren’t exactly consumption taxes.  Often what I give is based on what I didn’t have to pay because I have amazing insurance.  Actually, that feels really fair to me.  There’s an overwhelming amount of evidence that being well off is actually cheaper than being poor, in part for exactly the reasons I listed in the fear paragraph.  If my savings (which I was able to accrue due to an incredible amount of privilege) saved me a bunch of credit card debt, paying half of the hypothetical interest on that debt seems pretty fair, and avoids the feeling of “I’m being punished for being successful.”  I’m not being punished, I’m just not getting to keep all the gains from something that was partially given to me out of luck.

Okay, so some sort of sharing of the benefits of privilege (for when I get things everyone deserves, but many people are denied), generally going to share that specific privilege with others, plus a consumption tax, because living in America is a privilege in ways I will never fully consciously comprehend.  Either a low general consumption tax, or a higher tax on luxuries.  This seems right.  I will need to figure out exact numbers and how I will calculate spending, but that is a practical problem.
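Working out those exact numbers is indeed just a practical problem.  As a toy illustration only (every rate and dollar figure below is a placeholder I made up, not a recommendation or the author’s eventual numbers), the two-tier version of the scheme could be computed like this:

```python
def annual_giving(ordinary_spending, luxury_spending,
                  base_rate=0.02, luxury_rate=0.10):
    """Private 'consumption tax' on personal spending.

    A small percentage of all ordinary spending, plus a higher
    percentage of luxury spending.  Both rates are hypothetical
    placeholders for illustration.
    """
    return base_rate * ordinary_spending + luxury_rate * luxury_spending

# e.g. $30,000 of ordinary spending and $5,000 of luxuries
print(annual_giving(30_000, 5_000))  # roughly $1,100 donated
```

The privilege-sharing piece (half the hypothetical interest on avoided debt, and so on) would just be additional terms added onto the same total.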

*E.g. GiveWell no longer recommends giving to the Against Malaria Foundation because they already have a large stockpile they’re unable to move without lowering their ethical standards, GWWC recommends them because they believe a larger stockpile will serve as an incentive to make partners meet their ethical standards.  GiveWell doesn’t even advise against the AMF, they just believe there are three charities that are better.  Both sides sound plausible, and there’s no way to know who’s right without a control universe.

**And you’re doing it because you believe it’s the best way to help the world, not because it’s a better work environment.  There are EA charities devoted to this question.

***Despite a dentist so incompetent at paperwork I was beginning to suspect malice.

****Although I haven’t for this round, possibly because none of the previous care actually helped

*****GiveDirectly ends up getting by far the most financial support but the least thought.

Update 11/19/14: it turns out the pledge is 10% every year, the year you earn it, not accumulated over time.  In defiance of all rationality, this makes me feel less anxious about it.  I need to give this more thought and then it probably gets its own full entry.

Rededication

When I started this blog, it was intended essentially as prep for a career as a psychiatric NP.  But let’s go further back.  When I was 12, a search for air conditioning at the national zoo led me to dedicate myself to becoming a behavioral biologist.  I worked on the academia-research track for 10 years, until I realized academia was terrible (and also I didn’t get into grad school).  I’d picked up a CS degree to facilitate the biology, and for lack of something better to do I jumped onto the programming track.  I found parts of it I loved and succeeded at, but was never happy long term.*  After another company failed to make me happy, I decided the problem was me and started looking for a new track to jump to, which is how I got to psychiatry.

It’s only 10 months since I started the blog but almost exactly a year since I made that decision.  I spent five of those months not working, recovering from/prepping for dental surgery (and there are a few more months to come).  This was painful and I would have rather never had these problems, but the enforced break did give me some distance and some time to think.  Combined with my volunteer work and reading, this is what I’ve figured out:

  1. I really, really want to improve adolescent mental health.
  2. Psychiatry, like any other mental health job, or any job at all, has its downsides.
  3. Programming is a rare and valuable skill it would be silly to just throw away.  Plus it really is fun when it works.
  4. I have gotten used to finding programming jobs by throwing a rock and waiting.  There’s a lot of them and I interview well.  But there is no plug and play position that uses my skills to accomplish the things I want to accomplish.
  5. So I will need to make my own.
  6. Beyond mental health, my goals are helping people take care of themselves.  I don’t want to detail a particular vitamin, I want to teach people how to research their own vitamins.  I definitely can’t do individual doctor recommendations, but I can help people evaluate their own doctors.

In parallel to this, I joined the local Effective Altruism discussion group back in April, and within six months rose to power/got conned into doing 1/3 of the organizational work.  I don’t know where I’m going with EA, which as a philosophy applies to anything but as a movement seems to have almost no overlap with the goals I’ve listed above.  EA’s big pushes have been in third world poverty (which I care about, but the only useful thing I have to give them is money), animal suffering (which our meeting made me care about enough to give up factory farmed animal products, but still doesn’t fit as a calling), and existential threats like meteors and malicious AI (which intellectually I think are important but I cannot bring myself to have an emotional response about).  EA is expanding, which is wonderful, but by design they work on a very large scale, and in some ways what I want to do is very small.  And yet, I think it is really important I keep doing it.  Even if all it provides is a social group that thinks saving the world is good and achievable, that is really valuable.  And I think it might be more than that.

So my plan for now is to see what I can do with the resources I have.  My primary job is having dental surgery, and that limits my moonlighting options.  But I can read, I write this, I can go to and organize EA events (even if I have to leave my own event early from pain and exhaustion).  I’ve done some work at crisis chat, and there was a brief window in which I was even able to program.*  I’m talking to a local charity that works at the intersection of childhood poverty and education about their best practices, and I’m hoping to turn that into a lesson about how to give when GiveWell doesn’t have the answer.  I have an idea for an Android app that looks pretty achievable but doesn’t exist yet, which I’m excited about.

Long term, I want to find a programming job on a project I care about, and I want to be in a position to design, not just implement.  Between that, EA, and my own projects, I’m hopeful something awesome will emerge.  My thinking here is highly influenced by The Economy of Cities, which argues that new industries arise from small incremental changes and combinations in old ones.  I think that can work on a personal scale too.

The main implications for the blog are that video game posts will now be considered on topic, and I will stop feeling vaguely guilty for the low number of hard core medical posts.

*This window opened because my pain level was so much lower after the surgery.  It closed when the surgical incision in my gums failed to heal/my jaw bone started growing out through my gums, which is intensely painful.  But we had a good thing going for a week.

Narrative Dessert Doesn’t Spoil Dinner

Spoilers in media have never bothered me.  I put this down to a preference for Shakespearean tragedies, where knowing the outcome makes it worse, and therefore better.  I also find anxiety about the outcome of a story distracting- the worst of this was when I watched Serenity, and genuinely believed they might all die, mission unaccomplished.  In one sense that was a triumph of storytelling, but I found my own anxiety blocked me from empathizing with the characters’ emotions, which is what I actually watch movies for.  Now my pattern is start movie-read plot on wikipedia-finish movie.  One of the funner parts of comic book movies is I can simultaneously read a lot of deep background (from the comic book universe) without knowing exactly what is going to happen with the actual story in front of me.

Apparently I’m not alone.  Mythcreants has a great post pointing to research about how knowing the ending affects enjoyment of a story.   They studied three genres- ironic twist stories (e.g. anything O. Henry ever wrote), mysteries, and grown up literature stories.  On a 10 point scale, subjects reported enjoying the spoiled stories about half a point more, across all three genres.

The problem with this and most psychology studies is that it was done primarily on undergrads at a fairly competitive university, many of whom are taking psychology classes.  Aside from the usual demographic issues, this is also the population CliffsNotes was invented for.  More generously, college students are reading difficult texts for comprehension all the time, and that particular brain-muscle may be tired.  It seems entirely possible that a factory worker who spends their work day on rote tasks might have more reserves to enjoy the challenge of interpreting a text without knowing where the story is going.

Revealed preference evidence is mixed.  TV is full of formulaic sitcoms and reality TV, but the long tail of rich, complex, ambiguous shows grows fatter every year.  What was once a freak thing HBO did to create an artistic backdrop for nipples is now fairly common.  Although “gritty morally ambiguous middle aged white guy” has become its own trope.  I guess the take home message here is that if you think you enjoy spoilers, you are probably right.

*Meanwhile my greatest regret about GRRM not finishing the 6th book is that it means I can no longer google where the TV show is going.

When You Call a Suicide Hotline and Things Go Poorly

For a long time now I’ve wanted to talk about the problematic side of crisis chat.  I sometimes see my co-volunteers badly mishandle things, and it feels dishonest pitching crisis chats as a resource without acknowledging the problems.  I held back partly because I would be violating two different people’s privacy if I gave any examples (one of whom is protected by HIPAA), and partly out of fear that I would discourage someone who needed it from reaching out.  But what I realized is that an overly rosy picture gaslights people who have had bad experiences at a hotline.  A lot of our callers believe nothing will ever help, or everything bad that happens is their fault, or a just punishment for their moral failings.  If you tell them the hotline always helps, and they catch a bad call, it reinforces the feelings driving them to suicide to begin with.

So what you need to know is: if you call a suicide or crisis line and feel worse afterward, it’s usually their fault, not yours.  Suicide hotline volunteers are human beings with their own set of strengths and weaknesses.  Every one of them goes into their shift hoping to make a difference, but on any given day you might catch a trainee, or someone for whom your story is too familiar and triggers projection from their own life, or your story is not familiar enough and they can’t work through the cultural differences.  None of these are reflections of you.

That’s not the only reason people feel worse after calling, of course.  Often the call is the first acknowledgement of how bad things are, after months or even years of numbness. There’s no way to make that not hurt.  But if your call deviates significantly from Hollis Easter’s description of a suicide hotline call, that is a good sign something has gone wrong on the specialist’s end.

What can you do in this situation?  Hanging up is certainly an option, and not one to be discounted.  But if you have the reserves, I encourage you to tell the specialist what is going wrong, as specifically as possible (“No, that’s not what happened”, “No, that’s not how I feel”, “Yes, I’m having thoughts of suicide but I’m not going to act on them in the next 24 hours so can we please talk about my mom”*).  If it works, great.  If it doesn’t, you can leave knowing you tried, and you’ve practiced asserting a boundary.  It’s not what you called for, and you shouldn’t have to do it, but it can be a surprisingly satisfying second best.

Or maybe that won’t make you feel better at all.  That could happen too.  But if it does, I hope that this entry lets you know that it’s not your fault, and leaves open the possibility of trying again later.

*This is a balancing act.  Some callers are genuinely pills-in-the-hand suicidal and we take a fairly aggressive stance at talking them out of it.  A lot of other people are having suicidal thoughts with no intention to act on them, and we want to be a safe place for them to acknowledge those feelings as we talk about the actual problem.  If a particular call is ambiguous, we have to default to suicide intervention.  But if you can promise us you’re not going to kill yourself in the next 24 hours, or even just for the duration of the call, we can relax.

In fact, figuring out what to talk about in general is a tricky problem.  Some people need to talk about the patterns in their life.  Some need to talk about a specific incident.  Some need to talk about feelings without regard to external events.  Some need to be walked through breathing exercises so they can stop chasing their feelings in circles.  If you know which one of these you are, tell us, we are overjoyed to act on the information.

Intelligence vs. Effort, Acknowledgement vs. Praise

Everyone knows you’re supposed to praise children for effort, not intelligence.  Praising intelligence makes them risk averse and fragile in the face of failure, praising effort makes them harder-working and resilient.  How could any caring parent or teacher do anything but change their praise to be 100% efforts-based?

Here are some things that bother me about that framing:
  1. It’s an absolute, rather than relative to our current position.  It’s entirely possible that kids need a mix of both kinds of praise, and we’re just swinging the pendulum from too much of one to too much of the other.
  2. As evidence for this, I present the fact that when East Asian kids despair-quit, they often frame it as “I’m not hard working enough”, which I’m not convinced is any better than “I’m not smart enough”.
  3. Which brings up the important point that the parallel of “you are smart” is not “you must have worked very hard” but “you are so hard working”, so the comparison is not just intelligence vs. effort but innate characteristic vs. conscious choice.  I have no problem believing those have very different effects on children.
  4. What if the child didn’t work hard?  They either believe you, in which case they won’t understand what is happening when they are faced with something that actually requires hard work, or they won’t, in which case you’re eroding the child’s trust in you in the name of increasing achievement.  Good job, science.
  5. More generally, framing everything as a result of effort is gaslighting above- and below-average children alike.  Some children get things faster than other children.  Refusing to acknowledge that sends the message that being smart or dumb is taboo, which is incredibly destructive on many levels.
  6. One of which is that it denies you and the child information that should inform their schooling.  All kids do need to learn to work hard, and that is better achieved by increasing difficulty until they find something they struggle with, rather than insisting they pretend whatever is in front of them is challenging.  It’s almost cargo-cult.
  7. This has the faint whiff of my mom’s ban on coloring books, because they limited creativity.  It is true that coloring requires less creativity than drawing, but that very factor makes them better for practicing hand-eye coordination.    Given that I’ve always overflowed with creativity but at age 13 got state-funded occupational therapy to make my handwriting legible to myself, I think a little coloring would have been okay.  Hell, there’s a trend right now for adults to color because life is hard and coloring is soothing.  Not everything needs to be about bringing out a child’s potential.
What I would rather see is success at school de-emphasized, all children exposed to a wide variety of activities so they experience being great and terrible at things, and adults accurately reflecting back what they did.  Sometimes that will be “you worked really hard”, sometimes it will be “you were really creative”, and sometimes it will be “looks like you got that one really quickly.”  Curriculum would be adjusted so that all kids experience a range of challenges, without the level of activity that challenges them being given any moral weight.

I Swallowed A Bug

Here are the arguments in favor of bug eating:
  1. Relative to traditional meats (chicken, cow, pig, sheep), bugs require many fewer resources. (This and all future comparisons will be done on a per unit edible protein basis, rather than per unit animal weight)
  2. Bugs have more trace nutrients and less fat.
  3. We care less about bug suffering than chordate suffering.  Possibly we don’t care at all.
Here are the arguments against bug eating:
  1. Bugs are gross.
Here is where 28 years of being unable to digest food becomes a super power.  Most food and essentially all protein sources strike me as gross.  So bugs aren’t that much worse than any other source, and I have a lot of practice overcoming disgust in order to eat.
My friend Brian held a bug eating night.  He explains the rationale and practicalities pretty well, so I’ll restrict myself to talking about my personal experience, which can be summed up as “a million times better than I thought it would be.”
For background: I’m trying to train myself to eat meat.  This quarter I’ve taken to cutting off slivers of salmon (for the omega-3s) and more recently duck (which is a wonderful combination of delicious when dead and malicious towards conspecifics while alive, which makes it feel a little more moral) and sauteing them until they’re charred through.  When I say slivers, I mean slivers.  I’ve been working on duck for a week and I eat at most two fingernail-clipping sized bits, prepared and eaten separately.  For salmon I might do as much as 1/2 the volume of my pinky. I have small hands.
I pre-committed to eating at least one cricket, but that was all.  The other bug was supposed to be waxworms, and waxworms are squishy.  I don’t do squishy even when it’s not bugs.  And I was going to be extremely proud of myself for just that one cricket.  Eating a new anything is a big deal for me, and it takes time to adjust.
When the moment came I ate several (along with some HCl pills), and walked away, supremely satisfied in myself for trying a new thing and not freaking out about it.  And then I started getting that itch to eat more, that means the thing in front of me has some trace nutrient I’m short on.    So I did.  And I asked for some to take home.
I got off easy on the waxworms because they were burnt so badly they ended up not being served.  But there were mealworms.  Mealworms were served as taco fillings, but as it turns out I’d rather eat a bug than a taco (the variation in textures freaks me out).  Mealworms were wetter and more fibrous, so you had to chew them more (although don’t skimp on chewing crickets, catching a leg in your throat feels gross).  They had their own taste, which I didn’t care for at first but could probably grow to be okay with.  I think I like it better than chicken (aka bad tofu) and beef, but not as much as duck or pork, and by pork I mean bacon.
At the end of the night I had a slight stomach ache.  I’d brought HCl but no digestive enzymes, and my stomach was clearly struggling to keep up.  But I get that with all new foods and any significant amount of protein, so I don’t hold it against the bugs.
Some of the ease of eating was undoubtedly the environment.  Brian, John, and their blogless roommates have a pre-existing tradition of communal meals that I love, and that makes eating easier.  It was also supremely gratifying to have other people share my attitude that the food in front of us was gross but we were going to eat it anyway.  Constantly being the only one who thinks that gets really lonely.  I flinched a little bit when I went to eat the cricket leftovers this morning.  But then I ate them, and it was fine.  Definitely better than duck, and duck is delicious.
Honestly, the biggest downside is that for all that bugs take many fewer resources than chordate meat, they are currently much more expensive.  Edible cricket runs ~$13/pound, which is as much as the grass fed free range humanely cuddled duck I get at the fancy grocery store.  I could probably grow them at home at essentially no cost, since they can live on food waste I would otherwise toss, but I’m not yet committed enough to deal with the noise.  But even at this price I plan on eating more bugs.

Any straw that doesn’t break your back must be weightless.

Toxoplasma gondii is a single-cell parasite usually associated with cat feces, although undercooked meat is the more common form of infection.  For years, everyone knew that T. gondii was totally harmless unless a pregnant woman caught it at a very particular stage in the pregnancy, at which point it caused miscarriage or devastating birth defects.  I probably learned about this younger than most because it was my parents’ official reason for not letting me have a cat while they were trying to conceive.  But eventually I got my cat and never thought about it again*, because I was not a pregnant woman.  While the concept was gross, 20% of the US and 30-60% of the world has it, so clearly it’s harmless.

Then science began to poke around a bit more.  Toxoplasmosis causes pretty drastic behavior changes in rats, as demonstrated by this adorable video of rats attempting to cuddle a cat…

…which is actually a video of a parasite attempting to get this cat to eat the rats so it can sexually reproduce in the cat’s gut.  Enjoy that mental image.  If it can have such a strong effect in rats, might it have some measurable effect in humans as well?

Yup.

First, T. gondii was always considered dangerous in immunocompromised individuals (e.g. AIDS patients).  But it gets worse.  Research revealed associations between T. gondii and lower IQ in children (which may reverse with treatment), suicide attempts, decreased novelty seeking, car accidents, lower IQ in men, greater friendliness and sexuality in women, and perhaps 20% of all schizophrenia.**

Here is what I think is going on.  The human body is incredibly robust.  It can take a number of hits and show only a very minor decrease in function.  But if you already have enough hits against you (HIV, age, genetic predisposition to schizophrenia), it can have a big effect.  Or maybe it will do nothing, but it uses up one of your hits, so when the next blow comes, you don’t have the energy to fight it.    This is why the phrase “only dangerous in immunocompromised individuals” bugs me so much.  First, everyone who doesn’t die of trauma lives at the mercy of their immune system.  Second, immune function is not bimodal.  There’s lots of people that don’t have AIDS, but do have, I don’t know, multiple chronic complex infections in their jaw requiring extensive surgery to remove.  Or they’re poor and have substandard housing and nutrition.  Or they pick up a second parasite while camping.

Telling these people- who don’t have AIDS or leukemia, but aren’t functioning at optimal either- that T. gondii, or any other aggravator, can’t affect them is like telling a working-poor person that ATM fees can’t hurt her because she’s not homeless.  It’s great that the fees are a rounding error to you, but don’t discount the cost they impose on others.

*Which turned out to be totally justified.  Owning a cat is not a risk factor for toxoplasmosis, and I happen to have been tested as part of a larger parasite screen last year and am certifiably toxoplasmosis free.

**A lot of these studies are associational, which I usually frown upon.  I find them more convincing in this case because causal studies in animals show similar effects.

…wait a second

We all know most genetics v. environment* research is done using a mix of monozygotic (identical) twins, dizygotic (fraternal) twins, and non-twin siblings, reared apart or together.  The idea is that monozygotic twins share 100% of their DNA, while dizygotic twins and non-twin siblings share 50%, so you can tease out the difference between environment and genetics that way.

The first problem is that whether twins were identical was originally assessed based entirely on looks.  But not all genetically identical twins look alike, and not all twins that look alike are genetically identical.  Mislabeling makes genetics look less influential than it is.

The second problem is that this discounts nine months in utero as an environment, when it is probably the most influential environment you will ever be in.  Some (though not all) studies use dizygotic twins vs. non-twin siblings to measure the effect of a shared uterus, but there are a lot of confounding variables there.  Worse, 75% of monozygotic twins are monochorionic (sharing a placenta), and an exceptional few are monoamniotic (sharing an amniotic sac); dizygotic twins never share a placenta or amniotic sac.  Monoamniotic pregnancies are rare and dangerous, so we don’t know much about those twins, but monochorionic twins are more alike than dichorionic monozygotic twins, despite the fact that sharing a placenta is more likely to result in unequal distributions of blood, which can have huge effects.

The third problem is that not-identical -> 50% shared genetics was a reasonable assumption to make in the 1950s, or even the 1980s, but it’s not true. You have a 50% chance of sharing any given chromosome with a full sibling, which means your average relatedness is indeed 50%, but the total percent in common could be anything between 0 and 100**.  With genetic testing as cheap as it is, there’s no excuse not to test study subjects for exact relatedness.
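The “average 50%, anywhere from 0 to 100%” claim is easy to check with a quick simulation.  This is a toy model of my own, not from any study: it ignores crossing over (the simplification the second footnote flags) and treats each of the 23 chromosome pairs independently, with each sibling drawing one of mom’s two copies and one of dad’s two copies at random.

```python
import random

def sibling_shared_fraction(n_pairs=23, rng=random):
    """Fraction of chromosomes two full siblings share, ignoring
    crossing over.  For each chromosome pair, each sibling draws one
    of two maternal copies and one of two paternal copies; a copy
    counts as shared when both siblings happened to draw the same one,
    which occurs with probability 1/2 per copy."""
    shared = 0
    for _ in range(n_pairs):
        shared += rng.random() < 0.5  # same maternal copy?
        shared += rng.random() < 0.5  # same paternal copy?
    return shared / (2 * n_pairs)

random.seed(0)
draws = [sibling_shared_fraction() for _ in range(100_000)]
print(sum(draws) / len(draws))   # mean comes out very close to 0.5
print(min(draws), max(draws))    # but individual sibling pairs vary widely
```

In 100,000 simulated sibling pairs, the average relatedness sits right at 50%, but individual pairs routinely land below 35% or above 65%, which is exactly why testing actual relatedness matters.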

*A stupid framing to begin with

**With complications from crossing over between chromosomes.  The probability math on this is straightforward, but the actual calculations are ugly because they depend on which chromosomes cross over and where.
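To make that variance concrete, here’s a quick simulation sketch under the simple no-crossing-over model (also treating all 22 autosome pairs as equal length, which real genomes aren’t): each parent passes one of their two copies of each chromosome, and two siblings share a segment when they happened to get the same copy.

```python
import random

def sibling_relatedness(n_chrom=22, rng=random):
    """Fraction of parental chromosome copies two full siblings share,
    ignoring crossing over and treating all chromosomes as equal length."""
    shared = 0
    total = 2 * n_chrom  # one maternal and one paternal copy per chromosome
    for _ in range(n_chrom):
        for _parent in range(2):
            # Each sibling independently gets one of the parent's two copies,
            # so the siblings match on this copy with probability 1/2.
            if rng.random() < 0.5:
                shared += 1
    return shared / total

random.seed(0)
samples = [sibling_relatedness() for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(f"average relatedness: {mean:.3f}")  # close to 0.5
print(f"range observed: {min(samples):.3f} to {max(samples):.3f}")
```

With only 44 independent coin flips per sibling pair, the average comes out at 50% but individual pairs land well below 40% and well above 60%.  Crossing over smooths the distribution somewhat but doesn’t collapse it to a single number.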

Depression in video games

Okay, apparently psychology and video games is my niche and I should just accept that.

If you ask most gamers for a game about depression they’d say Depression Quest*, partly because it has depression right in the name and possibly because one of the designers, Zoe Quinn, has been targeted for massive harassment.  DQ is the world’s most morose choose-your-own-adventure novel.  The descriptions of depression and the choices it leaves you are very accurate, but I left the game thinking “Boy, I am good at fighting depression.  Why don’t actual depressed people do as well as I did on this game?”  Which is of course massively unfair, and I assume not what the developers were going for.  I know other people who have liked it a lot, and it’s short and free, so certainly give it a go if you’re at all interested, but I don’t have much to say about it.

And then there is The Cat Lady.

The Cat Lady is a horror game.  If you hate being scared, or don’t want to see violence, sexualized violence, and gore, you should not play it.  I found it well done, artistically merited, and not exploitative, but it is pretty gruesome.

I like horror video games, but no genre misses its mark more often.  Many games are never scary.  Of those that are, most rely purely on jump scares, which make me twitchy but not scared, the opposite of what I want.*  The best part of being scared is when it is over.  Of games that are successfully atmospherically scary at first, most are not by the end: you’re too used to the mechanics, you’ve acclimated to the monsters, your brain has noticed none of this is actually happening.  This can ruin the experience.

BEGIN SPOILERS (not scary)

The tempo of The Cat Lady can roughly be described as spooky-creepy-CREEPY-creepy-TERRIFYING-weird-scary-spooky-….and then every scene is less creepy than the one before.  You could call this a failing, in the pattern of many horror games before it.  Or you could call it a brilliant use of the mechanics of a game to induce a particular psychological state in the user,** in this case with the goal of demonstrating the improvement in the main character’s psychological state as the game goes on.  The game starts with her suicide.  It ends with her finding her voice, making a friend, and standing up for what she thinks is right.  It felt very organic.  The player is given a lot of choice in Susan’s dialogue.  At the beginning I chose the most withdrawn and passive options, and at the end I chose the most active and courageous ones, because it felt like that’s what the character would do.  The lessening of terror felt like Susan coming into her own.

END SPOILERS

The negatives are mostly mechanical: for an atmospheric narrative game, the lack of autosave is puzzling.  The inability to manually save during dialogue, which can go 15 minutes at a stretch, is unacceptable.  The lack of even a quicksave, meaning I must hit three buttons and then type the name of a new save, and do it compulsively because I never know if the game is about to crash or I’m about to hit another 15-minute unsavable section, would be unforgivable even if the game hadn’t crashed twice at the same spot.***  The game is very talky, and it’s paced badly.  It was a very poor choice to block saves between chapters and then start every chapter with a bunch of exposition, because it meant I was leaving the game in medias res, rather than at natural down beats.  The talky bits were sometimes very interesting but sometimes very painful to get through: a lot of plumbing through dialogue trees to get to the option you already know you’re going to use.

Would I recommend this to a person who wanted to know what depression felt like?  Only a very specific person.  You’d have to be a horror fan or you’d never get past the second chapter.  And if you don’t naturally get the genre I’m not sure it would have the same effect.  Would I recommend it to a depressed person looking to see their experiences reflected in art?  Same caveats, with possibly a wider net, since depressed people will more naturally get the depression in the beginning.  The writer/designer apparently has personal experience with depression, and it shows.  Would I recommend it to someone who likes horror games?  Yes, definitely, without reservation.  It is so good.

As a side note, I think this is another piece of evidence for my evolving hypothesis about women and horror stories.  I don’t know what the statistical distribution is because I watch a very nonrandom subset, but in a world where most major movies don’t even pass the Bechdel test, horror films address a lot of “women’s issues”.  Ginger Snaps and Jennifer’s Body are about female competitiveness as they come into sexual power, Mama is about being raised by a mentally ill parent, and Drag Me To Hell is about an eating disorder.  And now The Cat Lady is about depression, and the way depressed middle aged women are treated by society.

*There is a very slight chance they’d say Shadow of the Colossus, which is an excellent game, but any connection to depression is buried deep in metaphor.

MORE SPOILERS

*I discovered something interesting when I played Condemned.  Originally the contrast on my TV was so bad I couldn’t see enemies (which, for maximum discomfort, are crazed homeless people) until they’d actually attacked me.  This was startling, but not scary at all.  I then upped the contrast so it was theoretically possible for me to see enemies ahead of time, although they were still mostly hidden.  This was much scarier.  It’s like I don’t feel fear unless something is preventable through my own actions.  Ironically the fact that The Cat Lady is a puzzle game, and thus you are never on a clock and can only die when the story says you’re definitely going to die, makes it easier for me to be scared.

**Papers, Please is the only other game I think of that does this.  It takes the mundanity of a lot of casual games and makes it a manifestation of working a soul crushing job.  I was impressed with them too.

***Non-gamers: I know it sounds like I’m overreacting, but I’m not.  Imagine if you had to walk to another room to save your place in a book on every page.

Book Review: The Child Catchers

I’ve used the words “calling” or “purpose” a few times on this blog now.  I’m not Christian, but I was raised in a Christian home in a Christian culture, and my concept of a calling is clearly steeped in that tradition.

So for me, reading The Child Catchers (Kathryn Joyce) was mostly a cautionary tale about letting a Call override the rest of your brain.  Step by step, Joyce takes you through how a large group of people who fervently believed they were doing not only the right thing, but the best thing, the thing they had been called by their God to do, destroyed the lives of countless children and ripped apart whole societies.  Some of it came from privilege/White Man’s Burden beliefs, but some of it was just that they had bad or insufficient information.

On a practical level, non-foster-care adoption seems to have the same trouble as the pharmaceutical industry: we wanted something (lifesaving medicine, care for abandoned children) but didn’t want to pay for it, so we handed the bill to the deepest pocket around (pharma companies, adoptive parents), and then we got mad when the system inevitably bent towards their point of view.  A lot of the problems in adoption stem from the fact that most systems match a parent with a specific child and then start verifying whether the child is available to be adopted.  Or the adoptive parents start picking up the mother’s expenses before birth.  The very impulse that will make these prospective parents good parents (the belief that this is their child) is incredibly destructive at this stage, and the fact that they’re required to invest a lot of money makes it worse.  It inevitably leads people to view searches for biological extended family as obstacles, or to pressure a birth mother to “keep her word” and surrender the infant.  Even if they haven’t bonded with that specific child (which I would find worrying), they may not have the money to try again.  That’s just not fair.

Rwanda has chosen a different tactic.  International families go on a waiting list.  The Rwandan government checks all potentially eligible children, which involves looking for biological family who might take in the child and making sure the birth mother wasn’t coerced, or finding an unrelated local family that would like to adopt.  By the time an international adoptive family is contacted, the chances of something going wrong are minuscule.

Callings are important, but they need to be reality checked.  That might be my new Effective Altruism slogan.