Prior assumptions

Frequentist statistics are what you learned in school if you’ve only had a little bit of statistics.  You have a null hypothesis, and you attempt to gather data to show it’s not true.  You do this by calculating the probability that data at least as extreme as what you collected would occur if the null hypothesis were true (this is known as the p-value).  E.g. the probability of a fair coin coming up heads 10 times out of 10 is 1/(2^10) = 0.0009765625.  Generally we consider the null hypothesis disproven if the p-value is <= 0.05, so you could safely conclude that this coin was not fair.  Similarly, if you have four schools that draw from similar populations and have similar teachers, and one scores three orders of magnitude better than the others on standardized tests, a frequentist will conclude that that school is a lot better than the others.
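To make the coin example concrete, here's a small sketch of that p-value calculation. It uses the exact binomial tail probability (summing over "at least this many heads"), which for 10 out of 10 reduces to the 1/2^10 figure above; the function name is just for illustration.

```python
from math import comb

def binomial_p_value(heads: int, flips: int, p_fair: float = 0.5) -> float:
    """One-sided p-value: probability of seeing at least `heads` heads
    in `flips` tosses of a coin whose true heads-probability is `p_fair`."""
    return sum(
        comb(flips, k) * p_fair**k * (1 - p_fair) ** (flips - k)
        for k in range(heads, flips + 1)
    )

p = binomial_p_value(10, 10)
print(p)          # 0.0009765625, i.e. 1/2**10
print(p <= 0.05)  # True: by the usual threshold, reject "the coin is fair"
```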
Bayesian statistics, on the other hand, asserts that we know something about the likely distribution before we collect the data, and that it is good to use that knowledge.  In that school example, a Bayesian might say “well, these schools are very similar, so that difference probably mostly represents a statistical fluke.”  To me, who was raised in a frequentist household, this looks like an exercise in assuming what you’re trying to prove.  But I think I finally see its value.
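Here's a sketch of how a prior pulls an extreme observation back toward what you already believed, using the coin again. The prior strength (50 imaginary heads in 100 imaginary tosses) is a made-up number purely for illustration; this is the standard beta-binomial posterior mean, not anything from the study discussed below.

```python
def posterior_mean(heads: int, flips: int,
                   prior_heads: float, prior_flips: float) -> float:
    """Beta-binomial posterior mean: treat the prior as `prior_flips`
    imaginary tosses, of which `prior_heads` came up heads."""
    return (heads + prior_heads) / (flips + prior_flips)

# Frequentist point estimate after 10 heads in 10 flips:
print(10 / 10)  # 1.0 -- the coin "always" lands heads

# Bayesian estimate with a strong prior that coins are roughly fair
# (as if we'd already seen 50 heads in 100 tosses):
print(posterior_mean(10, 10, 50, 100))  # 60/110, roughly 0.545
```

The surprising data moves the estimate, but the prior keeps it from swinging all the way to "this coin always lands heads" on ten flips.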

The other day I posted a link to this blog post on facebook.  The post reported on a study that showed a large difference in how law partners reacted to the same paper when told it was written by a black or a white student- they wrote more negative comments, were less encouraging, and found more typos.  A friend of mine responded with “I believe there is systemic racism, but…”, which is never a great start.  He went on to make a bunch of criticisms which would have been extremely fair to make in a journal club discussing a peer-reviewed study, but in context came across as reinforcing the cultural pattern of “I refuse to believe racism is a factor until it’s proven in each specific case.”  Forcing victims of systemic discrimination to prove the discrimination gives them an additional burden on top of all that discrimination.  It’s not right and it’s not fair.  It’s also heavily intertwined with the idea that falsely believing something was affected by race is worse than ignoring discrimination.

At the same time, I dislike the argument “you’re technically correct” (which he absolutely was- the study was done by some consulting firm and didn’t even include statistics beyond the mean) “but don’t say it because it will encourage racists.”  Nothing good comes from quieting science.

What I eventually realized was that what we really needed was to argue about our Bayesian priors.  His frequentist criticisms were implicitly asserting a null hypothesis of “no racism”, while I was trying to include the fact that we knew racism existed and should consider that when evaluating the study- i.e. I wanted to do a Bayesian analysis with “some racism” as a prior.  Once we said this we could have had a sensible argument about our priors.  We didn’t, because by the time I thought of this we’d already reached the point where the conversation needed to end.  But we could have.

So now I see the value of Bayesianism.  Frequentism is much harder to manipulate, but that also makes it less sensitive to reality.  You make your best guess at which technique works best for your particular question, and then you accept that involving numbers does not magically make you unbiased.