Gender-based variation in grading and teacher attitudes.

Jezebel (via NYT): “Girls Outscore Boys on Math Tests, Unless Teachers See Their Names”
New York Times:  “How Elementary School Teachers’ Biases Can Discourage Girls From Math and Science”
Study Abstract:  “We’re going to skip explaining how we proved gender bias and just talk about its effects”
Actual Study (no public link): “Young Israeli girls outscore boys on anonymously graded national math exams but receive lower classroom grades, and they eventually begin to score below boys on national exams as well. The size of the discrepancy in scores is positively correlated with the discrepancy in teacher attitude reported by boys and girls. This pattern does not hold for English or Hebrew.”

I went into reading this study pretty guns blazing, but it actually looks quite well done and robust. You could argue that the teachers and the tests are evaluating different things and that the teachers’ goals are not necessarily worse, but

  1. Stereotypically, girls are better at pleasing teachers than boys.  And that is in fact the pattern we see in Hebrew and English.
  2. Teacher grades that were biased downward were correlated with a decrease in girls’ performance in later grades (beyond what would be predicted by the low grades alone). The best-case scenario is that the teachers are spotting some hidden weakness in the girls that the lower-grade tests didn’t cover.  Except…
  3. Grade bias was positively correlated with negative student reports of the teachers’ attitudes, and specifically with discrepancies in the attitude reported by girls and boys (sketched below).
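
For the statistically inclined, here is a rough sketch of the correlation in point 3. This is not the authors’ code or data; the file name and column names are invented for illustration. The idea: per class, take the gap between girls’ anonymously graded and teacher-assigned scores, and correlate it with the girl–boy gap in how students rate the teacher.

```python
import numpy as np
import pandas as pd

# Hypothetical data with assumed columns:
# class_id, gender, blind_score, teacher_score, attitude_rating
df = pd.read_csv("classroom_scores.csv")  # placeholder file, not the study's data

girls = df[df["gender"] == "F"].groupby("class_id")
boys = df[df["gender"] == "M"].groupby("class_id")

per_class = pd.DataFrame({
    # How much lower girls score when the grader can see their names
    "grade_bias": girls["blind_score"].mean() - girls["teacher_score"].mean(),
    # How much less favorably girls rate the teacher than boys do
    "attitude_gap": boys["attitude_rating"].mean() - girls["attitude_rating"].mean(),
}).dropna()

# The claimed pattern: classes with bigger grading gaps also show
# bigger girl-boy gaps in reported teacher attitude
print(np.corrcoef(per_class["grade_bias"], per_class["attitude_gap"])[0, 1])
```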

So the actual study is pretty impressive, and astonishingly so for being in the field of education. Touché, Lavy and Sand. I also found it interesting that bias against girls was strongly correlated with the socioeconomic status of the girls in the class as a whole, but not with any individual girl’s SES. For example, having a poor girl from a large family with uneducated parents in the class lowered the grades of the other girls in that class, regardless of their own status, which suggests all kinds of unpleasant things.

The popular reporting on this paper is less impressive. Jezebel flat-out lies, implying that the same test was graded once blindly and once with the name (but no other data) available, which led to 100 comments asking how math grades could even vary that much, and 100 other comments saying “partial credit for showing work”. The New York Times isn’t quite so egregious, but it does describe the setup as “The students were given two exams, one graded by outsiders who did not know their identities and another by teachers who knew their names.” That’s technically true, but it implies that the two exams were much more similar than they actually were. I expect this kind of crap from Jezebel, but the New York Times shouldn’t have to sensationalize results that are already this interesting.
