I feel like half the time I read something from one of these data news explainer sites, I want to blog about how silly it is. So while I’ve been wrestling with what to write here regarding a series of terrible NYT op-eds (no, I won’t link to them, but you know which ones I’m talking about), I will take a minute to call out 538 for publishing this article complaining that giving students free lunch is going to make data analysis difficult.
It’s absolutely true that students receiving free lunches is a proxy for student poverty. In fact, in my own teaching, we talk about proxy variables by examining a data set of school characteristics and student achievement scores. We actually run regressions where I encourage students to think about socioeconomic status and poverty through school lunch programs (along with other measures). But it’s also a rather coarse measure. In the way that school lunch programs have traditionally been applied, if your family income falls below some threshold, you get free lunch, and in some cases, free breakfast. In Colorado, for a family of four, that threshold is $44,123. While the measure is useful for looking at broad categories, it doesn’t tell you anything about the heterogeneity within those categories. The share of kids qualifying for free lunch could be the same at two different schools, but if one school is in a relatively homogeneous district with most families hovering around the cutoff point and the other pulls from one very rich area and one very poor area, treating those schools as the same actually “muddies the waters” far more than expanding the program would.
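To make that concrete, here’s a toy simulation (all numbers hypothetical except the $44,123 cutoff) of the two-schools scenario: one homogeneous district clustered near the cutoff, one bimodal district drawing from a very poor and a very rich area. The free-lunch shares come out roughly the same even though the income distributions are wildly different.

```python
import random
import statistics

random.seed(0)
CUTOFF = 44_123  # Colorado free-lunch income threshold, family of four

# School A: homogeneous district, most families hovering around the cutoff.
school_a = [random.gauss(44_000, 5_000) for _ in range(500)]

# School B: pulls from one very poor area and one very rich area.
school_b = ([random.gauss(20_000, 5_000) for _ in range(250)] +
            [random.gauss(120_000, 15_000) for _ in range(250)])

for name, incomes in [("A", school_a), ("B", school_b)]:
    share = sum(inc < CUTOFF for inc in incomes) / len(incomes)
    print(f"School {name}: free-lunch share = {share:.0%}, "
          f"income spread (sd) = ${statistics.stdev(incomes):,.0f}")
```

Both schools land around a 50% free-lunch rate, but School B’s income spread is an order of magnitude larger, which is exactly the heterogeneity the proxy hides.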
So it’s not actually a great measure anyway, which we’ve kind of already covered by calling it a “proxy.” So why not look for better measures? The article mentions education levels of parents; that’s a good one. Economic variables of the surrounding districts could work too. Property values, for instance, are widely available and could be linked to school districts. This is a little more work, perhaps, because often these variables aren’t automatically linked to school quality data.
It’s true; we don’t like change. And changing a commonly used measure of poverty means looking for new answers, and trends over time will be a bit difficult to determine for a while. But with a little hard work and ingenuity, the new answers should be better. Decrying the end of a poor measure of socioeconomic status when its expansion will actually help a lot of kids at the margin just isn’t very useful. Why not spend a little more time thinking about how we can make data better, answer questions more fully, and ultimately improve school experiences for kids?