Data inanities

I feel like half the time I read something from one of these data news explainer sites, I want to blog about how silly it is. So while I’ve been wrestling with what to write here regarding a series of terrible NYT op-eds (no, I won’t link to them, but you know which ones I’m talking about), I will take a minute to call out 538 for publishing this article complaining that giving students free lunch is going to make data analysis difficult.

It’s absolutely true that students receiving free lunches is a proxy for student poverty. In fact, in my own teaching, we talk about proxy variables by examining a data set of school characteristics and student achievement scores. We run regressions in which I encourage students to think about socioeconomic status and poverty through school lunch programs (along with other measures). But it’s also a rather coarse measure. In the way that school lunch programs have traditionally been applied, if you fall below some income threshold, you get free lunch, and in some cases, free breakfast. In Colorado, for a family of four, that threshold is $44,123. While the measure is useful for looking at broad categories, it doesn’t tell you anything about the heterogeneity within those categories. The number of kids qualifying for free lunch could be the same at two different schools, but if one school is in a relatively homogeneous district with most families hovering around the cutoff point and the other pulls from one very rich area and one very poor area, treating those schools as the same “muddies the waters” far more than expanding the program does.
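To see why this matters, here’s a toy Stata simulation, with entirely invented numbers, of two schools that look identical on the free-lunch proxy but have very different income distributions:

```stata
* Simulated data only: two schools, roughly the same share of families
* below the cutoff, wildly different income spreads.
clear
set seed 1
set obs 1000
gen school = _n > 500                        // school 0 vs. school 1
* school 0: incomes tightly clustered near the cutoff
* school 1: a mix of very rich and very poor families
gen income = cond(school == 0, 44123 + rnormal(0, 2000), ///
                               44123 + rnormal(0, 40000))
gen frl = income < 44123                     // the proxy: qualifies for free lunch
bysort school: summarize frl income          // similar frl means, very different sds
```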

So, it’s not actually a great measure anyway, which we’ve kind of already acknowledged by calling it a “proxy.” So why not look for better measures? The article mentions parents’ education levels; that’s a good one. Economic variables for the surrounding district could work, too. Property values, for instance, are widely available and could be linked to school districts. This is a little more work, perhaps, because these variables often aren’t automatically linked to school quality data.

It’s true; we don’t like change. Changing a commonly used measure of poverty means looking for new answers, and trends over time will be a bit difficult to determine for a while, but with a little hard work and ingenuity, the new answers should be better. Decrying the end of a poor measure of socioeconomic status when its expansion will actually help a lot of kids at the margin is just not very useful. Why not spend a little more time thinking about how we can make data better, answer questions more fully, and ultimately improve school experiences for kids?


DV is (in all likelihood) not lower among NFL players

This past week, Benjamin Morris of FiveThirtyEight published an article claiming to show that NFL players are not nearly as violent toward their significant others as one might think given the rash of disheartening news lately. Using crime data, he attempts to show that arrest rates for domestic violence among NFL players are lower than in his comparison group.

Morris takes arrest records from the NFL and compares them to arrest records for 24-29-year-old men. This is the first problem with his analysis. He finds that the average age of an NFL player is 27-29, and so claims the relevant comparison group is 24-29-year-old men. But it’s not: the average age of an NFL player may be 27-29, but the distribution of ages among NFL players is much wider than 24-29. Severe physical domestic violence, like many types of crime, is highest among young men and drops off in older age groups. This is a well-documented phenomenon for violent crime, though I’d argue it’s less well understood for domestic violence. So while there may not be many 38-year-olds in the NFL, comparing NFL players to 24-29-year-olds is inherently a problem, and it biases him away from finding rates similar to the national average.
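If you wanted to fix the age-mix problem, the textbook move is direct standardization: apply national age-specific arrest rates to the NFL’s actual age distribution and compare that benchmark to the NFL’s overall rate. A minimal Stata sketch, where every rate and share below is invented purely for illustration (none of these numbers come from the article):

```stata
* Direct age standardization with made-up rates and shares.
clear
input byte agegrp double(rate_natl share_nfl)
1 .0100 .20
2 .0080 .55
3 .0050 .25
end
label define ag 1 "20-24" 2 "25-29" 3 "30+"
label values agegrp ag
gen contrib = rate_natl * share_nfl       // national rate weighted by NFL age share
quietly summarize contrib
display "age-standardized benchmark rate: " %7.5f r(sum)
```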

So why not just take the abuse by 24-29-year-olds in the NFL? That would likely lead to some sample size issues, but perhaps it would be better? Not really. Even if we accept his comparison group on the basis of age, it has other issues.

That NFL players are public figures and wealthy makes them less likely to be arrested for (at least) three reasons. One is that the incentives are aligned such that victims will be less likely to call the police.* The potential for significant media attention on your private life is a huge deterrent for victims, who are often hiding the abuse even from family and friends. Second, also regarding the victim’s incentives, the financial losses from an NFL player being suspended or expelled are huge, both in absolute terms and relative to career earnings. If you miss two games of a 40-game career, that’s significant. A financially dependent significant other also suffers if that happens: financially, first, but also if the abuser escalates the abuse as punishment for help-seeking.** Third, I’d guess that a lot of police officers are football fans, and police officers in many places have discretion over whether to arrest someone. Some don’t, obviously; there are mandatory arrest laws in many places, though they’re variably enforced (we can talk about those some other time), but in all likelihood, many officers retain some discretion. Barring any good evidence, I’d venture to guess that for a given 911 domestic violence call, your average 24-29-year-old is more likely to be arrested than your average NFL player. And for a given domestic violence incident, significant others of NFL players are less likely to call 911 than your average victim. Again, this biases the arrest rate of NFL players downward and away from the national average.

So maybe the comparison group should be other wealthy public figures. Income and prestige clearly play a role here, and that role is ignored when you compare arrest rates in the general population to those of a small, elite group of athletes. Compare NFL players to basketball players or baseball players, or better yet, compare them to football players who got cut. Free research idea: check the rosters of NFL players who were cut and see how often they get arrested for domestic violence. That would probably give a better picture of what the arrest rate would look like for NFL players absent the prestige and income issues. But again, you can’t really compare NFL players to the general population because the income and fame issues are so salient.

There’s certainly a possibility that rates of DV are actually lower, even after controlling for all of these issues. I won’t deny it’s possible that NFL players are less likely to be abusers than other young men. They are public figures, so one might think they pay a greater cost for behaving badly, that social strictures might govern their behavior. But history tells us otherwise: recall Ben Roethlisberger’s return to football, the speculation that Ray Rice might return as well, media outlets checking to “see how Ray was doing” after Roger Goodell imposed a suspension, the legions of female (!!) fans decked out in Ray Rice gear at the next Ravens game, and so on. The social costs don’t look very high to me, and until now, as the NFL revises its policy on DV, the financial costs have been limited as well.

NFL players also might differ somehow from other young men. Perhaps the dedication and determination needed to succeed in the NFL make you somehow less violent. It’s one explanation for Morris’s conclusions, though one that doesn’t hold much water in my view. They could also differ in ways that make them more violent; it’s not really clear.

In any case, lower arrest rates don’t mean lower prevalence rates. Wrong comparison group, wrong metric, wrong conclusions.

And finally, reading an article about crime and domestic violence by a man who spends part of the article admitting he knows nothing about crime statistics is just absurd. You’re a journalist; it’s your job to ask someone who does know. Any number of experts and papers could have helped you do a better job, even with the bad data. You would totally fail my econometrics class.

Some extra notes:

* Victims are well aware of the possible consequences of calling the police. While some incidents are public and police involvement is unavoidable, most happen in relative privacy, and the victim decides whether to involve the police. Reporting rates for domestic violence are astoundingly low, and many victims don’t want to involve the police at all. Among those who do, many just hope the police will help him cool off a bit; they don’t actually want action taken against him.

** Many victims are financially dependent on their abusers, and calling the police might mean being unable to provide for themselves or their children for a short time (if he’s held in jail for the day, perhaps) or a longer time (if he is incarcerated or she decides to leave). Abusers physically and emotionally control victims through any number of channels: physical violence, instilling fear, controlling income, preventing them from working, and more. One victim’s story I remember clearly: in order to go shopping, she had to go to the store and write down the prices of everything she wanted to buy, then return home, where her husband would tally the prices, calculate the sales tax, and give her exactly that much money to go back to the store and make her purchases. He would check the receipts when she came home to make sure she hadn’t kept any money for herself. I’ve talked to women who spent years collecting pennies from the couch and stealing dimes out of their husbands’ pockets to gather enough money to leave. These examples may seem extreme, but they’re not all that uncommon. Financial dependence is a real barrier to women leaving violent relationships and calling the police.

You can imagine how this compounds when short-lived, high incomes are involved. If your partner is in the NFL and your calling the cops means he misses two games of a 70-game career, that’s a lot of money, both in absolute terms and relative to his expected lifetime earnings. Taking away the abuser’s income also takes away the victim’s livelihood, which means victims might be less likely to call the police when the financial stakes are higher. While the censure of domestic abusers in the NFL coming from players and the media is laudable, I worry that a new policy, one in which players receive 6-game or even longer suspensions, may actually reduce reporting for this group.

The impact of rainfall, directly

As a development and labor economist, I rarely see colleagues concerned with the impact of rainfall, full stop, on anything. We’ve become so accustomed to seeing rainfall used as an instrumental variable, a pathway to causal results, rather than as a driver of some effect in and of itself. A new working paper by David Levine and Dean Yang (gated), however, looks at rainfall itself, or rather at deviations from mean rainfall levels, which is actually pretty important. If we’re going to use rainfall as an instrument, or think of it as an exogenous shock that can be modeled linearly (or non-linearly, but modeled nonetheless), then it’s a good idea to make sure those assumptions actually hold.

Abstract here:

We estimate the impact of weather variation on agricultural output in Indonesia by examining the impact of local rainfall shocks on rice output at the district level. Our analysis makes use of local meteorological data on rainfall in combination with government administrative data on district-level rice output in the 1990s. We find that deviations from mean local rainfall are positively associated with district-level rice output. 10% higher rainfall leads metric tons of rice output to be 0.4% higher on average. The impact of rainfall on rice output occurs contemporaneously (in the same calendar year), rather than with a lag. These results suggest that researchers should be justified in interpreting higher rainfall as a positive contemporaneous shock to local economic conditions in Indonesia.
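For intuition, the specification the abstract describes presumably looks something like a district-level panel regression of log rice output on log rainfall with district and year fixed effects. Here’s a sketch of my reading, not the authors’ actual code; the file and variable names are hypothetical:

```stata
* Hypothetical file and variable names; my reading of the specification,
* not the authors' code.
use rice_districts, clear           // panel: district x year, 1990s
xtset district year
gen ln_rice = ln(rice_output)
gen ln_rain = ln(rainfall)
xtreg ln_rice ln_rain i.year, fe    // district FE plus year dummies
* the abstract's estimate implies _b[ln_rain] of roughly 0.04:
* a 10% increase in rainfall maps to about 0.4% more rice output
```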

The second life of RCTs and implications for child development

In the last few weeks, I’ve come across two research programs (each with a few related papers) that combine an RCT or phased-in intervention with follow-up data 7-10 years on to examine new research questions. Both happen to focus on the lasting effects of childhood health and wellbeing initiatives, but I doubt the trend will be confined to child health and literacy. Barham, Macours, and Maluccio have a few papers (gated) that use the phasing in of a conditional cash transfer program in Nicaragua to test later childhood cognitive and non-cognitive outcomes, distinguishing effects by the timing of the intervention. A working paper out last week shows that deworming programs in Uganda not only improved short-term anthropometric outcomes but also contributed to children’s numeracy and literacy several years later.

In short, we’re seeing more evidence that these early health and wellbeing interventions can have profound impacts not just on the immediate outcomes (under-5 mortality, school attendance, etc.) but also on outcomes much later in life. I think it’s a neat use of experimental design to examine questions we might not have thought about when the programs were first put in place.

Central Limit Theorem in action

I had my #lafecon213 students run Monte Carlo simulations in class yesterday using a program we wrote in Stata. After we’d done the general one, I told them to change something about it and see how it affected the sampling distribution of the coefficient estimates. One student decided to run 100,000 repetitions of the simulation, not realizing what a time suck it would be. It took most of the rest of my lecture (surprisingly long, now that I think about it; perhaps I should complain to IT? I just tried it myself, and I’m pretty sure he actually ran 1,000,000 repetitions), but when he finally had a histogram, I put it up on the big screens in my awesome smart classroom, broke into a huge grin, and exclaimed, “Isn’t it pretty?!”

If they didn’t think I was crazy before, they definitely do now. It took at least three minutes for them to stop laughing at me.

You have to admit it’s really pretty, no?

[Figure: histogram of coefficient estimates from the Monte Carlo repetitions]
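For the curious, here’s a minimal sketch of the kind of program we wrote; this isn’t the actual classroom code, and the sample size, parameters, and rep count are illustrative:

```stata
* Monte Carlo of OLS slope estimates; crank up reps() at your own risk.
clear all
program define mcsim, rclass
    drop _all
    set obs 50                       // a small sample each repetition
    gen x = rnormal()
    gen y = 1 + 2*x + rnormal()      // true slope is 2
    regress y x
    return scalar b = _b[x]          // store the slope estimate
end

simulate b = r(b), reps(1000) seed(123): mcsim
histogram b, normal                  // approximately normal, per the CLT
```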

Another seminar, another live-tweeting session

We’ve been having lots of seminars here at Lafayette this month. It’s been super fun to read my students’ tweets as they go along, so once again I’ve Storified last week’s seminar for you all; this time the speaker was Michael Clark of Trinity College. We were also lucky to have my colleague Chris Ruebeck tweeting alongside the students. I think he enjoyed it, too.

Parsing Cost-Benefit Analysis

The more time I spend working in development economics, the more I see the tools of cost-benefit analysis applied to all sorts of regulatory and programmatic questions. Undertaking a CBA is a skill that wasn’t heavily emphasized in my undergraduate or graduate programs (though we were certainly introduced to it), but it seems to be more and more important.

Cass Sunstein, a professor at Harvard, has a piece in the Columbia Law Review on his experience performing and evaluating cost-benefit analyses for the US government. As someone knee-deep in a cost-benefit analysis with myriad complications, I find it nice to see that the important and interesting questions are hard for everyone else, too.

The abstract is here, but all 46 pages are worth a read (there are lots of footnotes, so it goes fast if you ignore them):

Some of the most interesting discussions of cost-benefit analysis focus on exceptionally difficult problems, including catastrophic scenarios, “fat tails,” extreme uncertainty, intergenerational equity, and discounting over long time horizons. As it operates in the actual world of government practice, however, cost-benefit analysis usually does not need to explore the hardest questions, and when it does so, it tends to enlist standardized methods and tools (often captured in public documents that are binding within the executive branch). It is useful to approach cost-benefit analysis not in the abstract but from the bottom up, that is, by anchoring the discussion in specific scenarios involving tradeoffs and valuations. In order to provide an understanding of how cost-benefit analysis actually works, thirty-six stylized scenarios are presented here, alongside an exploration of how they might be handled in practice. A recurring theme is the importance of authoritative documents, which may be altered only after some kind of formal process, one that reflects a form of “government by discussion.” Open issues, including the proper treatment of nonquantifiable values, are also discussed.

Preliminary Regression Presentations

I am more than halfway through my seventh time teaching something called Econometrics or Quantitative Research Methods or Applied Statistics or whatever you’d like to call it. I’ve taught it at a few different levels, each requiring more or less work, more or less writing, more or less math, and more or less of me being totally overwhelmed by grading.

This semester, I am not having students blog. Instead, we’re taking more in-class time to discuss readings. The other big change I made: instead of having students turn in their preliminary regression results last week, I had them give five-minute presentations to the class. It was an experiment, and it was one of those experiments that made me feel like a teaching God. I can’t recommend it enough (based on my sample of 18 students, but only 1 cluster (class)). Presentations are great because you can grade them as you go, but they also let each student in the course learn from the others’ missteps. I had students fill out peer evaluations (anonymously) so they’d get feedback beyond just mine, and they have to really understand what they’re doing in order to present it to the class.

I think they enjoyed it, too!

On being careful

Early in my graduate career, a professor hired me to do some data cleaning on a set of historical data she and a coauthor had collected. Eventually, my data analysis and Stata skills became more useful than my data cleaning skills, and at some point she asked me to perform some sort of regression or matching analysis. I did it rather slapdash and sent it off, returning later to find at least one big mistake. Though I presented the corrected version to her in a meeting later, she had already seen the incorrect version and begun changing the paper to fall in line with it. What followed was a 30-minute lecture on how I needed to be careful, how she couldn’t write me a good recommendation if I wasn’t careful, how specific employers wouldn’t want me if I wasn’t careful.

It is a conversation I’ve relived several times throughout my still very new career as a PhD economist, admonishing myself to be careful and diligent in all my work, but never more than in the past week or two, as the Reinhart-Rogoff Excel error uncovered by a UMass-Amherst grad student has come to light. It hasn’t gone well for them, and according to some, it may even be changing the political debate on austerity.

While I understand the excitement of finding something big, it seems that the bigger a deal this paper was going to be, the more careful they should have been. I once asked Robert Barro whether he thought people went easy on him because he held so much sway. Not at all, he told me; if anything, they’re harder on me.

And we should be, hard on each other that is. We should demand transparency and replication, and not just by chance in some random graduate classroom. If the number of “you’re an economist, aren’t you used to being wrong?” jokes I heard this weekend is any evidence, we need to be more careful.

Correlation is not Causation, clearly

Repeat after me:

Internet Explorer vs Murder Rate Will Be Your Favorite Chart Today

We discussed causation and correlation in my Methods class this morning. I generally use the ice cream sales and murder rates example, but since this chart has been floating around the internet lately, I figured I would throw it in. It got a few chuckles out of my class, even from those who wanted to insist that ice cream makes people deranged and thus more likely to murder someone, but it’s a good reminder nonetheless. A regression of murder rates on ice cream sales or Internet Explorer market share will have a positive and statistically significant coefficient estimate, but that doesn’t mean either is causing more murders to occur.
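If you want to see the mechanics for yourself, a quick Stata simulation with an invented common cause (temperature) reproduces the spurious correlation and shows it vanish once you control for the confounder; every number here is made up:

```stata
* Simulated data only: temperature drives both ice cream sales and murders.
clear
set seed 42
set obs 365
gen temp     = 35*runiform()               // the common cause
gen icecream = 5*temp   + rnormal(0, 20)   // sales rise with heat
gen murders  = 0.2*temp + rnormal(0, 1)    // so do murders
regress murders icecream        // positive, "significant," and meaningless
regress murders icecream temp   // conditioning on the common cause kills it
```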

