Preliminary Regression Presentations

I am more than halfway through my seventh time teaching something called Econometrics or Quantitative Research Methods or Applied Statistics or whatever you’d like to call it. I’ve taught it now at a few different levels, each requiring more or less work, more or less writing, more or less math, more or less me being totally overwhelmed by grading.

This semester, I am not having students blog. Instead, we’re taking more in-class time to discuss readings. The other big change I made is that instead of having students turn in their preliminary regression results last week, I had them make five-minute presentations to the class. It was an experiment, and it was one of those experiments that made me feel like a teaching God. I can’t recommend it enough (based on my sample of 18 students, but only 1 cluster (class)). Presentations are great because you can grade them as you go, but they also let each student in the course learn from the others’ missteps. I had students fill out peer evaluations (anonymously), so each presenter gets feedback beyond just mine, and everyone has to really understand what they’re doing in order to present it to the class.

I think they enjoyed it, too!

On being careful

Early on in my graduate career, a professor hired me to do some data cleaning on a set of historical data she and a coauthor had collected. Eventually, my data analysis and Stata skills became more useful than my data cleaning skills, and at some point she asked me to perform some sort of regression or matching analysis. I did it sort of slapdash and sent it out, returning later to find at least one big mistake. Though I presented the corrected version to her in a meeting later, she had already seen the incorrect version and begun revising the paper to fall in line with it. What followed was a 30-minute lecture on how I needed to be careful, how she couldn’t write me a good recommendation if I wasn’t careful, how specific employers wouldn’t want me if I wasn’t careful.

It is a conversation I’ve relived several times throughout my still very new career as a PhD economist, admonishing myself to be careful and diligent in all my work, but never more so than in the past week or two, as the Reinhart-Rogoff Excel error uncovered by a UMass Amherst grad student has come to light. It hasn’t gone well for them, and according to some, it may even be changing the debate on austerity in politics.

While I understand the excitement of finding something big, you would think that the bigger a deal this paper was going to be, the more careful they would have been. I once asked Robert Barro whether he thought people went easy on him because he held so much sway. Not at all, he told me; if anything, they’re harder on him.

And we should be, hard on each other, that is. We should demand transparency and replication, and not just by chance in some random graduate classroom. If evidenced by nothing other than the number of “you’re an economist, aren’t you used to being wrong?” jokes I heard this weekend, we need to be more careful.

Correlation is not Causation, clearly

Repeat after me:

Internet Explorer vs Murder Rate Will Be Your Favorite Chart Today

We discussed causation and correlation in my Methods class this morning. I generally use the ice cream sales and murder rates example, but since this chart has been floating around the internet lately, I figured I would throw it in. It got a few chuckles out of my class, from those who also wanted to insist that ice cream made people deranged and thus more likely to murder someone, but it was a good reminder nonetheless. A regression of murder rates on ice cream sales or Internet Explorer market share will have a positive and statistically significant coefficient estimate, but that doesn’t mean that either is causing more murders to occur.
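If you want to show the point rather than just assert it, a quick simulation works well in class. This sketch is mine, not from the chart or the post, and every number in it is made up: a shared driver (temperature) moves both series, and the naive regression finds a “significant” relationship anyway.

```python
# Made-up numbers: temperature drives both ice cream sales and murder
# rates, so regressing one on the other yields a positive coefficient
# even though neither causes the other.
import numpy as np

rng = np.random.default_rng(0)
n = 500
temperature = rng.normal(70, 15, n)                    # shared driver (confounder)
ice_cream = 2.0 * temperature + rng.normal(0, 10, n)   # sales rise with heat
murders = 0.5 * temperature + rng.normal(0, 10, n)     # so, mechanically, do murders

# Naive OLS of murder rates on ice cream sales (with an intercept):
X = np.column_stack([np.ones(n), ice_cream])
beta, *_ = np.linalg.lstsq(X, murders, rcond=None)
print(f"slope on ice cream sales: {beta[1]:.3f}")      # positive, though the true causal effect is zero

# Adding the shared driver collapses the coefficient toward zero:
X2 = np.column_stack([np.ones(n), ice_cream, temperature])
beta2, *_ = np.linalg.lstsq(X2, murders, rcond=None)
print(f"after controlling for temperature: {beta2[1]:.3f}")
```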

Gender norms, roles, unequal pay, and heterogeneous effects

The Economist has a nice summary of a new, forthcoming paper by Marianne Bertrand, Emir Kamenica, and Jessica Pan. An excerpt of the Economist article is below.

The paper offers some hints as to why women who could outearn their husbands choose not to work at all, or to work less. For instance, norms affect the division of household chores, but economically in the wrong direction. If a husband earns less than his wife, she might rightfully expect him to take on some additional responsibilities at home. In reality, however, if she earns more, she spends more time taking care of the household and their children than otherwise similar women in comparable families, who earn less than the husband. One wonders whether such women feel compelled to soothe their husbands’ unease at earning less.

I’m in the midst of reading the paper right now, and my first thought was that this last claim is an incredible stretch. In econometrics, a significant problem in estimation is unobserved heterogeneity. It makes sense to think that, on average, married women are different from single women, and that women who choose to have children are different from women who choose not to; by the same logic, men who marry women who earn more than they do are likely different from men who marry women who earn less.

I can certainly imagine that some women would be inclined to “soothe their husbands’ unease at earning less,” but it seems that the men who are particularly sensitive to such things wouldn’t marry a woman with greater income or greater earning potential in the first place. This is, in fact, what the authors find: women who work are less likely to marry a man who earns less, which partially explains the decline in marriage rates in the US. It also drives much of their results on divorce, which they see as arising out of the unequal division of labor in the household due to this “soothing effect.”
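To see why that kind of selection muddies the estimate, here is a toy simulation of my own; nothing in it comes from the Bertrand, Kamenica, and Pan paper, and the trait, the numbers, and the outcome are all hypothetical. An unobserved husband trait drives both who ends up in a wife-earns-more marriage and how chores are split, so the naive comparison picks up selection even when the true causal effect is zero.

```python
# Hypothetical illustration of unobserved heterogeneity, not the paper's
# data or method.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
traditionalism = rng.normal(0, 1, n)   # unobserved husband trait

# Selection: more traditional men are less likely to end up in marriages
# where the wife out-earns them.
p_wife_earns_more = 1 / (1 + np.exp(traditionalism))
wife_earns_more = rng.random(n) < p_wife_earns_more

# Wife's weekly chore hours depend on the trait; the TRUE causal effect
# of the wife earning more is set to zero.
chores = 20 + 3.0 * traditionalism + rng.normal(0, 2, n)

# The naive comparison is nonzero anyway: pure selection, no "soothing."
gap = chores[wife_earns_more].mean() - chores[~wife_earns_more].mean()
print(f"naive 'effect' of wife earning more: {gap:+.2f} hours/week")
```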

It appears to be a very thorough paper, though I’m skeptical of the instrument–men’s and women’s industry-specific wage distributions–being uncorrelated with unobserved characteristics that lead to more gender-equitable matches.

Based on the industry composition of the state and industry-wide wage growth at the national level, we create sex-specific predicted distributions of local wages that result from aggregate labor demand that is plausably [sic] uncorrelated with characteristics of men and women in a particular marriage market.

This is the instrument used by Aizer (2010) in her paper on the effect of an increase in women’s wages on rates of domestic violence. The distinction is subtle, but I find her use of the instrument much more plausible, because hospitalization-inducing violent events are far rarer than marriages where the woman earns more, which the Bertrand paper puts at about one quarter of the marriages in their sample. It seems that these wage distributions actually would be correlated with the characteristics of men and women in a labor/marriage market.
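For readers who haven’t seen this construction before, here is a rough sketch of the shift-share logic the quoted passage describes. The industries, shares, and growth rates below are invented, and the real instrument is more involved (baseline-year shares, leave-out national growth, and a full predicted wage distribution rather than a single growth rate), so read this as the gist rather than the paper’s method.

```python
# Sketch of a shift-share ("Bartik"-style) predicted wage: local industry
# mix interacted with national industry wage trends, separately by sex.
# All names and values here are invented.
import numpy as np

industries = ["manufacturing", "retail", "health"]

# Baseline employment shares in one local market, by sex (rows sum to 1).
shares = {
    "men":   np.array([0.50, 0.30, 0.20]),
    "women": np.array([0.20, 0.40, 0.40]),
}

# National wage growth by industry and sex (in the real construction,
# computed leaving out the local market, so it reflects aggregate demand
# rather than local conditions).
national_growth = {
    "men":   np.array([0.010, 0.020, 0.040]),
    "women": np.array([0.015, 0.025, 0.050]),
}

for sex in ("men", "women"):
    predicted = shares[sex] @ national_growth[sex]
    print(f"predicted local wage growth, {sex}: {predicted:.4f}")
```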

The unit of analysis

Bill Easterly put a quote on his non-blog yesterday from a Jane Jacobs book, Cities and the Wealth of Nations (now almost 30 years old), on the unit of analysis in development questions. It makes a case for considering units of analysis other than the nation.

Nations are political and military entities… But it doesn’t necessarily follow from this that they are also the basic, salient entities of economic life or that they are particularly useful for probing the mysteries of economic structure, the reasons for rise and decline of wealth.

As a labor economist, I’m kind of surprised that it’s still an issue, but it seems necessary to reiterate even 30 years after Jacobs brought it up in her book. Though Easterly and Jacobs were talking about wealth and economics in particular, I think the insight is relevant for all kinds of decision making, and especially important when we’re talking about social norms (yes, I’m on a social norms kick–it doesn’t help that a friend told me last night that all my research was boring except for the social norms stuff. I’m here all night, folks).

At the risk of sounding like an echo, I was a bit taken aback last week by how many of the people at the conference wanted to talk about scaling up to the national level, how to effect change at a national level, and how to measure national-level social norms (there was some confusion around the term here), even while admitting how watered down programs get at that level and how difficult it is to generalize across countries. Research suggests that reform and program implementation at that level are not very compatible with leveraging social norms for behavioral change, due to lack of identification with the relevant social group (the nation).

Striking a balance in data collection

A big part of my research time is spent on violence against women, gender-based violence, domestic violence, and harmful traditional practices. Though sometimes all whipped into a category of “women’s issues,” I’ve argued before that these are problems that everyone should care about, that they exert severe effects on our health and well-being as a society, emotionally, physically and economically.

Currently, I’m mired in two data collection projects, each with its own degree of hopelessness. I’ll write more later about my time in Caracas, but suffice it to say for now that there simply isn’t data available on issues like the ones I mention above. Or if it is available, no one’s going to give it to me. No surveys, no police data, no statistics on hotline use, nothing. We don’t know anything.

Conversely, in a meta-analysis of programs for adolescent girls that I’m writing with a colleague, my coauthor came upon a study suggesting that in order to correctly assess the prevalence of Female Genital Mutilation (FGM), we should subject randomly selected female villagers in rural areas to physical exams.

I was shocked and disgusted when she sent me the study. I don’t doubt for a minute that the most accurate way to gauge the prevalence of FGM is to randomly select women and examine them, but seriously? I am astounded that no one thought through the psychological consequences, for women who have already been victims of gender-based violence, of being examined by a foreigner who thinks they are lying about whether they’ve been cut.

These days, it’s a good reminder for me that in collecting data there is such a thing as too much, and such a thing as not enough. It’s all about striking a balance.

Big data and what it means for economists

Over the past few days, a couple of pieces have come out about Big Data, or rather about how economists and other social scientists are incorporating the extremely large datasets that are being collected on every one of us at every minute. Justin Wolfers, at Big Think, says “whatever question you are interested in answering, the data to analyze it exists on someone’s hard drive, somewhere.” Expanding on Wolfers, Brett Keller speculates as to whether economists will “win” the quant race and “become more empirical.” Marc Bellemare thinks (in a piece that’s older, but still relevant) that the social sciences will start to converge in their methods, with more qualitative fields adopting mathematical formalism to take advantage of how much we know about people’s lives. And in a related piece at Bloomberg, Wolfers and Betsey Stevenson expand on the boon that big data is for economics.

Notwithstanding the significant hurdles to storing and using large datasets over time (ask a data librarian today about information that’s on floppy disks or best read by a Windows XP machine; heck, look at your own files over the past ten years: can you get all the data you want from them? What would it take to get it all in a place and format where you could formally analyze it?), I find the focus on data a little short-sighted. And don’t get me wrong; I love data.

Wolfers and Stevenson think that the mere existence of data should change our models, that the purpose of theory nowadays should be “to make sense of the vast, sprawling and unstructured terabytes on our hard drives.” We do have the capability to leverage big data to gain a more accurate picture of the world in which we live, but there is also the very real possibility of getting bogged down in the minutiae that come from knowing every decision a person ever makes and extrapolating its effect on the rest of their life. It’s the butterfly-flapping-its-wings effect, for every bite of cereal you take, for every cross word your mother said to you, for every time you considered buying those purple suede shoes and stopped yourself, or didn’t. I’m being a bit melodramatic, of course, but it’s very easy, as an economist, as a graduate student, as a pre-tenure professor short on time, to let the data drive the questions you ask. It’s also often useful; I’m not saying that finding answerable questions using existing data is universally bad, by any means. But if we have tons of information on minutiae, we’ll probably ask tons of questions about minutiae, which I don’t think brings us any closer to understanding much of anything about human behavior.

On the convergence side, I worry about losing things like ethnography. It may not be my strong point, but it’s useful, and its methods and output have informed my own work; if convergence and big data mean anthropologists start relying solely on econometrics and statistics and formal mathematics, we’ll lose a lot of richness in our scholarship. I’m all for interdisciplinary work, for applying an economic lens to all facets of human interaction and decisions, but I don’t think our way of thinking should supplant another field’s. Rather, it should complement it.

Finally, incorporating big data into models that already exist will mitigate some problems (unobserved heterogeneity that can now be observed, for example), but not all of them. Controlling linearly for now-observable characteristics in a regression model has plenty of downsides, which I won’t enumerate here but which can be found in any basic treatment of econometrics or simple linear regression.
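One of those downsides is easy to demonstrate with a toy example (mine, not from any of the pieces above): if a now-observable confounder enters the outcome nonlinearly, observing it and controlling for it linearly still leaves bias.

```python
# The true effect of treatment is zero, the confounder enters the
# outcome quadratically, and a LINEAR control leaves the estimate biased.
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
c = rng.normal(1, 1, n)                           # a now-observable confounder
t = (c + rng.normal(0, 1, n) > 0).astype(float)   # treatment take-up rises with c
y = c**2 + rng.normal(0, 1, n)                    # nonlinear in c; true effect of t is zero

def coef_on_t(controls):
    """Coefficient on t from OLS of y on an intercept, t, and the given controls."""
    X = np.column_stack([np.ones(n), t, controls])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

print(f"linear control for c:    {coef_on_t(c):+.3f}")                           # biased away from zero
print(f"quadratic control for c: {coef_on_t(np.column_stack([c, c**2])):+.3f}")  # roughly zero
```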

Similarly, our tools for causal identification keep getting knocked down. At one time, regression discontinuity design was hot; then it got smacked down. Propensity score matching was genius, and then not so much. Instrumental variables still has the rather pesky problem that we can’t actually test one of its key assumptions, the exclusion restriction. That’s not to say these tools don’t have value. When implemented correctly, they can indeed point us to novel and interesting insights about human behavior. And we certainly should continue to use the tools we have and find better ways to implement them, but the existence of big data shouldn’t mean we throw more data at these same models, which we know to be flawed, and hope that we can figure out the world. If we’re indeed moving toward more empirical economics (which is truthfully the part I practice and am most familiar with), we still need better tools. The models, the theory, the strategies for identification have to keep evolving.
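To make the instrumental variables point concrete, here is a small illustration of my own, with hypothetical data-generating processes rather than anyone’s actual study. With a single instrument, a world where the exclusion restriction holds and a world where it fails can produce data that look equally healthy; only one IV estimate recovers the true effect, and no amount of data on (Z, T, Y) tells you which world you’re in.

```python
# Two hypothetical worlds: in one, the instrument Z affects the outcome
# only through treatment T (exclusion holds); in the other, Z also has a
# direct effect on Y. The first stage is strong either way, and nothing
# in the observed data flags the violation.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
true_effect = 1.0

def iv_estimate(direct_effect_of_z):
    """Wald/IV estimate cov(Z, Y) / cov(Z, T) under a given exclusion violation."""
    z = rng.normal(0, 1, n)
    u = rng.normal(0, 1, n)                    # unobserved confounder of T and Y
    t = 0.5 * z + u + rng.normal(0, 1, n)      # first stage
    y = true_effect * t + u + direct_effect_of_z * z + rng.normal(0, 1, n)
    return np.cov(z, y)[0, 1] / np.cov(z, t)[0, 1]

print(f"exclusion holds:    {iv_estimate(0.0):.3f}")   # close to 1.0
print(f"exclusion violated: {iv_estimate(0.5):.3f}")   # biased, and the data can't tell you
```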

Big data is part of the solution, but it can’t be the only solution.