Gender norms, roles, unequal pay, and heterogeneous effects

The Economist has a nice summary of a forthcoming paper by Marianne Bertrand, Emir Kamenica, and Jessica Pan. An excerpt of the Economist article is below.

The paper offers some hints as to why women who could outearn their husbands choose not to work at all, or to work less. For instance, norms affect the division of household chores, but economically in the wrong direction. If a husband earns less than his wife, she might rightfully expect him to take on some additional responsibilities at home. In reality, however, if she earns more, she spends more time taking care of the household and their children than otherwise similar women in comparable families, who earn less than the husband. One wonders whether such women feel compelled to soothe their husbands’ unease at earning less.

I’m in the midst of reading the paper right now, and my first thought was that this is an incredible stretch. In econometrics, a significant problem in estimation is unobserved heterogeneity. It makes sense to think that, on average, married women are different from single women, that women who choose to have children are different from women who choose not to, and that men who marry women who earn more than they do are likely different from men who marry women who earn less.

I can certainly imagine that some women would be inclined to “soothe their husbands’ unease at earning less,” but it seems that the men who were particularly sensitive to such things wouldn’t marry a woman with greater income or greater earning potential. This is, in fact, what the authors find: women who work are less likely to marry a man who earns less, which partially explains the decline in marriage rates in the US. It also drives much of their results on divorce, which they see as arising out of the unequal division of labor in the household due to this “soothing effect.”

It appears to be a very thorough paper, though I’m skeptical that the instrument (men’s and women’s industry-specific wage distributions) is uncorrelated with unobserved characteristics that lead to more gender-equitable matches.

Based on the industry composition of the state and industry-wide wage growth at the national level, we create sex-specific predicted distributions of local wages that result from aggregate labor demand that is plausably [sic] uncorrelated with characteristics of men and women in a particular marriage market.

This is the instrument used by Aizer (2010) in her paper on the effect of an increase in women’s wages on rates of domestic violence. The distinction is subtle, but I find her use of the instrument much more plausible because hospitalization-inducing violent events are far rarer than marriages in which the woman earns more, which the Bertrand paper puts at about one quarter of the marriages in their sample. It seems that these wage distributions actually would be correlated with the characteristics of men and women in a labor/marriage market.
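For readers unfamiliar with this kind of shift-share (Bartik-style) construction, here is a minimal sketch of the basic idea in Python. The data, column names, and numbers are entirely hypothetical, and the paper itself builds full sex-specific predicted wage distributions rather than a single predicted growth rate.

```python
# A minimal sketch (hypothetical data and column names) of a Bartik-style
# predicted wage measure: interact baseline local industry shares with national
# wage growth by industry and sex, so that predicted local labor demand is driven
# by aggregate trends rather than local characteristics.
import pandas as pd

# local_shares: one row per (state, industry) with the baseline employment share
local_shares = pd.DataFrame({
    "state": ["CO", "CO", "GA", "GA"],
    "industry": ["manufacturing", "services"] * 2,
    "share": [0.4, 0.6, 0.3, 0.7],
})

# national_growth: one row per (industry, sex) with national wage growth
national_growth = pd.DataFrame({
    "industry": ["manufacturing", "services"] * 2,
    "sex": ["female"] * 2 + ["male"] * 2,
    "wage_growth": [0.01, 0.03, 0.02, 0.025],
})

# Predicted local wage growth = sum over industries of
# (baseline local industry share) x (national wage growth for that industry and sex)
merged = local_shares.merge(national_growth, on="industry")
predicted = (merged.assign(contrib=merged["share"] * merged["wage_growth"])
                   .groupby(["state", "sex"], as_index=False)["contrib"].sum()
                   .rename(columns={"contrib": "predicted_wage_growth"}))
print(predicted)
```

The exogeneity argument rests on the predicted measure varying across places only through baseline industry mix and national trends, which is precisely the assumption I find hard to swallow here.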

The unit of analysis

Bill Easterly put a quote on his non-blog yesterday from a Jane Jacobs book, Cities and the Wealth of Nations (now almost 30 years old), on the unit of analysis in development questions. It makes a case for considering units of analysis other than the nation.

Nations are political and military entities… But it doesn’t necessarily follow from this that they are also the basic, salient entities of economic life or that they are particularly useful for probing the mysteries of economic structure, the reasons for rise and decline of wealth.

As a labor economist, I’m kind of surprised that this is still an issue, but it seems necessary to reiterate even 30 years after Jacobs brought it up in her book. Though Easterly and Jacobs were talking about wealth and economics in particular, I think the insight is relevant for all kinds of decision making, and it is especially important when we’re talking about social norms (yes, I’m on a social norms kick; it doesn’t help that a friend told me last night that all my research was boring except for the social norms stuff. I’m here all night, folks).

At the risk of sounding like an echo, I was a bit taken aback last week by how many of the people at the conference wanted to talk about scaling up to the national level, how to effect change at the national level, and how to measure national-level social norms (there is some confusion around the term here), even while admitting how watered down programs get at that level and how difficult it is to generalize across countries. Research suggests that reform and program implementation at that level are not very compatible with leveraging social norms for behavioral change, due to a lack of identification with the relevant social group (the nation).

Striking a balance in data collection

A big part of my research time is spent on violence against women, gender-based violence, domestic violence, and harmful traditional practices. Though these are sometimes all whipped into a category of “women’s issues,” I’ve argued before that they are problems everyone should care about, and that they exert severe effects on our health and well-being as a society: emotional, physical, and economic.

Currently, I’m mired in two data collection projects, each with varying degrees of hopelessness. I’ll write more later about my time in Caracas, but suffice it to say for now that there simply isn’t data available on issues like the ones I mention above. Or if it is available, no one’s going to give it to me. No surveys, no police data, no statistics on hotline use, nothing. We don’t know anything.

At the other extreme, in a meta-analysis of programs for adolescent girls that I’m writing with a colleague, my coauthor came across a study suggesting that, in order to correctly assess the prevalence of Female Genital Mutilation (FGM), we should submit randomly selected female villagers in rural areas to physical exams.

I was shocked and disgusted when she sent me the study. I don’t doubt for a minute that the most accurate way to gauge the prevalence of FGM is to randomly select women and examine them, but seriously? I am astounded that no one thought through the psychological consequences, for women who have already been victims of gender-based violence, of being examined by a foreigner who thinks they are lying about whether they’ve been cut.

These days, it’s a good reminder for me that in collecting data there is such a thing as too much, and such a thing as not enough. It’s all about striking a balance.

Big data and what it means for economists

Over the past few days, a couple of pieces have come out about Big Data, or rather about how economists and other social scientists are incorporating the extremely large datasets being collected on every one of us at every minute. Justin Wolfers, at Big Think, says “whatever question you are interested in answering, the data to analyze it exists on someone’s hard drive, somewhere.” Expanding on Wolfers, Brett Keller speculates as to whether economists will “win” the quant race and “become more empirical.” Marc Bellemare thinks (in a piece that’s older, but still relevant) that the social sciences will start to converge in their methods, with more qualitative fields adopting mathematical formalism to take advantage of how much we know about people’s lives. In a related piece at Bloomberg, Justin Wolfers and Betsey Stevenson expand on the boon that big data is for economics.

Notwithstanding the significant hurdles to storing and using large datasets over time (ask a data librarian today about information that’s on floppy disks or best read by a Windows XP machine; heck, look at your own files over the past ten years: can you get all the data you want from them? What would it take to get it all in a place and format you could formally analyze?), I find the focus on data a little shortsighted. And don’t get me wrong; I love data.

Wolfers and Stevenson think that the mere existence of data should change our models, that the purpose of theory nowadays should be “to make sense of the vast, sprawling and unstructured terabytes on our hard drives.” We do have the capability to leverage big data to gain a more accurate picture of the world in which we live, but there is also the very real possibility of getting bogged down in the minutiae that come from knowing every decision a person ever makes and extrapolating its effect on the rest of their life. It’s the butterfly-flaps-its-wings effect, for every bite of cereal you take, for every cross word your mother said to you, for every time you considered buying those purple suede shoes and stopped yourself, or didn’t. I’m being a bit melodramatic, of course, but it’s very easy, as an economist, as a graduate student, as a pre-tenure professor short on time, to let the data drive the questions you ask. That’s also often useful; I’m not saying that finding answerable questions in existing data is universally bad, by any means. But if we have tons of information on minutiae, we’ll probably ask tons of questions about minutiae, which I don’t think brings us any closer to understanding much of anything about human behavior.

On the convergence side, I worry about losing things like ethnography. It may not be my strong point, but it’s useful; its methods and output have informed my own work, and if convergence and big data mean anthropologists start relying solely on econometrics, statistics, and formal mathematics, we’ll lose a lot of richness in our scholarship. I’m all for interdisciplinary work, for applying an economic lens to all facets of human interaction and decision making, but I don’t think our way of thinking should supplant another field’s. Rather, it should complement it.

Finally, incorporating big data into models that already exist will mitigate some problems (unobserved heterogeneity that can now be observed, for example), but not all of them. Controlling linearly for now-observable characteristics in a regression model has plenty of downsides, which I won’t enumerate here, but which can be found in any basic explanation of econometrics or simple linear regression.
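As one standard illustration of those downsides, here is a small simulation on made-up data (using numpy and statsmodels): a now-observable confounder enters the true model nonlinearly, and controlling for it linearly still leaves the treatment coefficient badly biased.

```python
# A minimal simulated example (hypothetical data) of one standard downside of
# linear controls: the confounder affects both treatment and outcome nonlinearly.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
c = rng.normal(size=n)                    # confounder, now "observable"
x = c**2 + rng.normal(size=n)             # treatment driven nonlinearly by the confounder
y = 0.0 * x + c**2 + rng.normal(size=n)   # true treatment effect is zero

# Linear control for c: the omitted nonlinearity loads onto x, biasing its coefficient
X_linear = sm.add_constant(np.column_stack([x, c]))
print(sm.OLS(y, X_linear).fit().params[1])    # noticeably far from zero

# Controlling flexibly (adding c**2) recovers an estimate near zero
X_flexible = sm.add_constant(np.column_stack([x, c, c**2]))
print(sm.OLS(y, X_flexible).fit().params[1])
```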

Similarly, our tools for causal identification keep getting knocked down. At one time, regression discontinuity design was hot, and then it was smacked down. Propensity score matching was genius, and then not so much. Instrumental variables still has the rather pesky problem that we can’t actually test one of its key assumptions, the exclusion restriction (spelled out briefly below). That’s not to say these tools don’t have value. When implemented correctly, they can indeed point us to novel and interesting insights about human behavior. And we certainly should continue to use the tools we have and find better ways to implement them, but the existence of big data shouldn’t mean we throw more data at these same models, which we know to be flawed, and hope that we can figure out the world. If we’re indeed moving toward more empirical economics (which is truthfully the part I practice and am most familiar with), we still need better tools. The models, the theory, and the strategies for identification have to keep evolving.
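To spell out the instrumental variables point, here is the generic textbook setup (not specific to any of the papers above):

```latex
% Structural equation and first stage, with instrument z:
y_i = \beta x_i + u_i, \qquad x_i = \pi z_i + v_i
% Relevance is testable from the data:
\operatorname{Cov}(z_i, x_i) \neq 0
% The exclusion restriction involves the unobserved error u and cannot be tested directly:
\operatorname{Cov}(z_i, u_i) = 0
```

More data makes the first-stage check more precise; it does nothing to verify the second condition.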

Big data is part of the solution, but it can’t be the only solution.

Time use and hindsight

I am in the midst of revising a paper that uses a very specific question from the Fragile Families dataset about reading to children. When I began writing the paper, I started looking for evidence in time-use surveys, such as the American Time Use Survey (ATUS), which asks participants to record everything they do, and for how many minutes, on two given days (usually a weekday and a weekend day). I noticed, particularly at the PAA meetings this spring, that there was a lot of controversy about these surveys. What, exactly, can they tell us about general effects when we are looking at such a small sample of time for any given individual? More specifically, if we want to examine the effects of a particular policy, how does looking at one individual’s day give us a causal effect of that policy?

Time-use surveys are incredibly useful for seeing exactly how an individual spends his or her time on a given day, and the possibilities for understanding the dynamics of child-rearing and marriage are far-reaching. The trade-off is that you have no way of knowing whether that day is typical. If we have a random sample of individuals and days are sufficiently randomly assigned, we should get an idea of what the population does, on average. But asking whether a particular impetus leads to a specific behavioral change (for instance, does an increase in income mean you invest more in your child’s education?) is a little more problematic.

The alternative is to ask survey questions about time-use behaviors without specifying the time. That’s what the Fragile Families survey does, and its question about how many days per week you read with your child has its own problems. I have long argued that when individuals answer the question, they must do some averaging over time. The question is not “how many days did you read with your child last week,” as might be preferred or indicated by the literature on work (did you work last week?), but rather a sort of “what do you usually do?” I’ve been surprised at how much pushback I’ve received on this point from discussants and reviewers. Most say the natural model to use is a count model, like negative binomial or Poisson, but I think it makes more sense to use an ordered probit, which allows 4 to be more than 2, but not necessarily twice as much as 2. I don’t think the reading-days answer is as firmly countable and identifiable as something like parking tickets, where a count model is the readily apparent choice.

I imagine the question is a lot like exercise. Over the weekend, I helped a friend with her match.com profile, and one of the questions is how many days a week you exercise. For some, the answer is absolutely 7, every single day. For others, zero, not lifting a finger. For most, though, I’d guess it varies from week to week. One week you go every day; the next week is busy at work, so you go less often. Perhaps you go on a whole-day hike and tell me two days instead of one because you don’t want to seem lazy. Thus, when I ask you how many days a week you exercise, you’re not really giving me a straight answer, through no fault of your own. You’re averaging over the last couple of weeks, you’re perhaps adjusting your answer to reflect what you think the surveyor is looking for, and you’re partly giving an impression of how much you value exercise.
I’m having a hard time making this same argument regarding time spent with children to discussants and reviewers, and I’m not sure what I’m missing in my explanation to make it more convincing.  
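For concreteness, here is a minimal sketch of the two modeling choices at issue, on entirely made-up data and using statsmodels; it illustrates the Poisson-versus-ordered-probit distinction rather than anything from the actual paper.

```python
# A minimal sketch (hypothetical data) comparing a Poisson count model with an
# ordered probit for a bounded "days per week" response, using statsmodels.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(42)
n = 1000
income = rng.normal(size=n)                    # hypothetical covariate
latent = 0.5 * income + rng.normal(size=n)     # latent propensity to read
# Reported "days read per week" is a coarse, bounded 0-7 response,
# not a true count of independent events.
days = np.clip(np.round(3 + 1.5 * latent), 0, 7).astype(int)

exog = pd.DataFrame({"income": income})

# Count model: treats the response cardinally, so 4 days is literally twice 2 days.
poisson_fit = sm.Poisson(days, sm.add_constant(exog)).fit(disp=False)

# Ordered probit: uses only the ordering of the categories. No constant here,
# because the estimated cutpoints absorb the intercept.
oprobit_fit = OrderedModel(days, exog, distr="probit").fit(method="bfgs", disp=False)

print(poisson_fit.params)
print(oprobit_fit.params)
```

The ordered probit keeps only the ranking of the responses, which is exactly the property I want when a reported 4 is more than a 2 but not necessarily twice as much.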

Spatial auto-correlation is not causation

There’s a strong tendency in human nature to draw distinctions along dichotomous lines. Good and evil, black and white, ugly and pretty. We all know that these distinctions only really work in children’s fiction, and even then they tend to fall flat, but we try anyway. In teaching, particularly when teaching a new subject, those dichotomies can be both useful and the downfall of a lesson.

In that vein, the instructor in my spatial econometrics workshop last week presented two significant data issues that a researcher might encounter in using spatial data: spatial heterogeneity and spatial dependence.

By way of definition, spatial heterogeneity simply means that there is something about an area or a piece of space that makes it different from the spaces around it. My dichotomizing, learning mind went immediately to the idea of observables. Clearly, if we are trying to include spatial information (location) in a regression, we know that the area has certain characteristics. As long as we explicitly control for these in our regression (and believe they are accurately measured), it doesn’t present much of a problem.

However, this is not always the case, due to the level-of-analysis problem. In a typical econometric specification, we control for the spatial unit of analysis that is relevant: county, Metropolitan Statistical Area (MSA), state, whatever it may be. By choosing the level and assigning, say, a dummy variable, we assume that all of those characteristics are captured uniquely, but also that they are assigned independently to each spatial unit. Take, for instance, the distribution of the African-American population in the United States. A regression that uses that variable as a covariate assumes that the number of African-Americans in Georgia is independent of the number of African-Americans in South Carolina, which makes little intuitive sense. Both were states with large plantation economies that used the labor of enslaved Africans in the production of goods. It makes sense that these two states, spatially proximate, would also have similar factors leading to their demographic makeup. Thus, spatial heterogeneity: areas in the South have larger Black populations than areas in the North.

The counterpart to spatial heterogeneity is spatial dependence. As with spatial heterogeneity, we see patterns in certain variables, but rather than an outside, perhaps observable and easily measurable factor accounting for the clustering, there is something inherent about the place itself that causes proximate areas to change their realization of some variable. Think of housing prices. Housing prices are higher in places with certain amenities (close to transportation, mountains, whatever), but housing prices are also higher in areas with higher housing prices. Perhaps homeowners see their neighbors selling their houses for more and thus put their own on the market for more. Or buyers see houses in the area with higher values and thus are willing to spend more. This spills over county and other lines, too; a standard way to write down that kind of neighbor-to-neighbor spillover is the spatial lag specification sketched below.
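This is the usual spatial lag specification, in which each outcome depends on a weighted average of its neighbors’ outcomes (a textbook statement, not something unique to the workshop):

```latex
% Spatial lag model: outcomes depend on a weighted average of neighboring outcomes
y = \rho W y + X\beta + \varepsilon
```

Here W is a spatial weights matrix (the subject of a later post in this series) and \rho measures how strongly each observation’s outcome responds to its neighbors’.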

Both of these problems, regardless of how strict the line between them really is, manifest as spatial auto-correlation: the variation we see between two spatially proximate observations is less than the variation between two independent observations, because the information comes from the same place. Some of this we can control for, some of it we can’t, and some of it we can try to address with the tools I’ll discuss in coming days.
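A common first diagnostic is Moran’s I. Here is a minimal sketch on a synthetic grid, using the libpysal and esda packages (my choice for illustration, not necessarily the workshop’s tools). A significant Moran’s I says only that nearby values move together; it does not say whether heterogeneity or dependence is responsible.

```python
# A minimal sketch: Moran's I on a synthetic, spatially patterned variable.
import numpy as np
from libpysal.weights import lat2W
from esda.moran import Moran

rng = np.random.default_rng(1)
side = 20
w = lat2W(side, side)                  # rook-contiguity weights on a 20x20 grid

# A smooth north-south gradient plus noise: spatially clustered by construction.
rows = np.repeat(np.arange(side), side)
y = rows + rng.normal(scale=2.0, size=side * side)

mi = Moran(y, w)
print(f"Moran's I = {mi.I:.3f}, permutation p-value = {mi.p_sim:.3f}")
```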

Regardless, it’s important to remember that spatial heterogeneity and spatial dependence look the same mathematically. Statistically, we cannot tell whether some unobservable factor caused everything nearby to be higher, or whether each observation is exerting an effect on its neighbors (a butterfly flaps its wings…). So, even after acknowledging these problems, we have not established causation.

A familiar refrain is, thus, minimally modified: spatial auto-correlation is not causation.

A note on correlation and causation (see Marc Bellemare’s primer for a more detailed explanation):

Anyone who has ever taken a statistics course is familiar with the refrain that correlation is not causation. It’s a common refrain because it’s so often ignored when statistics are cited in news articles and personal anecdotes. My favorite example is that ice cream sales and murder rates are highly correlated. Only the biggest of scrooges would believe that ice cream sales cause murder rates to increase. In the abridged words of Elle Woods, happy people don’t kill people. And in my words, ice cream makes people happy.

They do move together, though, which is essentially the definition of correlation. When ice cream sales go up, murder rates go up; when murder rates go down, ice cream sales go down. Not because one causes the other, but rather because of the seasonality of both variables. More homicides occur in the summertime, and more ice cream is sold in the summertime.

Spatial Econometrics: The Miniseries

Last week, I spent three days in a workshop (or short course) on spatial econometrics at the University of Colorado’s interdisciplinary population center, the Institute for Behavioral Science. At the beginning of last semester, many of my methods students expressed interest in doing their research papers on topics with a significant spatial component. I would have loved for them to incorporate spatial analysis, but it was a topic I had touched only tangentially, and I didn’t feel qualified to learn it while teaching that (incredibly demanding) course for the first(ish) time. In addition, having just attended the PAA meetings in San Francisco, I’ve been looking for ways to expand my econometric skills and incorporate spatial data into my work. The workshop was really fantastic. I don’t know whether they’ll host it again next summer, but do keep a lookout if you’re interested. I thought it was extremely helpful. And fun (see my nerdy tweets from last week about loving matrix algebra). Paul Voss, of the University of North Carolina’s Population Center, Elisabeth Root, and Seth Spielman were all great.

I posted a short introduction to spatial econometrics last week based on my readings for the first class, and I’m excited to share some of the things I learned. Over the next few weeks, I’ll post my thoughts in a mini-series on spatial econometrics. This post will be updated with a list of the posts in the series, so do follow along.

Experts, please keep me honest! This stuff is very cool, but I’m still a newbie.

Preliminary outline (subject to change):

  1. An introduction to Spatial Econometrics
  2. Spatial Autocorrelation is Not Causation
  3. The Weights Matrix for Spatial Analysis
  4. Some Notes on Terminology in Spatial Econometrics