Striking a balance in data collection

A big part of my research time is spent on violence against women, gender-based violence, domestic violence, and harmful traditional practices. Though sometimes all whipped into a category of "women's issues," these are problems that, as I've argued before, everyone should care about: they exert severe effects on our health and well-being as a society, emotionally, physically, and economically.

Currently, I'm mired in two data collection projects, both with varying degrees of hopelessness. I'll write more later about my time in Caracas, but suffice it to say for now that there simply isn't data available on issues like the ones I mention above. Or if it is available, no one's going to give it to me. No surveys, no police data, no statistics on hotline use, nothing. We don't know anything.

At the other extreme, in a meta-analysis of programs for adolescent girls that I'm writing with a colleague, she came upon a study suggesting that, in order to correctly assess the prevalence of Female Genital Mutilation (FGM), we should submit randomly selected female villagers in rural areas to physical exams.

I was shocked and disgusted when she sent me the study. I don't doubt for a minute that the most accurate way to gauge the prevalence of FGM is to randomly select women and examine them, but seriously? I am astounded that no one thought through the psychological consequences, for women who have already been victims of gender-based violence, of being examined by a foreigner who thinks they are lying about whether they've been cut.

These days, it’s a good reminder for me that in collecting data there is such a thing as too much, and such a thing as not enough. It’s all about striking a balance.

The downfall of data

The PAAs last week were all about data. The exhibits at the conference were sponsored by various longitudinal surveys such as the PSID, the Mexican Family Life Survey, the RAND FLS, and more. As I perused the poster sessions, it was amazing how many posters came from employees at the US Census Bureau. Having interviewed there last year, I was aware of their numbers, but the PAAs brought to light just how much work they are doing at the Census to illuminate American life. Beyond that, presentations used the Fragile Families and Child Wellbeing data, as I do, the NLSY, the ACS, the Mexican Migration Project, and so many more. The concentration on data was unlike anything I've seen at any other conference. Theory was definitely not a big focus.

So it was with sadness that I saw the news today that the House voted to cut funding for the American Community Survey (ACS), a Census Bureau instrument that tracks all sorts of data about Americans. I received the survey at my home in Boulder shortly after the decennial Census. My roommates, feeling survey-fatigued, refused to fill it out, but I, being the economist and a possible eventual end user of these data, went ahead. I also encouraged friends and family to fill out their Census forms.

This comes on the heels of funding cuts to the NLSY (though funding was restored for FY 2012) and a concurrent distaste in the House for political science research, and it doesn't bode well for other demographic endeavors. Economists, sociologists, anthropologists, biostatisticians, public health researchers, epidemiologists, political scientists, and more depend on these data, from studies already in existence and still to be collected, to do meaningful and interesting research. While small-scale, (sometimes) privately funded longitudinal studies like the Fragile Families study provide a good snapshot of particular groups, only nationwide, representative studies can tell us what is going on in the country as a whole.

The link above claimed the survey was an unconstitutional invasion of privacy. Which is absolute crap. The US government does things that are far more invasive than asking how many years you went to school and how many flush toilets you have. And far less useful.

Update: John Sides talks about his NSF grant and the similar cuts to funding for political science research on the Monkey Cage blog.

More of them or more willing to say it?

Today's NYT had an article on the cities with the highest proportion of gay couples. Interestingly, the list doesn't include many high-density cities or well-known gay neighborhoods. The lack of historical data and rapidly changing social norms make it difficult to tell whether there are actually more gay couples living in places like Rehoboth Beach, DE, or whether they're simply more visible and more willing to disclose their orientation.

While this limitation means we cannot make statements about the changing demographics in these cities, I think it does say something pretty profound about standards of acceptable social behavior in small towns and, to some extent, all over the country.