I spoke with a colleague last week whose university is hiring in Development this year. I was surprised, though perhaps I shouldn't have been, that of the six candidates they are flying out, five have job market papers using Randomized Controlled Trials. Maybe that's an area their department is trying to fill, and thus that's the kind of faculty they are interviewing, but it struck me as odd, especially in light of a comment from another job market candidate.
A friend on the market in development told me that she was having a hard time selling herself as a development economist. Without an RCT (and the requisite cash that accompanies these very expensive projects), she didn't feel she was getting enough attention even to get a job. Her plan is to find a new line of research using US data in the next year and go on the market as a Labor economist.
I realize these are two very specific examples that might not be indicative of the market as a whole, but I do think that fads in economics are both fascinating and problematic. No single theoretical or empirical response to data issues is a panacea, and I wonder if we are putting too much stock in RCTs, and thus in those who were lucky or prescient enough to get into them early. There's still a lot of value in survey data, I think, and I hope we don't lose those important results to a love affair with RCTs.
I agree, and this is not only true of development economics. Bill Easterly had a discussion of this on Aid Watch. Not being specialized in economic development, I can't speak for the field, but in my own field I think surveys are useful even though they can be very difficult to conduct. However, I think the burden of proof is on you to show why your methodology and tools are (relatively) more appropriate for the question you address, whether they are surveys, experiments, RCTs, etc. I believe that your question should dictate your methodology and tools.