Compulsory education and girls in China

A new paper (gated) by a gaggle of economists (is this a new trend? I’ve never seen so many papers with five or six names on them as I have lately) shows that compulsory schooling in China helped raise average educational attainment, and did a particularly good job of getting girls to stay in school. Girls stayed in school an average of 1.17 years longer, and boys an extra 0.4 years. I’ve yet to really dig into the paper, but the authors use what looks like a neat instrument to identify the effect causally: the compulsory education law was rolled out at different times in different provinces, so the timing of exposure to the requirement differs across regions.

The abstract:

As China transforms from a socialist planned economy to a market-oriented economy, its returns to education are expected to rise to meet those found in middle-income established market economies. This study employs a plausible instrument for education: the China Compulsory Education Law of 1986. We use differences among provinces in the dates of effective implementation of the compulsory education law to show that the law raised overall educational attainment in China by about 0.8 years of schooling. We then use this instrumental variable to control for the endogeneity of education and estimate the returns to an additional year of schooling in 1997-2006. Results imply that the overall returns to education are approximately 20 percent per year on average in contemporary China, fairly consistent with returns found in most industrialized economies. Returns differ among subpopulations; they increase after controlling for endogeneity of education.

“The Returns to Education in China: Evidence from the 1986 Compulsory Education Law.”
Hai Fang, Karen N. Eggleston, John A. Rizzo, Scott Rozelle, and Richard J. Zeckhauser
NBER Working Paper No. 18189, June 2012
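
For the econometrically curious, the identification boils down to two-stage least squares: provincial implementation dates instrument for years of schooling, and the second stage recovers the return to a year of education from the wage equation. Below is a minimal sketch of that setup, not the authors’ actual code, using Python’s linearmodels package and made-up variable names (log_wage, years_schooling, law_exposure, province, plus a few controls) for a hypothetical individual-level dataset.

    import pandas as pd
    from linearmodels.iv import IV2SLS

    # Hypothetical dataset: one row per worker, with log wages, completed years
    # of schooling, basic controls, and a measure of exposure to the 1986 law
    # (which varies because provinces implemented it on different dates).
    df = pd.read_csv("china_wage_sample.csv")  # placeholder file name

    # The bracketed term is linearmodels' notation for "years_schooling is
    # endogenous and instrumented by law_exposure" (the first stage); the rest
    # is the second-stage wage equation.
    model = IV2SLS.from_formula(
        "log_wage ~ 1 + age + female + urban + [years_schooling ~ law_exposure]",
        data=df,
    )
    result = model.fit(cov_type="clustered", clusters=df["province"])
    print(result.summary)  # coefficient on years_schooling: return to a year of schooling

The paper itself is richer than this (multiple survey years, returns that differ across subpopulations), but the logic is the same: the law shifts schooling, and the exclusion restriction is that it affects wages only through schooling.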

An education story, not an age story

Like much of the changing and exciting news in demography, the New York Times’ story about births to women under 30 appears to be largely about education. Kathryn Edin, who wrote a book I’ve lauded several times in this space and use extensively in my own research, responds in an article in Harvard Magazine.

“What the article essentially got wrong is that this is an education story, not an age story,” explains Edin, professor of public policy and management at Harvard Kennedy School and a prominent scholar of the American family. She points out that 94 percent of births to college-educated women today occur within marriage (a rate virtually unchanged from a generation ago), whereas the real change has taken place at the bottom of the socioeconomic ladder. In 1960 it didn’t matter whether you were rich or poor, college-educated or a high-school dropout—almost all American women waited until they were married to have kids. Now 57 percent of women with high-school degrees or less education are unmarried when they bear their first child.

The statistic put forth by the Times obscures the real issue when we don’t take education into account. College-educated women, it seems, are waiting for marriage to have kids, while non-college-educated women are having kids before they’re married. Importantly, it’s still a large group of women who are choosing to have kids without being married, and, as I argue in my dissertation, it’s a group that merits more attention. We don’t know much about them.

Testing, incentives, and low-achieving students, redux

Last week, a few kind words from a friend turned into an extended conversation about testing structures and the incentives teachers face to help low-achieving students. Mark’s organization is unique and very cool because it targets the lowest achievers, the students Mark posited are least likely to benefit when standardized testing pushes schools to maximize pass rates. Brett Keller responded with a link to a discussion of an article from the Review of Economics and Statistics that basically confirmed Mark’s thinking.

Below is a quick summary of a long, dense paper and lessons learned. In short, Mark, yes, research backs up your intuition. From “Left Behind by Design: Proficiency Counts and Test-based Accountability” by Derek Neal and Diane Whitmore Schanzenbach:

The use of proficiency counts as performance measures provides strong incentives for schools to focus on students who are near the proficiency standard but weak incentives to devote extra attention to students who are already proficient or have little chance of becoming proficient in the near term.

Students who might just need a little extra push to get to the passing mark are going to get any extra teaching effort that the testing system encourages, and may even draw effort that would otherwise have gone to students at the ends of the distribution. This problem, at least, seems like it would unite parents of the highest and lowest achievers in protest. Low-achieving students are left behind, and high-ability students make no gains either. This system clearly benefits no one except the marginal passers and ensures that low-achieving students never have an opportunity to catch up.

Continually raising the standards only makes the distributional problem worse. In their model, an increase in the proficiency standard necessarily increases the number of high-ability students receiving extra attention and thus decreases the number of low-achieving students receiving it.
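
The mechanics are easy to see in a toy simulation. The sketch below is my own illustration of the incentive, not Neal and Schanzenbach’s model, and every number in it is made up: students get a baseline score, the school has a fixed budget of extra attention that raises a score by a known amount, and the school is judged only on its pass count.

    import numpy as np

    rng = np.random.default_rng(0)

    scores = rng.normal(50, 15, size=1000)  # baseline scores on an arbitrary scale
    cutoff = 60                             # proficiency standard
    boost = 5                               # score gain from receiving extra attention
    budget = 100                            # number of students the school can help

    # Extra attention changes the pass count only for students within `boost`
    # points below the cutoff, so a pass-count-maximizing school spends its
    # whole budget on that narrow band, closest students first.
    gap = cutoff - scores
    marginal = np.where((gap > 0) & (gap <= boost))[0]
    helped = marginal[np.argsort(gap[marginal])][:budget]

    print("students far below the cutoff:", int(np.sum(gap > boost)))
    print("of those, how many get extra attention:", int(np.sum(gap[helped] > boost)))
    # The second number is zero: helping those students never moves the pass
    # count, so they are never chosen. Raising the cutoff only shifts the
    # targeted band upward, leaving the lowest scorers just as ignored.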

The comparison was also run under low-stakes testing, where the individual student may have had something to gain by passing (avoiding summer school) but the school had little to gain. The lopsided distribution of effort didn’t appear in those cases.

Derek Neal and Diane Whitmore Schanzenbach. 2010. “Left Behind by Design: Proficiency Counts and Test-based Accountability.” Review of Economics and Statistics 92(2): 263-283.

Tests, incentives, and low-achieving students

In the midst of my paper-reading/grading marathon over the weekend, I expressed some frustration on Twitter and got some pretty wonderful responses from friends. In particular, one friend who runs a non-profit in DC sent me an immediate gchat: “I believe in you; you can do it.” It managed to snap me out of my funk and put a smile on my face, but the exchange then morphed into a discussion about the quality of students’ writing. Mark’s contention was that writing skills have in fact declined over time, largely because composition, grammar, and spelling are no longer emphasized in school curricula. Writing isn’t tested, so it isn’t taught. I confessed my inability to make a claim about the decline, given my limited tenure as a teacher and lack of good comparisons. I think I’m a pretty good writer.

That admission resulted in Mark calling me arrogant, so I had to laugh a little when Mark’s recent blog post for Reach, Inc. carried an arrogance-related title. But he also brings up another really important point regarding incentives and testing in schools.

It is true that incentives are not aligned to support the work we do. If a student comes to Reach reading in the 5th percentile, he or she can make 2-3 years of reading growth and still be labeled a failure on standardized tests. This means, in an environment with limited resources, it actually doesn’t make sense for a school to invest in that child’s learning. The incentives push schools to focus on those students that can go from failing to passing.

I’ll admit that I’m only cursorily familiar with the practices and rewards of the public school system and its testing, but I am pretty sure we haven’t gotten it right yet. A system that rewards or punishes based on the mean, the median, or a dichotomous pass/fail, and ignores distribution and progress, is necessarily going to leave a lot of students behind. As Mark suggests, it makes it nearly impossible for individual students to catch up, not only because catching up is hard work, but because there’s little immediate reward for the stakeholders who would have to do the pushing. It works the same way with writing: there’s no good way to test writing, so we don’t test it, so it isn’t emphasized in school, and writing outcomes get worse.
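
To make the measurement point concrete, here is a small, entirely hypothetical comparison of my own (no real data behind it): two schools start with identical students, one pushes only the marginal kids over the bar, and the other produces broad growth, including for the weakest students. A pass-rate metric and a growth metric rank them in opposite orders.

    import numpy as np

    rng = np.random.default_rng(1)
    cutoff = 60
    baseline = rng.normal(50, 15, size=500)  # same baseline scores for both schools

    # School A concentrates effort on students just below the cutoff; everyone
    # else gets a token gain. School B produces moderate growth for every student.
    gains_a = np.where((baseline > cutoff - 5) & (baseline < cutoff), 6.0, 0.5)
    gains_b = np.full_like(baseline, 3.0)

    for name, gains in [("A (marginal push)", gains_a), ("B (broad growth)", gains_b)]:
        final = baseline + gains
        print(name,
              "| pass rate:", round(float(np.mean(final >= cutoff)), 3),
              "| mean growth:", round(float(gains.mean()), 2))

    # A pass/fail accountability measure scores school A higher; a growth-based
    # measure scores school B higher, even though B does far more for students
    # well below the cutoff.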

Mark’s work reminded me of a paper I saw presented at CU this winter. In an RCT in Togo (or Benin? The researcher was from one of those countries and did the work in the other), the experiment tested how different incentive schemes could reward cooperation in studying for standardized tests, and how that affected outcomes for students from different parts of the ability distribution. The results make cooperation look pretty good. I, of course, cannot remember the job candidate’s name or the title of the paper, but I’m going to find it. Don’t worry.

More on Education and TFA

A week or so ago, Matthew diCarlo of the Shanker Institute published a post on the Shanker Blog exploring the link between teacher performance and the much-lauded, much-criticized, and thus controversial program Teach for America. TFA, as it is known, puts high-achieving, service-oriented college grads into classrooms in high-need areas all over the country for two years. It’s an extremely competitive program. My senior year of college, I watched several close friends navigate the process and succeed, while another close friend did not get a spot. Ironically, the friend who instead entered the education system as an emergency teacher taught for several more years than the TFAers did.

Matt diCarlo provides a quick-and-dirty review of the literature that lands on this:

Yet, at least by the standard of test-based productivity, TFA teachers really don’t do better, on average, than their peers, and when there are demonstrated differences, they are often relatively small and concentrated in math (the latter, by the way, might suggest the role of unobserved differences in content knowledge). Now, again, there is some variation in the findings, and the number and scope of these analyses are limited – we’re nowhere near some kind of research consensus on these comparisons of test-based productivity, to say nothing of other sorts of student outcomes.

The assertion, and indeed the whole post, is filled with caveats, conditions, and couching, which tells me that Matt is likely a reasonable person and certainly an economist. It also underscores how difficult it is to analyze teacher performance with standardized tests, something Dana Goldstein explores a bit today.

Both Matt diCarlo and a linked post at Modeled Behavior suggest that “talent,” at least as the private sector measures it, isn’t a good indication of teacher effectiveness. That’s interesting, but I’m curious: what is?

What makes a good teacher? At any level? I’m curious because–among other reasons–I think I’m a pretty good teacher. I would imagine that most of us like to think we’re good at our jobs. If the skills that make me a good (or average, or mediocre, or bad) teacher aren’t the same ones that would help me in other markets, what are they? And perhaps more importantly, why are we asking people in the private sector, which hasn’t enumerated the qualities of a good teacher and doesn’t reward them, what entails good teaching? And shouldn’t we figure this out before we go about firing “bad” teachers as a means of trying to improve student outcomes?

h/t @ModeledBehavior

On E-Universities

Megan McArdle tackles the future of society and universities in a recent article at The Atlantic. Responding to a post on the future of universities by Stephen Gordon at the Boston Globe, she enumerates her predictions for how societies will change if universities move to a totally online model.

Both McArdle and Gordon place great emphasis on cost, and perhaps not wrongly. Gordon claims that because an MITx-credentialed student, unburdened by student loans, can be hired for less than a regular university grad, the MITx model will win. McArdle says that the resulting economies of scale will push us all toward the cheaper option, and she thinks that’s good. But a couple of assumptions implicit in the analysis strike me as incredibly disturbing, and not just because it would likely put me out of a job.

The first is that it’s valuable to have everyone learn the same thing. I find this horrifying. Yes, it would be useful if everyone used the same computer programming language, but if they did, things wouldn’t progress. Languages would become entrenched, like the QWERTY keyboard, which we all know is inefficient and yet learn and use anyway. I think it’s great that most economists use Stata, but I also think it’s great that some use SAS, so that if I needed something done in SAS–which handles large datasets much better, while Stata is perhaps simpler to learn–I could get it done.

I want to know people who have read different books and studied different thinkers and learned different ways of studying or learning about the world. I think life would be incredibly boring otherwise.

Secondly, though McArdle mentions it, I think both authors severely underestimate the networking effect of college. McArdle says that we’ll need to find a different way to essentially make friends, but I think it’s more than that.

People I know from college represent not only many of my close friends, but also collaborators, colleagues, coauthors, references, providers of services, and directors of charities I support. If I wanted to go into investment banking or consulting or medicine or some other field, I have a list of people I would call for advice and to let them know what I was hoping to find, work-wise. I’d imagine that at least one Duke alum, if not many, would aid in my career change or become a client down the line.

This is not unique to Duke. If I’d gone to CU or Stanford or UVA or Metropolitan State, those networks would still be important, and important to my employer, not just to me. I think employers recognize this. Education signalling is not just about quality (regardless of noise levels); there’s also an assumption that who you know might matter at some point.

Besides, what the heck are journalists going to cover if researchers aren’t putting out papers and books?

Why we educate women

The World Bank’s Development Impact Blog has recently been hosting guest posts from job market candidates in economics, and a few days ago Berk Ozler, a regular contributor, synthesized some of the lessons from their papers and from one by Rob Jensen (forthcoming in the QJE). After briefly noting that some are working papers, and certainly subject to change, Ozler concludes that we’ve been going about increasing women’s educational attainment in the developing world in the wrong way. Backward, he calls it. Instead of making it easier for women to go to school by providing school uniforms or scholarships or meals, we should be concentrating on expanding women’s opportunities to work. If women see the possibility of work, higher wages, or more openings, they will likely demand more education for themselves or for their daughters.

From a purely incentive-based perspective, it makes perfect sense. If daughters are likely to bring in earnings, particularly earnings comparable to or even higher than their brothers’, then parents have an incentive to educate them. Higher earnings perhaps mean better marriage matches, but they most certainly mean better insurance for parents as they age: women with their own incomes can choose to take care of their parents.

From a feminist perspective, however, it’s a bit problematic. Such analysis implicitly values waged work over non-waged work, a problem inherent in many economics questions, most apparent in how we measure GDP. We know that increasing women’s education levels is valuable in and of itself, regardless of whether those women go on to work. More education for women means later marriage, lower fertility, reduced HIV/AIDS transmission, reduced FGM, and more.

It’s reasonable to think that regardless of how we set up the incentives–either by showcasing opportunity or by reducing the immediate costs of schooling–all of these things will happen. And certainly job creation and encouraging women to seek new opportunities to work are desirable. But if we choose to focus all of our resources on showcasing opportunity (particularly when it may set up unrealistic or very-difficult-to-achieve expectations; note that I haven’t read the Jensen paper yet), then we reinforce the idea that “women’s work,” or work in the home, is worth less than waged work.

In a world where a woman becomes educated in hopes of finding work but doesn’t find it, how does that affect her ability to make household decisions? To leave an abusive spouse? To educate her own children, male and female, equally? Jensen’s paper seems to imply that the very promise of women’s wages is enough to change bargaining power, but I wonder whether that will stick. When work is understood to be the sole goal of attaining more schooling, does failure to find it, for whatever reason, affect women’s status?
