Last week, a few kind words from a friend turned into an extended conversation about testing structures and incentives for teachers to help low-achieving students. Mark’s organization is unique and very cool because it targets the lowest achievers, the students who, Mark posited, are least likely to benefit from the incentives standardized testing creates to maximize the pass rate. Brett Keller responded with a link to a discussion of an article from the Review of Economics and Statistics that basically confirmed Mark’s thinking.
Below is a quick summary of a long, dense paper and lessons learned. In short, Mark, yes, research backs up your intuition. From “Left Behind by Design: Proficiency Counts and Test-based Accountability” by Derek Neal and Diane Whitmore Schanzenbach:
The use of proficiency counts as performance measures provides strong incentives for schools to focus on students who are near the proficiency standard but weak incentives to devote extra attention to students who are already proficient or have little chance of becoming proficient in the near term.
Students who might just need a little extra push to get to the passing mark are going to get any extra teaching effort that the testing system encourages, and may even draw effort that would otherwise have gone to students at the ends of the distribution. It seems that this problem at least would unite parents of the highest and lowest achievers in protest. Low-achieving students are left behind, and high-ability students make no gains either. This system is clearly not beneficial to anyone except the marginal passers, and it ensures that low-achieving students never have an opportunity to catch up.
The continual process of raising the standards only worsens the distribution problem. In their model, an increase in the proficiency standard necessarily increases the number of high-ability students receiving extra attention, thus decreasing the number of low-achieving students receiving extra attention.
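The mechanics are easy to see in a toy sketch (my own illustration, not Neal and Schanzenbach’s actual model; the function name, scores, and slot counts are all made-up assumptions): if a school with limited tutoring slots targets the students nearest below the cutoff, raising the cutoff just moves the help up the distribution, and the lowest scorers never get it either way.

```python
def allocate_tutoring(scores, cutoff, slots):
    """Toy rule: give the limited tutoring slots to the students
    closest to, but still below, the passing cutoff (the 'bubble kids')."""
    below = [s for s in scores if s < cutoff]
    below.sort(key=lambda s: cutoff - s)  # nearest to passing first
    return set(below[:slots])

scores = [10, 20, 30, 40, 50, 60, 70, 80, 90]

# With a cutoff of 50, the two slots go to the students scoring 40 and 30.
print(allocate_tutoring(scores, cutoff=50, slots=2))  # {40, 30}

# Raise the cutoff to 70 and the slots shift up, to 60 and 50.
print(allocate_tutoring(scores, cutoff=70, slots=2))  # {60, 50}

# Under either cutoff, the students scoring 10 and 20 get nothing.
```

The lowest achievers are too far from the line to be worth the school’s marginal effort, exactly the incentive gap Mark’s organization steps into.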
The study was also repeated with low-stakes testing, where the individual student may have had something to gain by passing (not going to summer school), but the school had little to gain. The lopsided distribution of effort didn’t appear in these cases.
Derek Neal and Diane Whitmore Schanzenbach. 2010. “Left Behind by Design: Proficiency Counts and Test-based Accountability.” Review of Economics and Statistics 92(2): 263–283.
In the midst of my paper-reading/grading marathon over the weekend, I expressed some frustration on Twitter and got some pretty wonderful responses from friends. In particular, one friend who runs a non-profit in DC sent me an immediate gchat: “I believe in you; you can do it.” It managed to snap me out of it and put a smile on my face, but then it also morphed into a discussion about the quality of students’ writing. Mark’s contention was that writing skills have in fact declined over time, largely because composition, grammar, and spelling aren’t emphasized any longer in school curricula. It’s not tested, so it’s not taught. I confessed my inability to make a claim about the decline given my limited tenure as a teacher and lack of good comparisons. I think I’m a pretty good writer.
This resulted in Mark calling me arrogant, so I had to laugh a little when Mark’s recent blog post for Reach, Inc. had an arrogance-related title. But the post also brings up another really important point regarding incentives and testing in schools.
It is true that incentives are not aligned to support the work we do. If a student comes to Reach reading in the 5th percentile, he or she can make 2-3 years of reading growth and still be labeled a failure on standardized tests. This means, in an environment with limited resources, it actually doesn’t make sense for a school to invest in that child’s learning. The incentives push schools to focus on those students that can go from failing to passing.
I’ll admit that I’m only cursorily familiar with the practices and rewards of the public school system and testing, but I am pretty sure that we haven’t gotten it right yet. A system that rewards or punishes based on the mean, the median, or a dichotomous pass/fail, and ignores distribution and progress, is necessarily going to leave a lot of students behind. As Mark suggests, it makes it near impossible for individual students to catch up, not only because it’s hard work, but because there’s little immediate reward for stakeholders to do the pushing. It works the same way with writing. There’s not a good way to test writing, so we don’t test it, and thus it’s not emphasized in school, leading to worse outcomes in writing.
Mark’s work reminded me of a paper I saw presented at CU this winter. In an RCT in Togo (or Benin? The researcher was from one of those and did the work in the other), an experiment was set up to see how different incentive schemes could reward cooperation in studying for standardized tests, and how that affected outcomes for students from different parts of the ability distribution. The results make cooperation look pretty good. I, of course, cannot remember the job candidate’s name or the title of the paper, but I’m going to find it. Don’t worry.