In a recent webinar discussing the promise of the assessment consortia, Joan Herman, Director of the Center for Research on Evaluation, Standards, and Student Testing (CRESST) at UCLA, and Chris Minnich, Executive Director of CCSSO, stated that consortia assessments would drive the implementation of the Common Core State Standards (CCSS). Herman noted that historically, tests have driven curriculum. Basing her judgment only on the consortia’s plans, Herman was optimistic, though cautious, that they would offer tests worth teaching to.
The notion of “tests worth teaching to” is interesting and got me thinking back to the days before NCLB in Maryland. During the 1990s and through 2002, Maryland had the Maryland School Performance Assessment Program (MSPAP), an assessment program billed as an educational assessment worth teaching to. In fact, MSPAP was instituted—at least as the narrative goes—to promote good instruction. MSPAP consisted of a series of performance assessment tasks administered over a week to third, fifth, and eighth graders that were designed to assess:
1. How well students solved problems cooperatively and individually.
2. How well students applied what they learned to real world problems.
3. How well students could relate and use knowledge from different subject areas.
A typical MSPAP task might involve a science experiment with data collection conducted in small groups. Students would individually perform calculations, make graphs, and finally write a response or responses to explain their results and their thinking. The whole task might be parsed into a science score, a math score, and a writing score. Thus this educational assessment promoted instruction that focused on hands-on group work with individual accountability based around real world problems.
Though individual students were not given scores on MSPAP, schools were. Scores were widely reported in newspapers, with schools ranked. MSPAP became very high stakes for administrators and teachers. Soon a cottage industry developed around MSPAP preparation and success, and MSPAP workbooks appeared from a variety of publishers. The net effect of all the MSPAP success work was rote and formulaic instruction—the exact opposite of the intended effect of the program. For example, writing in response to reading was reduced to a mnemonic-cued response: RACE. Students were taught to Restate the question, Answer the question, Cite evidence from the text, and Extend the answer to their life. Students were taught to write the four-sentence response rather than really engage with the question.
Writing was quickly reduced to RACE. Great debates arose about whether the R was unnecessary and students need only ACE their writing. These kinds of debates only make sense when you step back and look at the context—MSPAP success was critical, and teachers would do whatever was necessary to assure good scores. Most districts covered their classroom walls with posters aimed at MSPAP success, like RACE/ACE posters for writing or TAILS posters for making graphs. Though mnemonics may be useful aids, it is a problem when they become the goal of instruction. Though there was much to like about MSPAP, there was just as much to dislike, particularly this workbook-driven, formulaic approach to instruction. Whenever anyone mentions the idea of educational assessments worth teaching to, I think back to my MSPAP experiences and sigh.
The second part of Tests Worth Teaching To should be posted soon, so come on back.