In Multiple Measures Done Right, John Cronin discusses seven principles of coherent assessment systems. Principle 6 is that the metrics and incentives used encourage a focus on all learners. As we move into ESSA implementation, it seems appropriate to dig into that one.
When No Child Left Behind (NCLB) was introduced, it was about success for all students. In practice, however, the incentives didn’t focus on all of them. Proficiency was the goal, and even though we talked about moving all students to proficiency, we may have focused more on “bubble kids” – those students who were so close to being proficient that a little extra help or focus might move them across the line (or so we thought). In a webinar on these principles, Dr. Cronin explained, “If the metric by which you are evaluating the school was improvement in the proportion of students who were proficient, the problem that you would have is that that metric is only impacting about 15% of all students in the school system.” How might we consider data that impact the other 85% of students?
Think about it for a minute. In your school, which data or metric gets the most focus? What gets used for goal setting? What gets talked about most often in data conversations? Which metric is used in teacher evaluation? Then consider: does this same metric include all learners? Does the metric give you information or goals geared only toward proficiency? Now here are a couple of big questions:
- What metric are you using that guarantees you are looking at growth for every single learner regardless of his or her proficiency status?
- Which metric drives decision making?
Do you look at and talk about trend data for individual students? Do you include program participation data, as well as student achievement data? What role do attendance, discipline, and data about extracurricular activities play in these conversations? Using actionable data that represent all students as metrics in your decision making is key.
As an example, in our district AVID (Advancement Via Individual Determination) was the program we wanted to evaluate. We collected a variety of metrics, including state test scores, end-of-course grades, district assessment scores, attendance, course schedules, ACT scores, AP course participation, and discipline. We looked at proficiency, and we looked at growth. One trend we noticed was that the number of students enrolled in AVID dropped from their initial enrollment in 9th grade through their senior year. We put together a survey to find out why. As you might imagine, the reasons ranged from students no longer needing the supports to having no time in the schedule. We also discovered that some students were growing, but not all, partly because the metrics we focused on were those that spoke to proficiency. NCLB also taught us that what gets measured gets attention. Which of these metrics, or what other metrics, might have provided us key information on an ongoing basis? How might we have better met the needs of all students in that program?
Measuring proficiency means determining what a student knows at one point in time, such as the end of the year, and comparing the student’s score with a predetermined cut score. We could make an analogy here with the height signs on amusement park rides: if a child meets the height requirement, he or she may board the ride. Measuring growth answers a different question: between two points in time, such as the beginning of the year and the end of the year, how much did this student’s performance change? Changing our paradigm from a focus on proficiency to a focus on growth puts us in a position to truly talk about all kids and work with all kids.
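The contrast between the two models can be sketched in a few lines of Python. The student labels, scores, and cut score below are purely illustrative, not real assessment data:

```python
# A minimal sketch of the two measurement models described above.
# All values are hypothetical for illustration.

CUT_SCORE = 220  # hypothetical proficiency cut score

students = {
    # label: (fall_score, spring_score)
    "A": (215, 219),  # below the cut both times, but grew 4 points
    "B": (221, 221),  # proficient both times, no growth
    "C": (205, 218),  # still below the cut, but grew 13 points
}

for name, (fall, spring) in students.items():
    proficient = spring >= CUT_SCORE  # one point in time vs. a cut score
    growth = spring - fall            # change between two points in time
    print(f"Student {name}: proficient={proficient}, growth={growth}")
```

A proficiency-only metric would count only student B as a success; a growth metric also surfaces the real progress students A and C are making, even though neither crossed the cut score.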
Here’s another example. PD consultant Pat Reeder shares that in Flint, Michigan, all means all. This at-risk district has used MAP data and formative pedagogy to shape the growth of all students. Its data wall reflects every student in the school, and attention is given to all students at all times.
In Multiple Measures Done Right, three questions are provided to support districts in next steps:
- What three to five metrics drive decision making in our school or district? Is everything centered on the state assessment, or do we take advantage of other data sources?
- What behavior do those metrics incentivize, and are all students encompassed or required to improve in those metrics?
- Are programs including all kids, and are we sustaining participation of all kids over time? Revise the metrics you use to evaluate programs so that they reward not only improvements in performance, but also programs that are increasing the number and diversity of students participating.
We’d love to hear what is happening in your school or district as it relates to any of these action steps; tell us on Facebook or Twitter.
This blog post is part of a series on Multiple Measures Done Right: The Seven Principles of a Coherent Assessment System. Check out our on-demand webinar for more insights on building assessment systems that work, or the previous posts in this series: “Going for the Gold”; “How to ensure district goals drive your assessment selection”; “Why we need assessment literacy as part of teacher preparation”; and “Educators’ Superpowers; Activate!”