8 Questions Teachers Should Ask When Giving Assessments


In a blog post earlier this year, Justin Tarte offered 10 questions to ask yourself before giving an assessment. In a four-part blog series, we identified five characteristics of quality educational assessments. In this post, I thought it would be interesting to do a mash-up of the two and share the questions I’d ask myself before giving an assessment.

1. What’s the purpose of the assessment? How will the results be used and by whom?

Purpose and use top the list when it comes to assessment. If we keep in mind that the student is at the heart of all assessment, then all assessment should support student learning. The SAGE Handbook of Research on Classroom Assessment has a quote on page 97 that I often use: “The primary purpose of assessment is not to measure but to further learning.”

2. How can students use the assessment as a learning tool and teachers use it as a support for learning?

This is the essence of consequential relevance. When educators spend precious instructional time administering and scoring assessments, the utility of the results should be worth the time and effort spent. Educators want to understand the results and use them to meaningfully adjust instruction and better support student learning.

Of course, if we’re talking about formative assessment practices – the day-to-day and minute-to-minute kind – then the assessment is not one of learning, but for learning. These are the types of assessments that teachers can use regularly to elicit evidence of student learning and identify the changes in their teaching needed to move students forward.

3. What role will students play in the design of the assessment or the assessment process?

Research shows that when students help develop questions for an assessment, and have a deeper understanding of what they are expected to learn before they take it, they take greater responsibility for their own learning. And this makes sense; the activity helps students better understand what teachers expect them to know, understand, or be able to do, as well as what constitutes a proficient performance. This, in turn, enables students to support each other, accurately and appropriately evaluate their learning against shared expectations, and make any necessary adjustments.

4. How valid will the assessment be?

Aligning the assessment to the learning targets, objectives, and goals, and to the way those were taught, is important, as is determining whether one target, objective, or goal – or several – will be measured in a single assessment. Validity allows both students and teachers to make inferences about what students know, understand, and can do. Assessing what was taught, in the manner it was taught and learned, produces stronger inferences.

The next level where content validity matters is the assessment experience itself: when the student sits down to take the assessment, what test questions, or items, do they see? In a fixed-form, grade-level test, most or all students see the same item set, namely items assessing the grade-level standards to which the student is assigned. In a cross-grade, computer adaptive test, such as MAP, an item selection algorithm presents each student with items from a broad range of standards, from across grades, and adapts to the in-the-moment performance of the test taker. Each student sees items at the difficulty level that’s appropriate for them, based on their previous responses. This model of adaptivity provides precise information about a student’s learning and performance in a domain area.
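For readers curious how that adaptivity can work in principle, here is a minimal, illustrative sketch of an adaptive item-selection loop. It is not NWEA’s actual MAP algorithm – operational adaptive tests use item response theory, content blueprints, and other constraints – and the item pool, step size, and scoring rule below are assumptions made only for illustration.

```python
# Illustrative sketch of an adaptive item-selection loop (NOT the MAP algorithm).
# Assumption: after a correct answer the difficulty estimate moves up, after an
# incorrect one it moves down, and the next item served is the unused item
# closest in difficulty to the current estimate.

from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    standard: str      # the standard the item assesses (may be off-grade)
    difficulty: float  # placement on a common cross-grade scale

def next_item(pool, estimate, administered):
    """Pick the unused item whose difficulty is closest to the current estimate."""
    candidates = [i for i in pool if i.item_id not in administered]
    return min(candidates, key=lambda i: abs(i.difficulty - estimate))

def run_adaptive_test(pool, respond, start_estimate=0.0, length=10, step=0.5):
    """Administer up to `length` items, nudging the estimate after each response.

    `respond(item)` should return True for a correct answer, False otherwise.
    """
    estimate, administered = start_estimate, set()
    for _ in range(min(length, len(pool))):
        item = next_item(pool, estimate, administered)
        administered.add(item.item_id)
        correct = respond(item)
        # Adapt to in-the-moment performance: harder items follow a correct
        # answer, easier items follow an incorrect one.
        estimate += step if correct else -step
    return estimate  # rough estimate of the student's performance level
```

A real adaptive test replaces the fixed step with a statistical model and adds content-coverage rules, but the core loop – select an item, record the response, update the estimate, repeat – is the same idea described above.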

5. What kinds of questions will provide information on what students know and don’t know, and where they need to go next?

In an interim assessment, question types will allow students to demonstrate achievement at different depths of knowledge, from simple recall to more complex sense-making to building a case using evidence.

With formative assessment practice, this information comes less from the test questions themselves and more from the way the teacher poses a question and reviews the answers. If the entire class is required to complete an exit ticket, for example, the teacher can quickly determine what students know (or don’t) and what they need to do to reach the next phase of that particular learning.

6. Will you provide multiple assessment formats from which students may choose?

This question relates more to student choice in demonstrating what they know and can do than to responding to a “test.” It may also prompt us to consider the topic of fairness: does each student have the same chance to show what they understand, know, or can do?

Accessibility (part of that fairness) in educational assessment translates into the tools, assists, devices, and accommodations that are allowed so that students can either take the same test as their peers or have an equivalent assessment experience. At the classroom level, teachers are acutely aware when issues of accessibility arise due to linguistic, physical, cognitive, or emotional capabilities. In a school ecosystem, there are teams of support providers – including classroom and special education teachers, tutors, school psychologists, case workers, and social services personnel – focused on ensuring that students have equal access to the same educational opportunities as their peers.

7. How will students use the assessment to verify their self-assessment and monitor their progress toward the targets/goals/objectives?

Engagement and motivation are not the same thing. Engagement can be observed through response patterns, word count, time spent on an item, and so on, but motivation can only be inferred. A student demonstrates engagement because of his or her motivation, and a student can be motivated to do his or her best in an assessment experience for a variety of reasons, generally organized into an external/internal schema. External motivation to demonstrate engagement might come from prizes or rewards tied to assessment performance or, conversely, from punishments or “consequences.” Internal motivation to demonstrate engagement might come from the desire to do well on the test, to garner positive attention or praise, or from a competitive urge to outperform peers.

8. Does the investment of time in preparing, administering and scoring the assessment pay off for both students and teachers?

Assessment data can help answer all kinds of questions and inform a variety of decisions, ranging from whether a student has shown proficiency on a state summative exam, to whether students have performed well enough to earn college credit via an AP exam, to whether and how much each student has grown during different intervals of the academic year. The usefulness of an educational assessment resides in the utility of its data to help inform decisions, and for that, you have to understand what the data mean. Some questions to help you get there include: What exactly did the assessment measure? What didn’t it measure? Given that, what can the data tell you? What can’t the data tell you? What kinds of inferences can be made from the data? And what kinds of decisions can the data reasonably inform?

I once heard Rick Stiggins ask, “What assessment could you give today that they wouldn’t want to miss?” I suppose the answer to that question is the one we’re all ultimately seeking.

If you’re an educator, what kinds of questions would you ask before administering an educational assessment? And if you’re a student, what would your ultimate assessment look like?


Kathy Dyer

Kathy Dyer is a Sr. Curriculum Specialist for NWEA, designing and developing learning opportunities for partners and internal staff. Formerly a Professional Development Consultant for NWEA, she coached teachers and school leadership and provided professional development focused on assessment, data, and leadership. In a career that includes 20 years in the education field, she has also served as a district achievement coordinator, principal, and classroom teacher. She received her Master’s in Educational Leadership from the University of Colorado Denver.
