This is the sixth in a series about building trajectory-based performance tasks. Stay tuned for follow-up posts, and watch our on-demand webinar on this topic.
In our last post, we explored why complex tasks are especially important in science and the learning sciences. Ready to start designing trajectory-based performance tasks for your students? Here’s how.
Step 1: Define what should be measured
The first step is to determine exactly what will be measured. Earlier in this blog series, we talked about starting with a big idea. The complex, integrated skills you’ll measure will allow students to show mastery of the big idea by the end of the school year.
Step 2: Build the trajectory
Next, unpack standards and indicators to create a trajectory. The trajectory should be based on related, grade-level standards or indicators, as well as those from lower and upper grades, coupled with research on how reasoning demands increase across these gradations of standards and indicators.
In this step, parse the standards and trajectory to describe the content of the input and the content of the output needed to make inferences about how students grow in their skills over time. The input is the content connected to the phenomena; the output is the evidence students produce.
Step 3: Choose the phenomena
Each stage in the trajectory needs a phenomenon. We’ve found that a single phenomenon can be explored in different ways to elicit the intended evidence of where students are in their learning. Once you have an understanding of what will be assessed, brainstorm about the phenomena.
It may seem easier to start with a great phenomenon and then think about what could be measured using it, but such an approach can run into trouble. The phenomenon may not be rich enough to provide the depth or details needed to explore where students are in their trajectory of learning. (If your goal is to develop a phenomenon for only a single stage or two, this may be okay.)
A richer phenomenon allows for a more complex story to be developed. That complex story can support multiple waypoints designed to elicit student thinking. The waypoints can be designed to encourage the novice thinker and challenge the more advanced thinker. Such waypoints can be useful formative opportunities to (a) understand where students are currently and (b) give students practice opportunities to model or explain their thinking.
Thus, we suggest seeking phenomena that provide opportunities to elicit evidence of learning from at least two consecutive stages along a trajectory. Identifying the content inputs needed to support the trajectory can help you establish which features of a phenomenon are necessary to measure students at different learning stages, each with learning targets based on the skills they must develop to reach proficiency.
Step 4: Identify the phenomena features
A feature is an attribute of the phenomenon or task that can be altered to make it more appropriate for each level of difficulty. For example, if you want to anchor learning based on inquiry that leads to constructing explanations, research shows you should:
- Look for an observable event within a disciplinary core idea, such as The Universe and Its Stars or Growth and Development of Organisms, for which one or two models or theories are reasonable current explanations. If just one model or theory will be offered, provide information in a table or other form for which that explanation provides a coherent and comprehensive account, plus additional information for which it does not, as Melissa Braaten and Mark Windschitl suggest and as Katherine McNeill and Joseph Krajcik also support. If two models or theories are offered, look for information that can be placed in a table or other form for which only one model or theory provides a coherent and comprehensive account, plus additional information for which both do
- Consider the complexity of the data students will investigate or analyze. Should the data be highly explicit or have some variance? Thinking through how explicit versus subtle patterns can influence building theories is important to understanding when students can make predictions based on patterns and why they can create explanations on one occasion but not another
Items based on phenomena with such features ask students to apply scientific reasoning to show why multiple sources of information are consistent with or support a model or theory. When you are purposefully working to grow student skills, a phenomenon with one theory will likely be less difficult for students to analyze than one with two. Thus, sequencing the presentation of phenomena based on feature attributes (the input to the student) is a central component of creating trajectory-based performance tasks.
Step 5: Rely on research evidence
Research is critical during this process. Making sure that basic facts are correct is essential because the goal is to connect to the real world and facilitate deeper learning as part of the work. Many times, a task developer has been sure that they understood the phenomenon, only to discover that recent research (or even old research) invalidated their entire premise! Sadly, this is not an isolated occurrence. Science adapts with new evidence. Research the topic.
Step 6: Develop the causal chain
Once a phenomenon has been selected, the intended waypoints established, and research conducted, you’re ready to develop the causal chain.
The causal chain of events is the sequence of exploration that students follow. In a standardized assessment, a more rigid chain of items is often necessary to lead the students to a conclusion. In the classroom, this can be much more freeform, allowing students to explore and provide more detailed modeling and reasoning evidence. Here are some considerations:
- Should students have choice in how they explore?
- Should they have a say in what they explore?
- Can they decide how they show what they know?
These questions are important. Students with more novice understandings of science may need more scaffolded and explicit causal chains to make an inference that a recurring pattern may signal a cause-and-effect relationship. You might need to ensure students have some critical information or truly understand a skill before applying it in a new situation. Students with more advanced understanding may find those explicit steps unnecessary and burdensome.
As you map out these pieces, you will also need to monitor whether the causal chain leads from a student’s base knowledge through a sufficient series of opportunities to engage with the disciplinary core ideas, science and engineering practices, and crosscutting concepts to arrive at the correct conclusion or an accumulation of evidence to support an argument. Engaging in this process before you begin writing the task parts will save you hours, because you won’t get stuck trying to massage already-written tasks and items into fitting. It’s better to ensure the fit and the 3D standard interaction points first!
If you’re interested in designing your own trajectory-based tasks for the Next Generation Science Standards (NGSS), remember to:
- Define your big idea and document the evidence you will use to understand if students are growing before you explore and research the phenomena. It will make the process more efficient
- Ensure the waypoints that show differentiators in student thinking are identified, which allows you to purposefully target and direct the causal chain to those points
- Once you’re developing the tasks, give students opportunities to have choice in showing what they understand
Developing trajectory-based performance tasks is complex, but evidence from the field suggests students find the opportunities to think like a scientist engaging. This only serves to increase student motivation, which can help them be successful students of science—in your classroom and throughout their academic career.
Paul Nichols coauthored this post. He joined NWEA as director of Assessment Design in 2019. Earlier in his career, he worked at ACT, the Center for Next Generation Learning and Assessment at Pearson, and the National Center for the Improvement of Educational Assessment. Dr. Nichols specializes in writing about and researching the design of assessments targeting complex thinking. He holds a PhD and an MA in educational psychology from the University of Iowa.