Process-Focused Learning
Assess how students think, not just what they produce
Overview
Generative AI can draft essays, analyze data, and write code in seconds. That shift is changing what it means to do knowledge work, and it raises a practical question: if AI can produce polished outputs, what should we actually be assessing?
Process-focused learning offers one answer. Rather than evaluating only what students hand in, it asks you to look at how they get there: their reasoning, their decisions, and the thinking behind their work. This isn’t a new idea. Decades of research show that when students reflect on their learning processes, they develop stronger metacognitive skills and retain what they learn longer (Zimmerman, 1989).
What’s new is the urgency. In a world where AI can generate convincing final products, the learning happens in the process, not the deliverable. Process-focused assessment helps you see that learning and gives students less incentive to shortcut it.
This guide walks through what process-focused learning looks like in practice, how it differs from more familiar outcome-focused approaches, and concrete strategies you can adapt to your own courses.
In this guide:
- What is process-focused learning?
- Outcome-focused vs. process-focused: a comparison
- Four strategies for process-focused assessment
- Tips for success and pitfalls to avoid
- Practice in action
- References
What is process-focused learning?
Most assignments ask students to demonstrate what they know through a final product: a paper, a presentation, a problem set. That product-based approach works well for measuring knowledge acquisition, but it reveals little about the thinking that produced it.
Process-focused learning shifts some of that attention from the destination to the journey. It asks students to make their reasoning visible, whether through reflection, iteration, or documentation of their choices. The goal is to treat thinking processes as worth assessing in their own right and in connection with final deliverables.
This approach has always been valuable, but generative AI makes it newly relevant. When students can generate polished outputs with minimal cognitive engagement, the process becomes where learning actually happens. As Bowen and Watson (2024) argue, educators need to focus on the thinking skills students bring to their work, not just the artifacts they produce.
Outcome-focused vs. process-focused: a comparison
In practice, most assignments blend elements of both approaches. The table below highlights the key differences. Think of it as a menu: you might incorporate one or two process-focused features into an existing assignment rather than redesigning everything at once.
| Dimension | Outcome-focused | Process-focused |
| --- | --- | --- |
| Assessment frequency | One summative assessment | Multiple low-stakes checkpoints |
| Stakes | High (major grade impact) | Lower (emphasis on growth) |
| What students submit | Final product only | Product plus rationale, process notes, or reflection |
| Purpose of feedback | Justify the grade | Guide improvement |
| Who evaluates | Instructor | Instructor, self, peers, or external stakeholders |
| Assessment criteria | Quality of the product | Quality of the product and the skills demonstrated |
| Self-reflection | Optional or absent | Central component |
Four strategies for process-focused assessment
The following strategies can help you shift attention toward learning processes. You don’t need to adopt all of them. Start with one that fits naturally into an assignment you’re already planning.
1. Build in iteration and feedback loops
Break larger assignments into stages, with opportunities for feedback at each step. A research paper might move from topic proposal to annotated bibliography to rough draft to final submission, with instructor or peer feedback along the way. This mirrors how professionals actually work: writers revise, engineers prototype, designers iterate. When students receive feedback midstream and apply it to subsequent versions, they develop transferable skills.
Feedback loops benefit you too. As Sadler (1989) noted, the only way to know whether feedback leads to learning is for students to act on it. When you see how students respond to your comments, you learn what’s working and what needs clarification.
2. Add a reflection or rationale component
Ask students to explain their thinking alongside their work. This could be a brief cover memo describing their approach, a reflection on what they learned, or annotations explaining key decisions. The format matters less than making reasoning visible.
Sample reflection prompts:
- What strategies contributed most to your progress on this assignment?
- Where does your work meet the criteria, and where does it fall short?
- If you used AI tools, how did you use them, and what did you learn about prompting effectively?
- What would you do differently next time?
These questions move students from passive submission to active self-assessment. They also give you a window into how students approached the work, which can be especially valuable when AI is part of the picture.
3. Design assignments that make AI use visible
If you allow or encourage AI use, build in transparency requirements. Ask students to document their prompts, evaluate the AI’s output, and explain how they refined or extended it. This reframes AI from a potential shortcut into a tool for developing critical evaluation skills.
For example, you might ask students to prompt an AI to take a position on a contested issue, then argue the opposing view. Or have them generate practice exam questions and evaluate whether the questions match course content. Learning happens in evaluation and iteration, not in the initial output.
When AI is part of the assignment, consider asking students to include:
- What tools they used and why
- Their prompts and the AI’s responses
- How they evaluated and modified the output
- What they learned about effective AI use
4. Assess skills alongside content
Traditional rubrics focus on final product quality. Process-focused rubrics add criteria for skills like planning, reasoning, evidence use, and collaboration. This signals that you value how students work, not just what they produce.
You don’t need to assess every skill on every assignment. Pick one or two that matter most for the learning goals at hand. A research assignment might include criteria for source evaluation. A group project might assess collaboration. A design challenge might look at how students incorporated feedback.
Sample skills criteria for a rubric:
- Plans and organizes work effectively
- Explains reasoning behind key decisions
- Uses evidence to support claims
- Incorporates feedback to strengthen work
- Uses AI tools in accordance with course guidelines
Tips for success and pitfalls to avoid
Start small. You don’t need to redesign your entire course. Add a reflection component to one assignment, or break a major project into two checkpoints with feedback. See what works before expanding.
Be explicit about what you’re assessing. If you’re assessing the process, students need to know. Share rubric criteria that include skills and reflection, not just content.
Keep feedback manageable: iterative assignments can create grading overload if you’re not strategic. Consider peer feedback, self-assessment, or brief check-ins rather than detailed comments at every stage. In large courses, consider reviewing samples and providing feedback to the class as a whole.
Avoid assessing reflection based on whether students agree with you. The goal is honest self-assessment, not performance of insight. Grade on thoughtfulness and specificity, not on reaching particular conclusions.
Watch for checkbox compliance. If reflections become formulaic, rethink your prompts. Effective reflection prompts ask students to connect their process to specific outcomes or decisions, not just describe what they did.
References
Bowen, J. A., & Watson, C. E. (2024). Teaching with AI: A practical guide to a new era of human learning. Johns Hopkins University Press.
Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Routledge.
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119–144.
Zimmerman, B. J. (1989). A social cognitive view of self-regulated academic learning. Journal of Educational Psychology, 81(3), 329–339.