Course-level assessment is a process of systematically examining and refining the fit between the course activities and what students should know at the end of the course.
Conducting a course-level assessment involves considering whether all aspects of the course align with each other and whether they guide students to achieve the desired learning outcomes.
“Assessment” refers to a variety of processes for gathering, analyzing, and using information about student learning to support instructional decision-making, with the goal of improving student learning. Most instructors already engage in assessment processes all the time, ranging from informal (“hmm, there are many confused faces right now; I should stop for questions”) to formal (“nearly half the class got this quiz question wrong; I should revisit this concept”).
When approached in a formalized way, course-level assessment can be a practical process embedded within course design and teaching that provides substantial benefits to instructors and students.
The course assessment cycle, illustrated above, helps instructors identify areas in which students excel in the current course design, and others in which they may struggle. This allows the instructor to reallocate time from easier skills or topics to more challenging ones, and to design activities that guide and support students’ learning where they need it most.
As the process is repeated over several semesters, it can help instructors find a variety of pathways to designing more equitable courses, in which more learners develop greater expertise in the skills and knowledge most important to the discipline or topic of the course.
Differentiating Grading from Assessment
“Assessment” is sometimes used colloquially to mean “grading,” but there are distinctions between the two. Grading is a process of evaluating individual student learning for the purposes of characterizing that student’s level of success at a particular task (or the entire course). The grade of an assignment may provide feedback to students on which concepts or skills they have mastered, which can guide them to revise their study approach, but may not be used by the instructor to decide how subsequent class sessions will be spent. Similarly, a student’s grade in a course might convey to other instructors in the curriculum or prospective employers the level of mastery that the student has demonstrated during that semester, but need not suggest changes to the design of the course as a whole for future iterations.
In contrast to grading, assessment practices focus on determining how many students achieved which course learning outcomes, and to what level of mastery, for the purpose of helping the instructor revise subsequent lessons or the course as a whole for future terms. Since final course grades may include participation points and aggregate student mastery of all course learning objectives into a single measure, they rarely clarify which elements of the course have been most or least successful in achieving the instructor’s goals. Differentiating assessment from grading allows instructors to plot a clear course toward the changes that will have the greatest impact in the areas they define as most important, based on the assessment results.
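To make the distinction concrete, here is a minimal sketch of the two views of the same gradebook. All student names, score scales, and point values are hypothetical, invented for illustration: a single course grade blends everything together, while the assessment view asks how many students reached mastery on each outcome.

```python
# Hypothetical gradebook: per-student rubric scores (0-4) on each course
# outcome, plus participation points. All names and numbers are illustrative.
students = {
    "A": {"outcome_1": 4, "outcome_2": 1, "participation": 10},
    "B": {"outcome_1": 3, "outcome_2": 2, "participation": 9},
    "C": {"outcome_1": 4, "outcome_2": 1, "participation": 10},
}

# A single course grade aggregates everything into one number...
def course_grade(record):
    return record["outcome_1"] + record["outcome_2"] + record["participation"]

# ...while an assessment view asks: how many students reached mastery
# (score >= 3) on each outcome?
def mastery_rate(outcome):
    reached = sum(1 for r in students.values() if r[outcome] >= 3)
    return reached / len(students)

print({name: course_grade(r) for name, r in students.items()})
print(mastery_rate("outcome_1"))  # 1.0 -> outcome 1 is going well
print(mastery_rate("outcome_2"))  # 0.0 -> outcome 2 needs course revision
```

Here the course grades look similar for everyone, yet the per-outcome view reveals that one outcome is being met universally while another is being missed universally, which is exactly the information that guides course revision.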
Course learning outcomes are measurable statements that describe what students should be able to do by the end of a course. Let’s parse this statement into its three component parts: student-centered, measurable, and course-level.
First, learning outcomes should focus on what students will be able to do, not what the course will do. For example:
- “Introduces the fundamental ideas of computing and the principles of programming” says what a course is intended to accomplish. This is perfectly appropriate for a course description but is not a learning outcome.
- A related student learning outcome might read, “Explain the fundamental ideas of computing and identify the principles of programming.”
Second, learning outcomes are measurable, which means that you can observe the student performing the skill or task and determine the degree to which they have done so. This does not need to be measured in quantitative terms—student learning can be observed in the characteristics of presentations, essays, projects, and many other student products created in a course (discussed more in the section on rubrics below).
To be measurable, learning outcomes should not include words like understand, learn, and appreciate, because these qualities occur within the student’s mind and are not observable. Rather, ask yourself, “What would a student be doing if they understand, have learned, or appreciate?” For example:
- “Learners should understand US political ideologies regarding social and environmental issues,” is not observable.
- “Learners should be able to compare and contrast U.S. political ideologies regarding social and environmental issues,” is observable.
Finally, learning outcomes for course-level assessment focus on the knowledge and skills that learners will take away from a course as a whole. Though the final project, essay, or other assessment that will be used to measure student learning may match the outcome well, the learning outcome should articulate the overarching takeaway from the course, rather than describing the assignment. For example:
- “Identify learning principles and theories in real-world situations” is a learning outcome that describes skills learners will use beyond the course.
- “Develop a case study in which you document a learner in a real-world setting” describes a course assignment aligned with that outcome but is not a learning outcome itself.
Identify and Prioritize Your Higher-Order End Goals
Course-level learning outcomes articulate the big-picture takeaways of the course, providing context and purpose for day-to-day learning. To keep the workload of course assessment manageable, focus on no more than 5-10 learning outcomes per course (McCourt, 2007). This limit is helpful because each of these course-level learning objectives will be carefully assessed at the end of the term and used to guide iterative revision of the course in future semesters.
This is not meant to suggest that students will only learn 5-10 skills or concepts during the term. Multiple shorter-term and lower-level learning objectives are very helpful to guide student learning at the unit, week, or even class session scale (Felder & Brent, 2016). These shorter-term objectives build toward or serve as components of the course-level objectives.
Bloom’s Taxonomy of Educational Objectives (Anderson & Krathwohl, 2001) is a helpful tool for deciding which of your objectives are course-level, which may be unit- to class-level objectives, and how they fit together. This taxonomy organizes action verbs by complexity of thinking, resulting in the following categories: remembering, understanding, applying, analyzing, evaluating, and creating.
Download a list of verbs organized into Bloom’s Taxonomy.
Download a list of sample learning outcomes from a variety of disciplines.
Typically, objectives at the higher end of the spectrum (“analyzing,” “evaluating,” or “creating”) make ideal course-level learning outcomes, while those at the lower end (“remembering,” “understanding,” or “applying”) work well as component parts at the day, week, or unit level. Lower-level outcomes that do not contribute substantially to students’ ability to achieve the higher-level objectives may fit better in a different course in the curriculum.
Consider Involving Your Learners
Depending on the course and the flexibility of the course structure and/or progression, some educators spend the first day of the course working with learners to craft or edit learning outcomes together. This practice of giving learners an informed voice may lead to increased motivation and ownership of learning.
Alignment, where all components work together to bolster specific student learning outcomes, occurs at multiple levels. At the course level, assignments or activities within the course are aligned with the daily or unit-level learning outcomes, which in turn are aligned with the course-level objectives. At the next level, the learning outcomes of each course in a curriculum contribute directly and strategically to programmatic learning outcomes.
Alignment Within the Course
Since learning outcomes are statements about key learning takeaways, they can be used to focus the assignments, activities, and content of the course (Wiggins & McTighe, 2005). Biggs & Tang (2011) note that, “In a constructively aligned system, all components… support each other, so the learner is enveloped within a supportive learning system.”
Refining alignment is an iterative process, and alignment can nearly always be enhanced. As you design a course or learning experience, check for gaps: Have you articulated outcomes that are not represented in content, activities, and assessments? Are you including content, activities, and assessments that do not map well to outcomes? Answers to these questions can help you make conscious choices about which outcomes, concepts, or activities to retain or add to a course and which to remove or shift to another learning experience in the program.
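The gap-check questions above can be sketched as a simple alignment map. The outcome labels and activity names below are hypothetical placeholders; the point is the two checks: outcomes no activity or assessment addresses, and activities that map to no stated outcome.

```python
# Hypothetical alignment map: which course outcomes each activity or
# assessment addresses. All labels are illustrative placeholders.
outcomes = {"O1", "O2", "O3"}
coverage = {
    "team_project": {"O1", "O2"},
    "weekly_quizzes": {"O1"},
    "reflection_essay": set(),  # engaging, but maps to no stated outcome
}

# Check 1: outcomes that no activity or assessment addresses.
covered = set().union(*coverage.values())
gaps = outcomes - covered

# Check 2: activities or assessments that map to no outcome.
orphans = [name for name, mapped in coverage.items() if not mapped]

print("unassessed outcomes:", gaps)     # {'O3'}
print("unmapped activities:", orphans)  # ['reflection_essay']
```

An unassessed outcome suggests adding content, activities, or assessment for it (or moving it to another course); an unmapped activity prompts a conscious choice to articulate the outcome it serves, or to remove it.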
For example, for the learning outcome, “learners should be able to collaborate effectively on a team to create a marketing campaign for a product,” the course should: (1) intentionally teach learners effective ways to collaborate on a team and how to create a marketing campaign; (2) include activities that allow learners to practice and progress in their skillsets for collaboration and creation of marketing campaigns; and (3) have assessments that provide feedback to the learners on the extent that they are meeting these learning outcomes.
Alignment With Program
When developing your course learning outcomes, consider how the course contributes to your program’s mission/goals (especially if such decisions have not already been made at the programmatic level). If course learning outcomes are set at the programmatic level, familiarize yourself with possible program sequences to understand the knowledge and skills learners are bringing into your course and the level and type of mastery they may need for future courses and experiences. Explicitly sharing your understanding of this alignment with learners may help motivate them and provide more context, significance, and/or impact for their learning (Cuevas, Matveev, & Miller, 2010).
If relevant, you will also want to ensure that a course with NUpath attributes addresses the associated outcomes. Similarly, for undergraduate or graduate courses that meet requirements set by external evaluators specific to the discipline or field, reviewing and assessing these outcomes is often a requirement for continuing accreditation.
See our program-level assessment guide for more information.
Sharing course learning outcomes with learners makes the benchmarks for learning explicit and helps learners make connections across different elements within the course (Cuevas, Matveev, & Miller, 2010). Consider including course learning outcomes in your syllabus, so learners know what is expected of them by the end of a course and can refer to the outcomes as the term progresses. When educators refer to learning outcomes during the course before introducing new concepts or assignments, learners receive the message that the outcomes are important and are more likely to see the connections between the outcomes and course activities.
Formative assessment practices are brief, often low-stakes (minimal grade value) assignments administered during the semester to give the instructor insight into student progress toward one or more course-level learning objectives (or the day- to unit-level objectives that stair-step toward the course objectives). Common formative assessment techniques include classroom discussions, just-in-time quizzes or polls, concept maps, and informal writing techniques like minute papers or “muddiest points,” among many others (Angelo & Cross, 1993).
Refining Alignment During the Semester
While it requires a bit of flexibility built into the syllabus, student-centered courses often use the results of formative assessments in real time to revise upcoming learning activities. If students are struggling with a particular outcome, extra time might be devoted to related practice. Alternatively, if students demonstrate accomplishment of a particular outcome early in the related unit, the instructor might choose to skip activities planned to teach that outcome and jump ahead to activities related to an outcome that builds upon the first one.
Supporting Student Motivation and Engagement
Formative assessment and subsequent refinements to alignment that support student learning can be transformative for student motivation and engagement in the course, with the greatest benefits likely for novices and students worried about their ability to successfully accomplish the course outcomes, such as those impacted by stereotype threat (Steele, 2010). Take the example below, in which an instructor who sees that students are struggling decides to dedicate more time and learning activities to that outcome. If that instructor were to instead move on to instruction and activities that built upon the prior learning objective, students who did not reach the prior objective would become increasingly lost, likely recognize that their efforts at learning the new content or skill were not helping them succeed, and potentially disengage from the course as a whole.
Beyond allowing the instructor to make supportive refinements to instruction, formative assessment provides specific information to students about their learning and performance throughout the semester. Based on that information, students can tailor their own practice and study efforts, resulting in increasing success over time, and maintaining their motivation to engage in the course (Cauley & McMillan, 2010). The full “virtuous cycle” of formative assessment is represented in the figure above.
Artifacts for Summative Assessment
To determine the degree to which students have accomplished the course learning outcomes, instructors often assign some form of project, essay, presentation, portfolio, renewable assignment, or other cumulative final. The final product of these activities can serve as the “artifact” that is assessed. In this context, alignment is particularly critical: if this assignment does not adequately guide students to demonstrate their achievement of the learning outcomes, the instructor will not have concrete information to guide course design for future semesters. To keep assessment manageable, aim to design a single final assignment that creates space for students to demonstrate their performance on multiple (if not all) course learning outcomes.
Since not all courses are designed with a final assignment that allows students to demonstrate their highest level of achievement of all course learning outcomes, the assessment process can instead use the course assignment that represents the highest level of achievement students had an opportunity to demonstrate during the term. However, learning objectives that do not come into play during the final may be better categorized as unit-level, rather than course-level, objectives.
Direct vs. Indirect Measures of Student Learning
Some instructors also use surveys, interviews, or other methods that ask learners whether and how they believe they have achieved the learning outcomes. This type of “indirect evidence” can provide valuable information about how learners understand their progress but does not directly measure students’ learning. In fact, novices commonly have difficulty accurately evaluating their own learning (Ambrose et al., 2010). For this reason, indirect evidence of student learning (on its own) is not considered sufficient for summative assessment.
Together, direct and indirect evidence of student learning can help an instructor determine whether to bolster student practice in certain areas or whether to simply focus on increasing transparency about when students are working toward which learning outcome.
Creating and Assessing Student Work with Analytic Rubrics
One tool for assessing student work is the analytic rubric (shown below): a matrix of characteristics, with descriptions of what it looks like for student products to demonstrate those characteristics at different levels of mastery. Analytic rubrics are commonly recommended for assessment purposes, since they provide more detailed feedback to guide course design than holistic rubrics. Pre-existing analytic rubrics such as the AAC&U VALUE Rubrics can be tailored to fit your course or program, or you can develop an outcome-specific rubric yourself (Moskal, 2000 is a useful reference, or contact CATLR for a one-on-one consultation). The process of refining a rubric often involves multiple iterations of applying it to student work and identifying the ways in which it does or does not capture the characteristics representing the outcome.
Once you have selected or created appropriate analytic rubrics for each of your course outcomes, you can apply them to final student products to determine how many students achieved which learning outcomes, and to what degree. An instructor might perform this task independently or might enlist the help of a colleague or teaching assistant who can apply the rubric to the same student work and discuss any discrepancies (a process which nearly always results in higher fidelity to the rubric). In multi-section courses, all the course instructors may gather all sections’ student work together and apply the rubric to the larger pool of student work (after removing names and other personal student identifiers).
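Tallying rubric results across a pool of student work can be sketched in a few lines. The criteria names, mastery levels, and scores below are hypothetical, invented for illustration; the sketch assumes each artifact has already been scored on each outcome-aligned criterion and simply counts how many artifacts landed at each level.

```python
from collections import Counter

# Hypothetical rubric results: each anonymized artifact scored on two
# outcome-aligned criteria at one of four illustrative mastery levels.
scores = [
    {"collaboration": "proficient", "campaign_design": "developing"},
    {"collaboration": "exemplary", "campaign_design": "developing"},
    {"collaboration": "proficient", "campaign_design": "beginning"},
    {"collaboration": "developing", "campaign_design": "proficient"},
]

# Tally how many artifacts reached each mastery level, per criterion.
def tally(criterion):
    return Counter(artifact[criterion] for artifact in scores)

for criterion in ("collaboration", "campaign_design"):
    print(criterion, dict(tally(criterion)))
```

The resulting per-outcome distributions, rather than individual scores, are what feed the next step of the cycle: deciding which outcomes need revised assignments or additional practice in future terms.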
Summative assessment results can inform changes to any of the course components for subsequent terms. If students have underperformed on a particular course learning objective, the instructor might choose to revise the related assignments or provide additional practice opportunities related to that objective, and formative assessments might be revised or implemented to test whether those new learning activities are producing better results. If the final assessment does not provide sufficient information about student performance on a certain outcome, the instructor might revise the assessment guidelines or even implement a different assessment that is more aligned to the outcome. Finally, if an instructor notices during the assessment process that an important outcome has not been articulated, or would be more clearly stated a different way, that instructor might revise the objectives themselves.
For assistance at any stage of the course assessment cycle, contact CATLR for a one-on-one or group consultation.
Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How learning works: Seven research-based principles for smart teaching. San Francisco, CA: John Wiley & Sons.
Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching and assessing: A revision of Bloom’s Taxonomy of Educational Objectives. New York, NY: Longman.
Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). San Francisco, CA: Jossey-Bass.
Bembenutty, H. (2011). Self-regulation of learning in postsecondary education. New Directions for Teaching and Learning, 126, 3-8. doi: 10.1002/tl.439
Biggs, J., & Tang, C. (2011). Teaching for quality learning at university. Maidenhead, England: Society for Research into Higher Education & Open University Press.
Cauley, K. M., & McMillan, J. H. (2010). Formative assessment techniques to support student motivation and achievement. The Clearing House: A Journal of Educational Strategies, Issues and Ideas, 83(1), 1-6. doi: 10.1080/00098650903267784
Cuevas, N. M., Matveev, A. G., & Miller, K. O. (2010). Mapping general education outcomes in the major: Intentionality and transparency. Peer Review, 12(1), 10-15.
Felder, R. M., & Brent, R. (2016). Teaching and learning STEM: A practical guide. San Francisco, CA: John Wiley & Sons.
Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview. Theory into practice, 41(4), 212-218. doi: 10.1207/s15430421tip4104_2
McCourt, Millis, B. J. (2007). Writing and assessing course-level student learning outcomes. Office of Planning and Assessment, Texas Tech University. Retrieved from https://www.depts.ttu.edu/opa/resources/docs/Writing_Learning_Outcomes_Handbook3.pdf
Moskal, B. M. (2000). Scoring rubrics: What, when and how? Practical Assessment, Research & Evaluation, 7(3).
Setting Learning Outcomes. (2012). Center for Teaching Excellence at Cornell University. Retrieved from https://teaching.cornell.edu/teaching-resources/designing-your-course/setting-learning-outcomes.
Steele, C. M. (2010). Whistling Vivaldi: How stereotypes affect us and what we can do. New York, NY: W. W. Norton & Company.
Wiggins, G., & McTighe, J. (2005). Understanding by design (Expanded ed.). Alexandria, VA: Association for Supervision & Curriculum Development (ASCD).