Transparent Assignment Design

Making Explicit the Purpose, Process, and Criteria of Every Assignment

Overview

“What exactly do you want?” It’s one of the most common questions students ask. When students can’t figure out what an assignment is asking them to do, they spend their mental energy decoding instructions rather than learning. That confusion drains motivation, and it signals that the assignment’s expectations are unclear.

Developed through the Transparency in Learning and Teaching (TILT) framework, transparent assignment design asks you to make three things explicit before students begin any assignment: 

  1. The purpose: why they’re doing the work
  2. The task: what steps to take
  3. The criteria for success: what quality looks like

The idea is straightforward and the impact can be significant. A multi-institutional study found that applying transparent design to just two assignments per course produced measurable gains in students’ academic confidence, sense of belonging, and awareness of their own skills. The students who benefit most from these practices are first-generation, low-income, and underrepresented students (Winkelmes et al., 2016).

The presence of AI makes this approach even more relevant. When students can produce polished essays, code, or analyses in seconds, the pedagogical question shifts to what they actually learn by doing the work. When the purpose of an assignment is genuinely clear, students can make informed decisions about how and whether to use AI.

This guide walks through what transparent assignment design looks like in practice, how it connects to what we know about learning, and how it helps you navigate AI-related challenges in your courses.

What makes an assignment transparent?

The TILT framework centers on three elements, each communicated before students begin work.

Purpose explains why students are doing the assignment. A strong purpose statement goes beyond restating a learning objective. It answers three questions: 

  • What skills will you practice?
  • What knowledge will you build?
  • Why does this matter beyond this course?

When students understand the “why,” they’re more likely to invest effort—and less likely to look for shortcuts.

Task describes what students should actually do. Rather than simply naming the final product (“Write an analysis”), a transparent task lays out the process: 

  1. What steps to follow
  2. Roughly how long each might take
  3. What common mistakes to avoid

This is where to make expert knowledge visible. The steps that seem obvious to you, such as how to approach a dataset, how to structure an argument, or how to read a primary source, are often invisible to students who haven’t done this kind of work before. Even if these steps have been covered in class or in other resources, laying them out directly here can have significant benefits.

Criteria for success show students what quality looks like. Share rubrics in advance. Provide checklists that support self-assessment. Most importantly, share multiple examples of successful work that take different approaches. A single example invites imitation. Multiple examples show students that the criteria define quality and that there are many paths to strong work.

None of these elements is new on its own. What TILT adds is the discipline of making all three explicit and specific for every assignment, rather than assuming students will fill in the gaps on their own.

Why does transparency support learning?

According to the research, transparent design supports learning in several ways.

It reduces wasted cognitive effort. When students have to decode ambiguous instructions or guess what you really want, they’re spending limited working memory on the wrong things. Clear instructions free up mental resources for the actual learning (Sweller, 1988). This matters most for students who are newer to academic work and don’t yet have the background knowledge to fill in unstated expectations.

It scaffolds self-regulation. Students can’t plan effectively if they don’t understand the task, and they can’t monitor their progress without criteria to compare their work against. Transparent design gives students the structure they need to set goals, track their own progress, and reflect on what’s working—skills that many college students have never been explicitly taught (Zimmerman, 2002).

It builds motivation. Motivation depends on two beliefs: “Can I succeed at this?” and “Is this worth doing?” (Eccles & Wigfield, 2000). Clear criteria make success feel attainable, and purpose statements connect assignments to meaningful goals. Ambiguous assignments undermine both: students can’t judge their chances of success when they don’t know what’s expected, and they can’t see value in work whose purpose is opaque.

It levels a playing field that was never level. Some students arrive at college having absorbed the unwritten rules of academic work through family and prior schooling. They know what “analyze” means in an assignment guideline, how to read a rubric, and what office hours are for. First-generation students, students from under-resourced schools, and students navigating cultural transitions often don’t have that inherited map. Transparent design makes the hidden curriculum visible, and research shows that while all students benefit, underrepresented students benefit the most (Winkelmes et al., 2016).

How to write transparent assignments

The practical appeal of transparent design is that you can start with a single assignment.

Writing a purpose statement

Move beyond restating the catalog description. 

Instead of “This assignment addresses Course Objective 3,” try something like: “This assignment asks you to practice identifying and evaluating competing interpretations of historical evidence. You will use this skill in the final research project. Professionals in law, policy, and consulting consistently value humanities graduates for their research skills.”

A strong purpose statement answers: What skills will you practice? What knowledge will you build? Why does this matter beyond this course?

Writing a task description

Shift from describing the product to describing the process. 

Instead of “Write an analysis of the dataset,” try: “First, examine the dataset for patterns, outliers, and missing values (about 30 minutes). Then select two variables and formulate a hypothesis about their relationship. Run the appropriate statistical test and write a 500-word interpretation that addresses whether the data support your hypothesis, identifies at least one limitation, and proposes a follow-up question.”

A product description only tells students what to submit; a process description helps them understand how to get there.

Sharing criteria and examples

Share rubrics before students begin work. 

Provide at least two examples of successful work that take different approaches. In an art history course, for instance, this might mean providing one visual analysis and one social-historical analysis, both meeting the criteria through different pathways.

Discuss examples in class so students can see how each one meets the standard in its own way. A recent meta-analysis found that rubrics produce moderate positive effects on academic performance, but the effects are stronger when rubrics are actively discussed rather than simply handed out (Panadero et al., 2023).

Transparent design in the age of AI

Generative AI exposes a long-standing challenge in education: many assignments focus more on producing artifacts than on learning. While transparent design doesn’t solve every AI challenge, it does address the foundational question AI forces: What is the student supposed to learn by doing this work?

Clarify AI expectations at the assignment level

One of the most common sources of student confusion and inadvertent integrity violations is inconsistency across courses. A student taking four classes may encounter four different expectations about AI, often buried in the syllabus. The TILT framework gives you a natural structure for integrating AI guidance into individual assignments:

At the purpose level, explain what skills the assignment develops. This makes the rationale for AI boundaries self-evident. “This assignment asks you to practice close reading—a skill that requires your direct engagement with the text. Using AI to generate your analysis would bypass the cognitive work the assignment is designed to produce” is more grounded and more persuasive than “AI use is prohibited.”

At the task level, specify where in the process AI may or may not be used, and what documentation you expect. You might allow AI for brainstorming but require students to draft their analysis independently. Or you might ask students to use AI to generate an initial response, then evaluate and revise it against course readings.

At the criteria level, signal what you’ll evaluate and what AI can’t provide. Criteria that emphasize personal reflection, engagement with course-specific materials, and original interpretation create assessment targets that AI alone can’t hit.

Tips for getting started

Start with one assignment. Choosing just a single assignment is a modest investment that can produce measurable effects. Pick the assignment where student confusion or underperformance is highest.

Try the 10-minute version. If you can’t rewrite assignment guidelines mid-semester, try a classroom activity. Have students work in small groups to read your existing assignment instructions and identify the purpose, task, and criteria. Then debrief and clarify in real time. This requires no revised documents, takes only a short portion of class time, and may pay dividends in fewer follow-up questions and easier grading.

Ask a colleague to review. Before distributing a revised assignment, ask a colleague to read it and tell you whether they can identify the purpose, task, and criteria. If they can’t, your students probably can’t either.

Use informative titles. Give assignment titles that preview the purpose. “Scientific Evidence Poster” becomes “Evaluating Posters for Scientific Evidence.” A clear title sets expectations before students read a single instruction.

Share multiple examples. Provide at least two examples of successful work that take different approaches. This prevents students from thinking there’s one “correct” way to complete the assignment.

Pay attention to the signals. Once you’ve redesigned an assignment, notice whether students ask fewer clarifying questions. Notice whether office-hour conversations shift from “What do you want?” to “Here’s what I’m thinking. Does this approach work?” These are signs that transparency is reducing confusion and redirecting effort toward learning.

A Redesign Example 

Original instructions for a policy brief assignment for a public health course:  

“Write a 1,500-word policy brief on a public health topic of your choice. Due Week 10.”

Redesign

Purpose: This assignment asks you to practice translating research evidence into actionable recommendations for a specific audience, a core skill in public health practice. You’ll learn to identify the strongest available evidence, weigh competing policy options, and write for decision-makers who need clear, concise guidance. This is the kind of work public health professionals do when advising city councils, hospital administrators, or nonprofit boards.

Task: Choose a public health issue relevant to your community. Then:

  1. Identify three peer-reviewed sources and one gray-literature source that address the issue (about 2 hours). 
  2. Write a one-paragraph problem statement that includes local data (about 30 minutes). 
  3. Draft two competing policy recommendations, each supported by evidence from your sources (about 2 hours).
  4. Write a final recommendation explaining which option you endorse and why, including at least one limitation of your recommendation (about 1 hour). 
  5. Format your brief using the template provided on Canvas. If you use AI tools for any part of this process, document what you used, how you used it, and how you evaluated the output.

Criteria: A rubric (see below) is organized around four dimensions: strength of evidence, clarity for a non-expert audience, feasibility of recommendations, and quality of reasoning about trade-offs. Two example briefs from a previous semester are also shared: the first took a data-heavy approach, and the second led with a narrative case study. Both met the criteria through different strategies.

Example Rubric

Strength of Evidence (25 points)
  Beginning (1): Relies on one source or uses sources that lack credibility. Claims are unsupported or supported only by general assertions.
  Developing (2): Includes multiple sources but does not consistently evaluate their quality. Some claims lack specific supporting evidence.
  Proficient (3): Draws on at least three credible sources, including peer-reviewed and gray literature. Most claims are supported with specific evidence.
  Excellent (4): Draws on well-chosen peer-reviewed and gray-literature sources. Every major claim is grounded in specific evidence, and the student distinguishes between stronger and weaker findings.

Clarity for a Non-Expert Audience (25 points)
  Beginning (1): Written in academic or technical language that a decision-maker would struggle to act on. Structure does not follow policy brief conventions.
  Developing (2): Mostly readable, but sections lapse into jargon or assume background knowledge the audience may not have. Organization is present but uneven.
  Proficient (3): Written in clear, direct language appropriate for the intended audience. Follows the policy brief template and presents information in a logical sequence.
  Excellent (4): Anticipates what the audience needs to know and in what order. Uses precise, accessible language throughout. A decision-maker could read this and understand both the problem and the recommended action without additional context.

Feasibility of Recommendations (25 points)
  Beginning (1): Presents a single recommendation without considering alternatives, or recommendations are too vague to evaluate.
  Developing (2): Presents two options but does not compare them substantively. Final recommendation lacks a clear rationale or ignores practical constraints.
  Proficient (3): Compares two competing recommendations using evidence. Endorses one with a clear rationale and identifies at least one limitation or trade-off.
  Excellent (4): Compares recommendations with attention to evidence, cost, political feasibility, or implementation challenges. Final endorsement is well-reasoned, and limitations are discussed with specificity rather than as a formality.

Quality of Reasoning About Trade-Offs (25 points)
  Beginning (1): Does not acknowledge trade-offs, competing values, or limitations of the recommended approach.
  Developing (2): Acknowledges that trade-offs exist but treats them superficially or as an afterthought rather than integrating them into the analysis.
  Proficient (3): Engages with genuine tensions in the policy landscape. Explains why the recommended option is preferable despite its limitations.
  Excellent (4): Demonstrates nuanced reasoning about competing priorities, such as cost vs. reach, short-term vs. long-term impact, or political feasibility vs. effectiveness. Uses trade-off analysis to strengthen rather than qualify the recommendation.

Total: 100 points

Shorter examples across disciplines

Biology. A lab report assignment can explain why students are learning to present data visually and walk them through each section of the report in sequence. Sharing two sample reports that handle data presentation differently while both demonstrating sound reasoning helps students see that the criteria define quality, not a single correct format.

Computer science. A weekly coding assignment can use TILT principles to specify what to build, why each exercise targets particular programming concepts, and what common errors to watch for. Sharing criteria that distinguish between working code and well-structured code helps students focus on the reasoning practices the exercise is designed to develop. 

Economics. For a data analysis assignment, specify that students are learning to identify patterns in real-world economic data and communicate findings to non-specialists. Laying out steps from cleaning a dataset through writing an interpretation, with criteria focused on reasoning rather than statistical sophistication, keeps the emphasis on the skills the assignment is designed to build.

Graphic design. A branding project can explain that students are learning to make visual choices that solve a communication problem for a specific audience. To foreground strategic thinking alongside craft, describe the process from researching the client’s context through developing initial concepts, revising based on critique, and presenting a written rationale. Criteria that evaluate how well students justify their decisions signal that the reasoning matters as much as the deliverable.

Mechanical engineering. A design project can specify that students are learning to balance competing constraints like cost, safety, and manufacturability. Outlining the sequence from defining requirements through prototyping and testing, with evaluation criteria that weigh the design rationale as heavily as the technical solution, signals that the thinking behind the prototype matters.

References

Eccles, J. S., & Wigfield, A. (2000). Expectancy-value theory of achievement motivation. Contemporary Educational Psychology, 25(1), 68–81. https://doi.org/10.1006/ceps.1999.1015

Panadero, E., Jonsson, A., & Botella, J. (2023). Effects of rubrics on academic performance, self-regulated learning, and self-efficacy: A meta-analytic review. Educational Psychology Review, 35, 113. https://doi.org/10.1007/s10648-023-09823-4

Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285.

Winkelmes, M.-A., Bernacki, M., Butler, J., Zochowski, M., Golanics, J., & Weavil, K. H. (2016). A teaching intervention that increases underserved college students’ success. Peer Review, 18(1/2), 31–36.

Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory Into Practice, 41(2), 64–70. https://doi.org/10.1207/s15430421tip4102_2
