AI Hasn’t Changed Our Teaching Goals—It’s Changed the Learning Context
Featured Faculty Essay by Laurie Nardone & Samuel V. Scarpino

Our new AI reality
We come to this conversation from different corners of the university. One of us directs AI and Life Sciences in the Institute for Experiential AI (EAI); the other directs the Writing Program. We have distinct disciplinary lenses and ways of knowing, but in our months-long conversation about teaching and learning in this new AI age, we’ve found that we share a common concern: our classrooms and labs have changed. We feel it when reading student papers, reviewing problem sets, skimming discussion posts, or developing assignments. Students are outsourcing their work to AI. It’s disappointing and overwhelming.
In writing classes, Laurie misses seeing nuance and notices more generalizations and triple adjectives. On the research side, Sam struggles with how to mentor students around the appropriate use of AI in developing code, navigating the scientific literature, and preparing for interviews. What’s true for both of us is that the pace of AI innovation and implementation is faster than our teaching systems are used to moving. And it’s unlikely to slow. Because we can’t keep up, our instinct is to prohibit AI use to preserve our longstanding practices.
But what has really changed? Sure, today’s large language models (LLMs) can interact with text, speech, data, and code in ways most of us couldn’t have imagined three years ago, but that doesn’t affect our goals. Students have always arrived in our classrooms and research labs needing to recognize nuance more clearly, transcend generalization, and assemble and communicate evidence more convincingly.
What remains the same?
Instead of asking what today’s AI tools are taking away from our teaching, let’s underscore how we already support student learning. Good teaching embraces process – we want students to “show their work,” and asking them to explain how they got to where they are emphasizes metacognition. Making thinking visible was good pedagogy before modern AI, and it still is today.
At Northeastern, our experiential learning emphasis has centered on process over product for over 100 years, and so a recommitment to these practices can happen naturally here. If we recommit to making the process of learning visible, we can more readily see that it is the context, not the goal for learning, that has changed in the age of AI.
This new AI context does challenge how we approach excellent teaching and assessment; students don’t learn well when they copy/paste solutions from AI or use agentic browsers to automate the entire process. However, we also suspect that AI can enhance learning when properly contextualized and integrated into well-established and successful pedagogical practices.
How do we get started?
We need to rethink our assignments and approach to evaluation (especially those for online courses), and we may have to rethink which skills we ask students to develop. Still, AI and AI-enhanced tools are not themselves misaligned with what we know about how people learn. Students use AI, so how might they use it in ways that support good learning? Can we leverage AI to expand their capabilities, or will we sit by and let their critical thinking skills atrophy?
We know embracing the changes AI has brought is a tough sell to our colleagues; we, too, are concerned about this new AI reality, overwhelmed by the sheer volume of what we don’t know. We also acknowledge the uncertainty that comes with change. We will make mistakes along the way. That’s part of the process of learning.
We see three entry points for faculty:
No AI. In the current environment, this might be the most challenging position to take. In the Writing Program, some faculty have “No AI” policies, though these policies naturally invite plenty of discussion about AI. In EAI, we often work on projects where certain kinds of AI cannot be used. These environments foreground each program’s longstanding practices of process- and project-based learning (through scaffolding and multi-stage feedback). There are also classes where students already work on written problem sets or in topical areas where AI probably can’t help much (if at all). But these settings will be in the minority.
AI-Curious. We’ve had good results using Claude as a teaching tool by asking students to identify errors or weaknesses in its responses. We can ask students to identify credible sources in support of an argument or hypothesis, the old-fashioned way (e.g., the library!), and compare the quality of those sources with what Claude (or more advanced AI research tools like Edison Platform) suggests. Again, this is learning by doing.
AI, all-in. Strategically using AI for brainstorming or as another voice in a writing or problem-solving conversation (alongside student and teacher feedback) can offer another opportunity. Given how functional AI speech and speech recognition have become (at least in English), why not have the students code while the AI acts as the “peer” suggesting next steps? We acknowledge that treating AI as a “peer” raises complex questions. Embracing these challenging topics should be a priority for our faculty.
Each of these entry points emphasizes the same sound pedagogical approach we’d advocate for any piece of technology – it should enhance, not replace, our process and our focus on experiential learning. Teaching students the healthy skepticism many of us already have is vital, but many of us also lack a genuine understanding of what’s happening inside these tools. Maybe we, too, need to spend some time learning.
What’s next?
In the context of AI, we should admit that the only sure thing is change. Continuing to design and implement learning experiences that achieve our goals of process will anchor us in great pedagogy, whatever the next AI release may bring. In the Writing Program, for example, we assess process – how students read and analyze complex texts, how they synthesize information from multiple sources, how they engage with their peers, with us, with sources, and even, sometimes, with AI – rather than simply assessing a final product. The same focus on process over product is true for the students we mentor in EAI and for the lifelong learners we teach in the custom education courses led by our colleagues at the Roux Institute.
AI will impact the process of research and work. Therefore, as disciplinary experts, we must all assess how AI impacts our field and is used in professional settings. We must then bring these processes back into the classroom. Part of the vision behind EAI is that learning these new processes requires developing deep, meaningful partnerships with industry. What we’ve seen is that our greatest strength lies in each other. The intersection of our disciplines with experiential learning and advanced technology (what President Aoun called humanics) provides a proven framework for learning these new processes and integrating them into our teaching and mentoring.
AI has changed our classrooms. It has changed our research labs. Indeed, all aspects of our university, from operations and admissions to teaching and research, have changed. Things will never be the way they were before ChatGPT. We cannot afford to feel disempowered. Sure, AI will keep getting better and better at mimicking human work. With each passing semester, these technologies will become more tightly woven into our lives (and software), and online education will require the most effort to redesign. But our teaching, research, and learning goals will guide us as we learn and adapt.