Our new AI reality
We come to this conversation from different corners of the university. One of us directs AI and Life Sciences in the Institute for Experiential AI (EAI), the other the Writing Program. We have distinct disciplinary lenses and ways of knowing, but in our months-long conversation about teaching and learning in this new AI age, we've found that we share a common concern: our classrooms and labs have changed. We feel it when reading student papers, reviewing problem sets, skimming discussion posts, or developing assignments. Students are outsourcing their work to AI. It's disappointing and overwhelming.
In writing classes, Laurie misses the nuance she used to see and notices more generalizations and triple adjectives. On the research side, Sam struggles with how to mentor students in the appropriate use of AI for developing code, navigating the scientific literature, and preparing for interviews. What's true for both of us is that AI innovation and implementation are moving faster than our teaching systems are accustomed to moving. And the pace is unlikely to slow. Because we can't keep up, our instinct is to prohibit AI use to preserve our longstanding practices.
But what has really changed? Sure, today's large language models (LLMs) can interact with text, speech, data, and code in ways most of us couldn't have imagined three years ago, but that doesn't affect our goals. Students have always arrived in our classrooms and research labs needing to recognize nuance more clearly, transcend generalization, and assemble and communicate evidence more convincingly.
What remains the same?
Instead of asking what today’s AI tools are taking away from our teaching, let's underscore how we already support student learning. Good teaching embraces process – we want students to "show their work," and asking them to explain how they got to where they are emphasizes metacognition. Making thinking visible was good pedagogy before modern AI, and it still is today.
At Northeastern, our experiential learning emphasis has centered on process over product for over 100 years, so a recommitment to these practices can happen naturally here. If we recommit to making the process of learning visible, we can more readily see that it is the context for learning, not the goal, that has changed in the age of AI.
This new AI context does challenge how we approach excellent teaching and assessment; students don't learn well when they copy/paste solutions from AI or use agentic browsers to automate the entire process. However, we also suspect that AI can enhance learning when properly contextualized and integrated into well-established and successful pedagogical practices.
How do we get started?
We need to rethink our assignments and approaches to evaluation (especially for online courses), and we may have to rethink the skills we expect students to bring. Still, AI and AI-enhanced tools are not themselves misaligned with what we know about how people learn. Students already use AI, so how might they use it in tandem with good learning? Can we leverage AI to expand their capabilities, or will we sit by and let their critical thinking skills atrophy?
We know embracing the changes AI has brought is a tough sell to our colleagues; we, too, are concerned about this new AI reality, overwhelmed by the sheer volume of what we don't know. We also acknowledge the uncertainty that comes with change. We will make mistakes along the way. That's part of the process of learning.
We see three entry points for faculty:
No AI. In the current environment, this might be the most challenging position to take. In the Writing Program, some faculty have "No AI" policies, though these policies naturally invite plenty of discussion about AI. In EAI, we often work on projects where certain kinds of AI cannot be used. These environments foreground each program's longstanding practices of process- and project-based learning (through scaffolding and multi-stage feedback). There are also classes where students already work on written problem sets or in topical areas where AI probably can't help much (if at all). But these settings will be in the minority.
AI-Curious. We've had good results using Claude as a teaching tool by asking students to identify errors or weaknesses in its responses. We can ask students to identify credible sources in support of an argument or hypothesis, the old-fashioned way (e.g., the library!), and compare the quality of those sources with what Claude (or more advanced AI research tools like Edison Platform) suggests. Again, this is learning by doing.
AI, all-in. Strategically using AI for brainstorming or as another voice in a writing or problem-solving conversation (alongside student and teacher feedback) offers another opportunity. Given how capable AI speech and speech recognition have become (at least in English), why not have students code while the AI acts as a "peer" suggesting next steps? We acknowledge that treating AI as a "peer" raises complex questions. Embracing these challenging topics should be a priority for our faculty.
Each of these entry points emphasizes the same sound pedagogical approach we'd advocate for any piece of technology – it should enhance, not replace, our process and focus on experiential learning. Teaching our students the healthy skepticism many of us already bring is vital, but many of us also lack a genuine understanding of what's happening with these tools. Maybe we also need to spend some time learning.
What's next?
In the context of AI, we should admit that the only sure thing is change. Continuing to design and implement learning experiences that achieve our process-centered goals will anchor us in great pedagogy, whatever the next AI release may bring. In the Writing Program, for example, we assess process – how students read and analyze complex texts, how they synthesize information from multiple sources, how they engage with their peers, with us, with sources, and even, sometimes, with AI – rather than simply assessing a final product. The same focus on process over product holds for the students we mentor in EAI and for the lifelong learners we teach in the custom education courses led by our colleagues at the Roux Institute.
AI will change the processes of research and work. As disciplinary experts, we must therefore assess how AI affects our fields and how it is used in professional settings, and then bring these processes back into the classroom. Part of the vision behind EAI is that learning these new processes requires developing deep, meaningful partnerships with industry. What we've seen is that our greatest strength lies in each other. The intersection of our disciplines with experiential learning and advanced technology – what President Aoun calls humanics – provides a proven framework for learning these new processes and integrating them into our teaching and mentoring.
AI has changed our classrooms. It has changed our research labs. Indeed, all aspects of our university, from operations and admissions to teaching and research, have changed. Things will never be the way they were before ChatGPT. We cannot afford to feel disempowered. Sure, AI will keep getting better and better at mimicking human work. With each passing semester, these technologies will become more tightly woven into our lives (and software), and online education will require the most effort to redesign. But our teaching, research, and learning goals will guide us as we learn and adapt.

In this space, we share tips from Northeastern faculty members for integrating AI into teaching and learning.
AI-Simulated Interview Practice with Thematic Analysis
Featured Faculty: Natalya Watson
Associate Teaching Professor
Global Pathways and NU Immerse Programs
College of Professional Studies
What she's doing: Natalya teaches multilingual graduate students in a research and writing course within the Global Pathways program. Students from diverse disciplines prepare to conduct interviews with industry professionals as part of their research projects. Before those interviews, Natalya has students practice with AI simulations. Students work in mixed disciplinary groups (combining project management, informatics, and other fields) to create an AI persona relevant to their research.
The process has clear stages. First, teams compose interview questions and get feedback from Claude on their relevance and focus. Second, they conduct text-based interviews with their persona. Third, they analyze the interview transcripts, identifying themes and patterns. Fourth, they have AI perform the same analysis. Fifth, they color-code similarities and differences between the student- and AI-generated themes to explore how their cultural backgrounds and academic disciplines shape their analyses. Finally, students complete individual reflections on how their graduate program influences their analytical lens, which prepares them for real interviews with company managers.
This interview simulation transfers to any field requiring stakeholder engagement: STEM students can interview lab managers about technical processes; humanities students can engage with historical figures; social sciences students can practice with community members; professional programs can simulate client consultations. Select discipline-appropriate personas and analyze through relevant frameworks – the human-AI comparison always reveals how training shapes interpretation.
What's working (and what's not): This low-stakes practice builds confidence and metacognition. Students learn to handle tricky interview moments (like when the AI persona gets too wordy) and decode unfamiliar industry terminology. The interdisciplinary groups create authentic knowledge gaps where students become the experts, teaching peers and even the instructor about field-specific concepts. One student credited the simulation as essential to successfully interviewing managers for the project. The comparison with AI analysis sparks rich discussions about how culture and training shape interpretation. One limitation is the absence of a strong voice option: while Claude does offer a voice mode, it is still early in development, so the simulation is not yet as useful for practicing the oral communication skills needed in live interviews.
What's next: Natalya plans to add speech-enabled AI tools for verbal practice. She sees AI collaboration as a semester-long journey for developing student voice in both writing and interviews. Her goal? Help students stay "in the driver's seat" while learning to value their unique perspectives and sense of voice. Through continuous reflection and comparison with AI, she hopes students will grow more confident that their cultural experiences create insights AI simply can't replicate.

Our picks of recent articles, blogs, podcasts, and other media to provoke thought and provide insight into the opportunities and challenges of AI in teaching and learning.
SIFT for AI: Introduction and Pedagogy What does information literacy look like when students can generate instant "expert" analyses? Mike Caulfield shares how LLMs can generate preliminary maps of evidence and disciplinary perspectives that students then critically navigate, verify, and synthesize. He provides concrete classroom activities showing how AI scaffolds rather than replaces the hard work of critical thinking.
What Counts? Rethinking Assessment Across Disciplines in the AI Age (Video 57 minutes) Northeastern faculty from computer science, philosophy, health sciences, and business grapple with discipline-specific challenges, from teaching CS students to use AI productively and responsibly to rethinking what counts as demonstrating knowledge in marketing and healthcare. This cross-disciplinary conversation reveals that AI policies need to be balanced with practical strategies for redesigning assessments that measure authentic learning.
AI for Qualitative Research: A Hands-On Guide This guidebook serves as a comprehensive resource for doctoral students, qualitative researchers, and academic faculty seeking to integrate AI tools responsibly into their research and inquiry practices while maintaining scholarly rigor and ethical standards. It was co-created by doctoral students in Northeastern University's Winter 2025 AI in Education course and Associate Dean Allison Ruda, and is offered as a free resource by CPS Learn Lab.
48 Hours Without A.I. (Gift Article) When we tell students "don't use AI," where are the limits of that expectation? Can they check email, search library databases, or take public transit without touching AI systems? A.J. Jacobs's attempt to go AI-free for 48 hours maps the invisible algorithmic infrastructure students navigate daily, suggesting that bans miss the real pedagogical question: which AI interactions enhance learning, and which short-circuit it?

Upcoming events, workshops, and programming on AI and learning
When: Wednesday 1/14 at 12:00 PM (EST)
Where: Virtual
Who: For Northeastern faculty and staff only.
When: Starts Thursday 1/15 at 1:00 PM (EST) and meets monthly
Where: Virtual
Who: For Northeastern faculty and staff only.
Don’t forget to stay current with upcoming events from the units in the Division of Learning Strategy: The Center for Advancing Teaching and Learning through Research (CATLR) and Academic Technologies.


