For decades, educators and technologists have considered how personalized learning can extend the benefits of one-to-one tutoring across entire classrooms. When teaching adapts to a student’s current knowledge and skills, as well as to their specific context and motivations, we expect better outcomes. While many have hoped that digital technologies would solve this challenge, none have yet delivered on that promise.
Recently, generative artificial intelligence has revitalized these hopes. Acting as tutors, large language models can converse naturally, address misconceptions, and adapt examples to suit learners' interests. Yet even as AI capabilities improve, the biggest limitation of the dominant approach to personalized learning isn't technical. It's the assumption that “personalization” means customizing a predetermined curriculum for the student. The real opportunity isn't AI personalizing learning for students. It's students using AI to personalize learning for themselves.
Vibe Coding
One of the most intriguing concepts to emerge from the rise of generative AI is “vibe coding”: creating software by conversing in everyday language with an LLM about your ideas, having it translate them into code, and then iterating through further dialogue.
“There’s a new kind of coding I call ‘vibe coding,’ where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g., Cursor Composer with Sonnet) are getting so good.” — Andrej Karpathy
The “vibes” in vibe coding derive from the fact that you don’t have to focus too much on the details of the execution and can go with the flow of the creative process. Iterating verbally with the LLM on changes and fixes is often sufficient to produce a solution that works well enough. For a trained software developer, vibe coding means creating as the fancy strikes, quickly testing ideas that previously might have taken days or weeks to execute. With vibe coding, you can prototype an application in minutes, and even non-coders are taking advantage.
Vibe coding is an application of “agentic AI.” This involves the LLM reasoning about the request you have made during task execution, planning and iterating as it goes. “Agency” is a feature of the AI system that allows it to “decide” how to solve the problem you’ve given it and revise its plans based on how it evaluates the initial results it gets. AI agency raises a host of concerns about the safety and alignment of these systems. One of the most troubling and fascinating questions is how they will influence our agency as humans, who are supposed to be guiding the technology. The more independent AI agents become, the less humans will be involved in decision-making and task execution.
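For readers who want a concrete picture of the plan–act–evaluate–revise cycle described above, here is a minimal sketch in Python. Everything in it is a stand-in: `toy_model` and `evaluate` are placeholder functions, not real LLM calls, and the "feedback" is just a counter. The point is only the shape of the loop, in which the system judges its own output and revises its plan without a human in each step.

```python
# Toy illustration of an agentic loop: plan, act, evaluate, revise.
# No real AI here -- the stubs only mimic the control flow.

def toy_model(task, feedback):
    """Stand-in for an LLM: proposes a 'solution' that improves with feedback."""
    quality = len(feedback)  # each round of feedback nudges the plan forward
    return {"task": task, "quality": quality}

def evaluate(solution):
    """Stand-in for the agent judging its own output against the goal."""
    return solution["quality"] >= 3

def agent_loop(task, max_iters=10):
    feedback = []
    for _ in range(max_iters):
        solution = toy_model(task, feedback)  # plan + act
        if evaluate(solution):                # evaluate against the goal
            return solution, len(feedback)
        feedback.append("revise")             # revise the plan and try again
    return None, max_iters

solution, rounds = agent_loop("build a prototype")
print(rounds)  # → 3 (three self-revisions before the toy evaluator was satisfied)
```

The human supplies only the task and the stopping criteria; everything between is decided inside the loop. That gap between the goal and the execution is exactly where questions about human agency arise.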
This is especially concerning in the domain of learning, where growth depends on doing the work oneself. A critical requirement for the use of AI in education is that it not undermine a student’s ability to acquire foundational knowledge and skills. But AI can open up educational opportunities that don’t involve offloading the learning process. Instead, building on the concept of vibe coding, students can use AI tools to create on-demand learning experiences that meet their immediate needs and interests while preserving the hard work required to learn. Just as engineers can follow the vibes in building software, students can do so in their learning, using AI to pursue interests in topics not covered in their courses.
But this isn’t just about exploring curiosity. For the real power of self-directed learning to be realized, we need a change in mindset about a student's role in their own education.
Agentic Learning
Every child is born with innate curiosity. In early childhood, learning is an organic process in which discovery guides development through exploration of the world and stubborn efforts to attain new skills. But when children enter school, they are soon trained to focus on what will be on the test and how to get an A, all of which are spelled out by the teacher, the syllabus, and the curriculum. As Dewey and many others have argued, this approach to education flattens creativity and undermines agency.
The antidote is teaching students to make their own decisions about what and how they learn, restoring the autodidactic impulse from childhood. To rediscover agentic learning, students need to determine why they are pursuing education in the first place. Given this opportunity and the intrinsic motivation it can bring, students can personalize their learning in ways that matter to them while also maintaining intellectual rigor.
Although many faculty members may agree with this sentiment, supporting agentic learning can be difficult when students come from a K-12 educational system that emphasizes conformity to standardized achievement measures and a collegiate system that often requires instructors to follow syllabi they didn’t design. Generative AI could offer a way forward: combining the technology’s flexibility with pedagogy that allows students to drive the learning process.
Agents in the Classroom
College degrees are built upon a sequence of courses designed to ensure that a student has mastered a specific body of content knowledge and disciplinary skills, codified in course and program learning outcomes. This system especially struggles to adapt to a world where technology is continually disrupting the capabilities needed for professional success. Nevertheless, there are many paths to achieving a program's learning outcomes. Placing the task of charting the path in each student's hands is the strongest form of personalization: taking ownership of the learning process.
In this paradigm, a substantial component of every course requires the student to determine what and how they will learn, with appropriate feedback and guidance from the instructor. For example, a student can explore applied topics related to a course's core concepts, conduct relevant literature reviews, and plan a sequence of activities aligned with their interests to help them master foundational knowledge. This can work across disciplinary domains and provides the benefits of personalization with a heavy dose of planning and decision-making. AI systems are a significant asset here, and part of the process will be thinking through when and how to use them. This doesn’t mean every course is an independent study, but rather that self-directed projects and assignments feature much more regularly than they do now.
Another critical role for the instructor is supporting the social learning experience. When each student is on a separate learning journey, creating opportunities for collaboration and exposure to alternative perspectives helps develop social skills that matter as much as, or more than, technical proficiency.

Source: “How to AI-proof your job,” The Financial Times (Jan 8, 2026).
To borrow another analogy from software engineering, class time can become a hackathon of sorts, where each learner brings their own customized curriculum, interests, and creativity to learn together with their peers, taking on challenges that go beyond their individual capabilities, even when AI-assisted. Engaging with peers in this way can also provide social proof that self-directed learning isn’t just for exceptional students, but for every student.
Vibe Teaching
Responsive teaching requires comfort with uncertain outcomes. You can’t know exactly how a class session will go when you leave it to the students to bring their own interests and ideas, and even to lead the class themselves while you coach and support from the sidelines. This is where vibes enter teaching: when we give up the comfort of a polished lesson plan and rely instead on our expertise to adapt to the teaching opportunities that arise in a student-centered learning process.
To make this shift, instructors cannot be expected to follow standardized syllabi in every detail. This doesn’t mean the syllabus disappears entirely, but rather that it becomes more of a directional guide than a rigid schedule of topics and assignments. While meeting program learning outcomes can ensure that a student is proficient in their discipline, how they are achieved is something we can let the student find, with the instructor guiding and validating. This extends all the way to the assessment of learning: students can set criteria and self-assess their work as part of the learning process before the instructor provides feedback.
The rise of artificial intelligence has presented educators with new challenges but also an opportunity to achieve something that has nothing to do with technology. When we have the chance, we can experiment with giving students responsibility for their learning rather than deciding on the path for them. Because when we provide students with the tools and mindset to guide their own learning, we prepare them for a world in which agency, adaptability, and resilience will ultimately serve them better than any content we can put on the syllabus.
One Thing to Try
In this space, we share tips from Northeastern faculty members for integrating AI into teaching and learning.
Featured Faculty: Jose Luis García del Castillo y López
Teaching Professor of Computational Design
College of Arts, Media and Design
Why try it? Same instructor, same tool, opposite policies. The distinction isn't comfort level: it's whether AI bypasses the skill being taught.
What he's doing: Jose Luis teaches two technology-focused courses with opposite AI policies because each course's learning outcomes guide appropriate use. The contrast illustrates how the same tool can be appropriate or inappropriate depending on what students need to learn.
In the undergraduate course Prototyping with Code, students learn to code from scratch and then apply those skills to creative visual projects. Developing foundational coding ability is one of the core learning outcomes, so vibe coding is not appropriate here. Jose Luis compares it to teaching math: we still teach it by hand even though calculators exist, because the thinking and problem-solving processes have value beyond the calculations themselves. Students may use AI as a coding coach: asking it to explain concepts, debug broken code, generate practice problems, or help with syntax errors at midnight when Jose Luis is not available. Essentially, any AI use that helps students learn is fair game; anything that replaces their learning is not. For Jose Luis, code has personal style, like handwriting, so when students submit work in week two using advanced techniques never covered in class, the source is evident. He tests each assignment himself with Claude to see what AI-generated submissions might look like, but he is still working out how best to discuss overuse of AI with his students.
In contrast, the graduate seminar Human-Centered AI is a non-technical course that prepares future leaders to make decisions about AI implementation and management in organizations. Here, vibe coding is welcomed because the curriculum doesn't include learning to code. Students use AI to generate working prototypes. It's simply another tool, and a valuable professional skill for students who will lead AI initiatives rather than build them. In addition, every assignment includes a comparison component: students first complete work independently, then use Claude for the same task, and finally analyze how their thinking differs from the AI's output. This structured comparison helps future leaders understand what AI can and cannot do, which is essential knowledge for the decisions they'll face in industry.

A self-portrait by Sara Dassanayake, a student in Prototyping with Code who used Figma to map coordinates before translating them to p5.js, demonstrating a bridge between visual planning and code. Reproduced with permission.
This framework transfers to any discipline. A writing instructor might limit AI drafting in a composition course where students learn to generate ideas, but welcome it in an advanced editing course focused on revision skills. A science instructor might limit AI-generated lab reports in introductory courses where students learn to interpret data and draw conclusions, but welcome it in advanced research methods where synthesizing literature is the focus. Law faculty might restrict AI in legal writing courses but embrace it in clinical simulations where students practice client counseling.
For Jose Luis, the question is: What are students supposed to learn? If AI use bypasses the skill or thinking process students are trying to develop, set boundaries. If it supports or extends learning without replacing it, embrace it.
What's next: Jose Luis plans to continue refining his syllabus guidelines, which he calls the "Generative AI Constitution" for each course. There, he explicitly names productive and unproductive AI uses based on that course's specific learning outcomes. He's also exploring whether AI can offer useful critique of student-created visual work, though he remains uncertain whether AI judgment on design and art is trustworthy enough to stand in when an instructor's feedback isn't immediately available.
Worth Your Time
Our picks of recent articles, blogs, podcasts, and other media to provoke and provide insight on opportunities and challenges with AI in teaching and learning.
90% Of Faculty Say AI Is Weakening Student Learning: How Higher Ed Can Reverse It by Dr. Aviva Legatt
Faculty consensus on harm is notable, but agreement on the problem doesn't clarify its cause. Is the issue that students over-rely on the tools, or that our assessments can't distinguish competence from completion? If learning is weakening, we need to know whether the foundation itself was sound or whether AI is simply making invisible cracks suddenly visible.
On LLMs as a Medium for Thought by Sam Barrett, Ph.D.
Training students to compare outputs from Claude, ChatGPT, and Gemini transforms AI from answering machine to evidence set. The pedagogical goal becomes building a habit of treating algorithmic consensus as provisional, rather than authoritative. Students learn to question not just AI outputs, but the frame each model imposes on a problem.
Two kinds of AI users are emerging. The gap between them is astonishing by Martin Alderson
What happens when meaningful use of AI depends more on the freedom to iterate than on mere adoption? Alderson's account suggests that infrastructure constraints, rather than user sophistication, increasingly determine who can personalize tools and workflows. The widening gap raises the question of whether institutional structures are designing out the very exploratory mindset organizations claim to need.
An Agentic AI Primer Post by Michael Webb
Agentic AI systems plan, act, and iterate toward goals without waiting for human direction at each step. This primer maps how they work and why their capabilities and autonomy challenge assumptions about authorship, assistance, and where learning actually happens.
AI Happenings
Upcoming events, workshops, and programming on AI and learning
When: Wednesday, February 11th from 2:00 pm – 2:45 pm EST
Where: Virtual
Who: Faculty, staff, and students from all CUMU-affiliated universities (including Northeastern) can join
For questions, contact: [email protected]
When: Wednesday, February 25th from 12:00 pm – 1:00 pm EST
Where: Virtual
Who: For Northeastern faculty and staff only.
For questions, contact: [email protected]
When: Tuesday, March 3 from 12:00 pm – 1:00 pm EST
Where: Virtual
Who: For Northeastern faculty and staff only.
For questions, contact: [email protected]. See also the companion guide.
When: Wednesday, March 11 from 12:00 pm – 1:00 pm EST
Where: Virtual
Who: For Northeastern faculty and staff only.
For questions, contact: [email protected]
Don’t forget to stay current with upcoming events from the units in the Division of Learning Strategy: The Center for Advancing Teaching and Learning through Research (CATLR) and Academic Technologies.

