The AI Reckoning; Reckoning With AI
In November 2022, ChatGPT arrived. The panic among university faculty was instant. Within days, The Atlantic declared that “The College Essay Is Dead.” We witnessed the near collapse of one of higher education’s most basic forms of assessment, a reality that is now widely acknowledged. Essays were only the beginning: coding, design, and even routine homework have since joined the “is dead” chorus.
We have learned that the reality is far more complex. These forms of assessment are not gone; they are transforming. The obituary captured the anxiety but missed the opportunity. The question now is: if assessment as we knew it is dead, what comes next?
This is one of many critical questions we will return to in this newsletter. Every month, we will share perspectives from members of our community who are grappling directly with the adoption of generative AI in higher education. Our goal is to provide a space for thoughtful, critical exploration of this technology (and whatever comes next) where it intersects, and comes into tension, with learning, particularly at Northeastern. We hope it will be a place where readers can be provoked by, and reflect on, the changes AI is bringing to pedagogy and assessment.
Given the abundance of conversations, writings, videos, and podcasts already out there, do we really need another newsletter on this topic? In April 2025, Northeastern gave all faculty, staff, and students enterprise access to Anthropic’s Claude. That shift moved us from individual experimentation to institutional infrastructure, and it created an opportunity to learn at scale about a technology that may disrupt education, for better or for worse (and possibly both), more than any technology before it. We want this newsletter to carry those conversations: to share what we and our colleagues are trying, what we’re struggling with, and what we’re learning.
Everybody’s Doing It, But Should We?
AI usage by faculty and students has increased in the past year. Consistent statistics are hard to obtain, but student adoption appears to be nearing saturation, with 85% of US college students using GenAI for coursework in the past year (Inside Higher Ed), while faculty adoption registers somewhere between 49% (Cengage) and 57% (Anthropic). The questions are now less about whether AI should be part of higher education and more about where it does and does not belong.
The risks are not limited to cheating and plagiarism. There are real concerns that over-reliance on these tools (often referred to as “cognitive offloading”) may undermine students’ ability to build their own knowledge and skills, harming their preparation for careers at a time when the job market is under increasing pressure. And what are the second-order effects of sycophantic AI systems that reinforce bias or confirmation bubbles? Left unexamined, AI could just as easily diminish human learning as enhance it.
On the other hand, many students remain underserved by traditional educational models and don’t receive the support they need, particularly in large courses. The promise of AI to provide personalized practice and formative feedback at scale could improve outcomes for many of these students.
Assessment Is Dead; Long Live Assessment
Socrates said that “the unexamined life is not worth living”; similarly, the unexamined assessment is not worth giving. If AI can easily pass a traditional assessment, we must recognize that it was only ever a proxy for authentic assessment.
What’s starting to emerge, here at Northeastern and elsewhere, is a collection of practices and strategies for AI in pedagogy and assessment. Many are not entirely new; they were understood as simply thoughtful pedagogy well before GenAI became ubiquitous.
Some of the most promising practices include:
Simulations and Role-Plays. Nursing, business, and policy programs are using AI-driven scenarios that require real-time decision-making.
Experiential Learning. AI can replicate realistic workplace tasks and projects before students enter co-ops or internships, extending experiential learning into the classroom.
Iterative Drafting and Feedback. Students submit multiple AI-assisted drafts, making the thinking process visible and focusing less on the end product.
Collaborative Problem-Solving. In flipped classrooms, AI covers prep work while class time focuses on teamwork, judgment, and formative feedback.
Authentic Multimodal Outputs. Podcasts, design briefs, data visualizations, or policy memos are formats that demand human interpretation, not machine substitution.
These approaches lean into critical skills AI cannot replace, skills employers need from their employees more than ever: human collaboration, ethical judgment, and higher-order problem-solving. As pedagogical strategies, they engage students in active learning. They can both create formative assessment opportunities and prepare students for summative assessments that evaluate the skills gained through these activities, rather than work products AI can easily generate.
The Only Way Through Is Through
Three years in, it is clear that AI is not a passing fad to be waited out. Students are already using it, some with considerable sophistication. Faculty are already grappling with how to rethink what and how they teach, and institutions are moving from scattered guidelines to more systemic strategies. The question is no longer whether AI belongs in higher education, but how we shape its role. What we decide will affect not only our institution and our students but broader norms as well.
At Northeastern, this work means more than handing out licenses to AI tools. It means building capacity: developing AI readiness frameworks, offering pedagogy and assessment workshops, and launching curriculum redesign initiatives. It also means creating space, through efforts like this newsletter, to share what we try, where we stumble, and what’s working.
“Assessment is dead” was the provocation. The task now is to ensure the “long live” part: designing assessments that measure what and how students actually learn, that treat AI as a tool rather than a shortcut, and that push learners toward judgment, creativity, and collaboration.
The only way forward is to re-examine assessment as rigorously as we must re-examine the pedagogy that underlies it. That requires experimentation, reflection, and a willingness to adapt. If we do that, AI won’t mark the end of assessment but its renewal. After all, Socrates also warned that writing would impair our capacity to learn and to hold knowledge in our heads; instead, we discovered that writing is itself a tremendous act of learning. We have the same opportunity with AI, and we look forward to pursuing it together.

In this space, we share tips from Northeastern faculty members for integrating AI into teaching and learning.
Featured Faculty: DJ Corey, Senior Lecturer & EMT Program Director, Bouvé College of Health Sciences
What he’s doing: In his EMT training courses, DJ has students evaluate AI-generated medical advice using a color-coding system. Students pick an illness or injury, then independently draft a realistic case study of a patient experiencing that condition in an emergency medical situation. They prompt an AI tool of their choice to manage the case as a Massachusetts EMT and systematically analyze the AI’s response using four colors: green for accurate information, yellow for incomplete guidance, pink for dangerous recommendations, and blue for medically sound advice outside an EMT’s scope. Students must explain their color-coding as part of the assignment, then assign and defend a grade for the AI tool’s ability to respond to the scenario correctly.
This assignment transfers readily to other contexts. The scenario above is medical, but we can also imagine students color-coding AI-generated legal briefs, engineering solutions, or historical analyses for accuracy, completeness, and domain-appropriate application.
What's working: The assignment achieves dual learning objectives, supporting clinical decision-making while developing critical AI literacy. DJ describes it as thinking “backward” from symptoms to the scenario and physiology that would cause them; students must then think “forward” to evaluate the treatment recommendations the AI generates in response to that scenario.
The color-coding framework forces engagement with every line of AI output, preventing passive acceptance of generated content. Students consistently report this as their first assignment requiring them to use AI while simultaneously revealing its limitations. The structured critique process has improved overall student performance, with learners increasingly using AI as a study tool rather than a replacement for thinking. The requirement to grade the AI promotes metacognitive reflection about appropriate AI use in healthcare contexts.
What's next: This assignment evolved from a one-off experiment into a three-part series covering medical emergencies, behavioral crises, and trauma cases. Spring 2026 will see the integration of virtual reality simulations to expand case-based learning opportunities. VR will enable scenarios difficult to replicate with current resources, allowing students to practice decision-making in more diverse emergency situations. The combination of AI evaluation skills and immersive VR experiences aims to develop healthcare providers who can critically assess technological tools while maintaining focus on patient safety and scope-appropriate care.

Our picks of recent articles, blogs, podcasts, and other media to provoke thought and provide insight on opportunities and challenges with AI in teaching and learning.
Talking with Your Students About AI: A recently published CATLR Tip provides strategies for engaging students in conversation about AI in the classroom.
AI and Student Agency: Annette Vee reflects on how guiding students’ relationships with AI can strengthen moral reasoning and autonomy in learning.
The Un-Cheatable Assignment: Adam Chodosh shows how making the process visible creates assignments AI can't complete and develops exactly the skills students need for an AI-augmented workplace.
AI tools promise efficiency at work, but they can erode trust, creativity and agency: Jordan Loewen-Colón and Mel Sellick argue that future workplace success requires questioning AI outputs, not just speed. What does this mean for universities? Faculty should reimagine assessments to strengthen independent reasoning and protect cognitive development.

Upcoming events, workshops, and programming on AI and learning
When: Thursday, November 13, 2025, 12pm-1pm (EST)
Where: Virtual
Who: For Northeastern faculty and staff only.
When: Tuesday, November 18, 2025, 11:30am-6:30pm (PST)
Where: In-person (Lisser Hall, Kapiolani Road, Oakland, CA; Registration) & Virtual for Northeastern University community members (Livestream registration)
Who: Registered attendees and Northeastern University Community members
When: Wednesday, November 19, 2025, 12pm-1pm (EST)
Where: Virtual
Who: For Northeastern faculty and staff only.
Don’t forget to stay current with upcoming events from the units in the Division of Learning Strategy: The Center for Advancing Teaching and Learning through Research (CATLR) and Academic Technologies.