The Stress Test We Didn't Ask For

There is a growing chorus of alarm about the use of Artificial Intelligence in higher education. Professor Robert Niebuhr of Arizona State University has articulated this concern sharply: if students use AI to complete assignments and faculty use AI to design courses and grade work, what remains of the educational enterprise? How long until the degree becomes, in his words, "an absurdly overpriced piece of paper"? The worry is legitimate. We surely cannot afford a scenario where students pretend to learn and we pretend to teach them.

But here is what troubles me about this framing: it assumes we weren't already pretending.

The Scaffolding Falls Away

Consider what AI has accomplished in higher learning already. It has made a certain kind of cognitive labour cheap and accessible. Writing that sounds competent. Summaries that hit the right notes. Analyses that follow the expected structure. These were once costly to produce—they required time, effort, and at least a passing engagement with the material.

That cost was the work itself. It meant that even a disengaged student had to invest something to produce something. The friction was the pedagogy, or at least we told ourselves it was.

Now the friction is gone. And we are confronted with an uncomfortable question: if the only thing preventing students from bypassing the learning was the difficulty of bypassing it, what exactly were we teaching?

This is not a new pattern. There was a time when access to quality resources, libraries, and datasets was scarce, and expert knowledge was locked behind institutional walls. Institutions benefited from that scarcity: they were gatekeepers, and gatekeeping has value, or at least a price. That time has passed.

In my view, AI did not create a disruption. It revealed that some of what we called education was actually gatekeeping dressed in academic robes. The structural forces that kept the edifice standing have shifted. What remains is whatever was real.

The Crutch Problem

There is a common response to all of this: AI is a crutch. The metaphor is meant to be damning. Crutches are for the weak. Crutches create dependency.

But if AI is a crutch, we are admitting it has some use. And here we must be honest: in the intellectual world, we all need crutches sometimes. The difference between physical and intellectual crutches is instructive, though. A physical crutch helps you heal. It is designed to make itself unnecessary. You use it, you strengthen, you discard it.

AI does not work this way. The evidence suggests that sustained reliance on AI for cognitive tasks diminishes the underlying capability. The crutch does not strengthen us toward independence. It makes itself indispensable. In this respect, AI works less like a crutch and more like a pair of spectacles: it takes away your ability to focus your eyes without them.

And yet there is a market for such crutches, and they are not going away. In fact, there is some evidence today (at least if social media is to be believed) that AI is enhancing the ability of doctors to detect patterns in MRI scans that are invisible to the human eye, and of accountants to detect patterns of financial behaviour that can spell disaster.

The question, therefore, is not whether students will use the crutches. The question is what happens to people who never learn to walk. That, in my view, is the bigger problem.

It is in this context, I feel, that I can add a little perspective. From a purely methodological standpoint, mine was one of the last cohorts to complete a PhD without ChatGPT around the corner. In that sense, I (and others like me) have experienced how to do things without AI, and also understand how useful AI can be in certain use cases. I still remember the thousands of iterations of writing wrong code only to hit a massive brick wall. I also understand that this makes me somewhat special. Much as Iron Man figured out the icing problem that froze his suit in the first movie, I too have figured out plenty of things the hard way. Things that, at least in my case, had utility: I can spot when a student is going wrong, and use that as an opportunity to teach (or at least I think I can, and do).

Smart Responses, Wrong Game

Let's be fair: faculty are not passive in all this. The responses have been creative. Trap questions designed to make the AI hallucinate. Output-oriented assignments where students must generate something in real time. Oral examinations. Process portfolios. Snapshots of the entire AI conversation.

These are clever. They are also an admission that we have entered an arms race. Faculty design countermeasures; students find workarounds. The game becomes adversarial, which is perhaps the clearest sign that something has gone wrong. Education should not be a contest between those who teach and those who learn.

There is a deeper question lurking here: is there actually a problem, or merely a bias that expects one? We assumed exams measured learning. We assumed lectures transmitted knowledge. We assumed assignments built capability. We assumed students read the pre-reads. The AI stress test is showing us that our assumptions were wrong, at least in part. And when you stress-test a structure, you learn what it was actually made of.

What the Edifice Was Made Of

If parts of our educational structure are crumbling, what is left standing?

Some things, it turns out.

Relationships. The slow work of community participation and interaction. Tacit knowledge transfer—the kind that happens not through readings and lectures but through proximity to practice. Socialisation into a professional identity. The formation of judgment, taste, and character through exposure to people who have those things.

None of these can be provided by AI. None of them can be assessed through assignments. And none of them have been the focus of the mad rush to "add AI" to everything—the MBA in AI, the MBA with AI, the AI for Marketing, the AI for Finance. We are busy bolting new capabilities onto the facade while ignoring what was holding the building up.

The Character Question

Let me sit with something uncomfortable for a moment.

We are now producing students who are more willing to stamp their names on an AI-generated concoction. Not as collaborators with a tool, but as authors of work they did not substantively produce. They will sign their names to analyses they did not think through, to arguments they did not construct, to prose they did not write.

This is not primarily a pedagogical problem. It is a character problem. And character is formed—or deformed—by practice. Every time a student claims credit for cognitive work they outsourced, they are practising something. They are becoming someone (someone they are not).

Perhaps the degree was always partly theatre. But theatre, done well, can transform the actor. The worry is not that students are using scripts. The worry is that they no longer know they are performing. (This is where my batch and I come in—at least we had the chance to learn the difference.)

So Here We Are, in 2026

I have spent much of this piece diagnosing. It is easier to diagnose than to prescribe. But a new year demands at least an attempt at direction.

Here is what I want to try.

First, I want to stop playing the detection game. The arms race is unwinnable, and winning it was never the point. Instead, I want to focus on what cannot be outsourced. This year, I will focus on the conversation in my office, the question that arises mid-discussion, the moment when a student's thinking visibly shifts. These were always where the real teaching happened.

Second, I want to teach students how to use crutches well—and when to put them down. AI is not going away. Pretending otherwise helps no one (particularly not me). But there is a difference between using a tool and being used by it. I want to help students see where the line is, not by prohibition, but by showing them what they lose when they cross it. That means being honest about my own use of these tools, and about the times I have chosen friction over convenience.

Third, I want to raise the stakes for the things that matter. If character is formed by practice, then I need to create spaces where students practise being the kind of professionals they want to become—not just producing deliverables, but making judgments, defending positions, and yes, sometimes failing in ways that teach them something.

None of this is a solution. The edifice is still under stress, and I do not know what shape it will take when the shaking stops. But I know that the parts worth saving—relationships, tacit knowledge, the slow formation of professional identity—were never in the assignments to begin with. They were in the spaces between.

Welcome to 2026. Let's see what we can build there.


I would love to hear what you think. If you are wrestling with similar questions—as a teacher, a student, or someone watching from the outside—drop me a note. These are not problems any of us will solve alone.