Agentic AI is AI that doesn't wait for instructions. It plans, executes, and adjusts on its own. It breaks a goal into sub-tasks, calls other tools, checks its own output, and iterates until the job is done. If traditional AI is a calculator, agentic AI is an intern who reads the brief, does the research, writes the first draft, and emails it back before you've finished your coffee.
That should excite educators. It should also worry them.
What agentic AI means for classrooms
The shift from "AI as a tool you prompt" to "AI as an agent that acts" fundamentally changes the classroom dynamic. A student using ChatGPT to help reword a paragraph is directing the tool. A student using an agentic system that researches a topic, outlines an essay, drafts it, checks it against a rubric, and revises it automatically is watching the tool direct itself.
The student's role in that second scenario is closer to project manager than learner. They approve output. They don't produce it. And the cognitive work that used to happen between "I've been set this task" and "I'm handing this in" gets compressed into a few clicks.
This isn't hypothetical. Agentic workflows are already embedded in tools students can access. Auto-complete that anticipates the next sentence. Research assistants that summarise sources and generate citations. Writing tools that restructure arguments on the fly. Each one removes a small piece of friction. Together, they remove the productive struggle where learning actually happens.
When the algorithm handles the thinking, the student handles the clicking. That's not learning. That's supervising.
Cognitive offloading: the risk that scales silently
Cognitive offloading is what happens when the brain uses an external tool to reduce mental effort. Writing a shopping list. Using a calculator for long division. Perfectly reasonable in most contexts.
The problem is when the offloading becomes the default. When the brain stops doing the hard work because the tool makes it unnecessary. AI supercharges this tendency because its output looks polished, sounds confident, and arrives instantly. The student doesn't experience the gap between "I don't know this" and "now I understand." The gap gets skipped entirely.
Research consistently shows that students who go to AI first complete more tasks but score lower on unassisted tests. A 2024 study of secondary maths students found that the AI-first group scored approximately 17% lower on independent post-tests than the control group, despite completing significantly more practice questions. Speed improved. Mastery didn't.
The pattern scales silently because the visible output improves even as the invisible learning declines. The essay looks better. The student understands less. And because schools tend to grade the product rather than the process, nobody notices until exam season.
Skill erosion: what students lose when AI does the middle
Every complex skill has a messy middle. The part between receiving the task and producing the output. Planning, researching, evaluating sources, structuring an argument, drafting, revising, rethinking. That middle is where the learning lives.
Agentic AI is particularly good at handling the middle. It doesn't just answer questions. It plans the approach, executes the steps, and delivers a finished product. Which means the cognitive processes that build genuine capability (planning, monitoring, self-correction) get outsourced to the machine.
I see this in my own students. They haven't lost the ability to plan, evaluate, or reflect. They skip those steps because the shortcut exists. And because the output still looks good (often better than before), the erosion is invisible. The grade goes up. The understanding doesn't follow.
This is not a technology problem. It's a design problem. The question isn't whether students should use AI. It's whether the way they use it preserves the cognitive work that creates durable learning.
Five practices to keep humans in the loop
These aren't theoretical principles. They're things I've tested in my own classroom and refined through working with schools. Each one targets a specific failure mode of AI-assisted learning.
1. Require thinking before the tool
The simplest and most effective intervention. Students attempt the task, or at least the planning stage, before they touch any AI tool. The initial struggle is where the learning happens. Not in the polished output, but in the effort to produce something imperfect.
This works because of a well-documented principle in cognitive science called desirable difficulty. Learning that feels hard produces stronger retention than learning that feels smooth. If AI removes the difficulty, it removes the learning.
In practice: set a visible checkpoint. "Draft your argument by hand first. Then use AI to challenge, extend, or refine it." The order matters more than the tool.
2. Make AI the opponent, not the oracle
Most students treat AI as a source of answers. Flip that. Make it a source of challenge.
Ask students to argue with the AI. Have it take the opposing position on their essay question. Get them to interrogate its reasoning, spot its gaps, and identify where it's oversimplifying. This transforms a passive tool into an active thinking partner.
I've had students use a chatbot as a debate opponent on ethical questions where there's no clean answer. The quality of their reasoning improved measurably, not because the AI taught them anything, but because the friction of disagreement forced them to sharpen their own position.
When a student has to defend their thinking against a fluent, confident challenge, they build exactly the kind of critical resilience that AI threatens to erode.
3. Build structured verification into every AI task
Verification is the skill that separates a student who directs AI from one who drifts with it. But verification doesn't happen naturally. The Anthropic AI Fluency Index found that in only 30% of conversations did users tell the AI how they wanted it to interact with them. Seven out of ten conversations simply ran on whatever the AI offered by default. If adults default to that kind of passivity, students need explicit structure to develop the habit of checking.
Three checkpoint questions work well here:
- Before: "What am I trying to learn, not just produce?"
- During: "Is this making me think, or making me skip thinking?"
- After: "Could I do this without the AI now?"
Print them. Put them on the task sheet. Make them visible enough that students stop treating verification as optional.
4. Assess the process, not just the product
If you only grade the output, you incentivise outsourcing the thinking. The student who uses AI to generate a polished essay and the student who wrestles with the question for an hour produce work that can look identical on the page. The learning underneath is completely different.
Shift some of the assessment weight towards process evidence:
- Drafting logs where students submit their planning notes, rough drafts, and revision history alongside the final piece
- Verbal defence where students explain their argument and reasoning in a short conversation after submission
- The seam question where students write one sentence: "The AI did ___ and I did ___"
The students who can draw that line clearly are developing metacognitive awareness. The students who can't are telling you something important about where the learning isn't happening.
5. Debrief AI use as a class routine
Crew Resource Management in aviation transformed cockpit safety not through more checklists but through structured debriefs. After every flight, the crew reviews what happened: what worked, what didn't, what they missed. The loop closes. Learning is built into the routine.
The classroom equivalent takes five minutes at the end of an AI-assisted task. Not "what did the AI get right?" but "where did we challenge it and where did we just accept it?"
Over time, students start noticing their own patterns of deference. They begin to see the authority gradient between themselves and the machine, not as an invisible force, but as something they can name and resist.
The debrief normalises scepticism. It makes questioning AI a shared expectation rather than an individual act of courage.
The question behind all five practices
Every one of these practices is asking the same question: who is doing the thinking?
Agentic AI is getting better at doing the thinking for students. That's not inherently bad. But if schools don't deliberately design tasks, routines, and assessment structures that keep the cognitive work with the student, the default will be delegation. Not because students are lazy. Because humans, all of us, naturally offload effort to the most efficient tool available. That's how brains work.
The goal isn't to restrict AI. It's to ensure that when students use it, they remain the ones directing the intelligence rather than being directed by it.
Use AI. Build it into your teaching. But design the task so the thinking still belongs to the student.
Matthew Wemyss is an AIGP-certified AI in Education consultant and practising school leader. Book a discovery call to discuss human-centred AI implementation in your school.