When Year 13 Students Start Using AI Independently: A Teacher's Observations

Matthew Wemyss

I noticed it about three weeks into the new academic year. My Year 13s were using AI, and not in the way I'd taught them.

I'd spent the previous year running structured sessions on prompt engineering, source verification, and responsible AI use. I thought I'd given them a solid foundation. I had. But what they were doing with it now had moved beyond anything I'd planned for. They weren't following my framework anymore. They were building their own.

That's either a success or a problem, depending on how you look at it.

What I actually observed

This isn't a research paper. It's a set of observations from one teacher watching one cohort over the first few weeks of a school year. Take it for what it is.

They're using AI as a research starting point, not an answer machine. The students who'd internalised last year's training weren't asking ChatGPT for essay answers. They were using it to map out a topic before diving into proper sources. One student described it as "getting the shape of the argument first," then going to journal articles and textbooks to fill it in. That's not what I taught her to do. It's better than what I taught her to do.

They're prompting differently. I'd taught them structured prompts with clear instructions. What I was seeing instead was something more conversational. Back-and-forth exchanges, five or six messages deep, where the student was steering the conversation towards increasingly specific territory. Less "write me a summary of X" and more "I think this, push back on it."

They're cross-referencing without being told to. Several students were running the same query through multiple AI tools and comparing outputs. Not because I'd set it as a task. Because they'd figured out that different models give different answers, and the gaps between those answers are where the interesting questions live. I hadn't taught them that. They'd worked it out.

They're also making new mistakes. I'll come back to this.

Why this matters for how we teach AI

When I run AI training sessions, I often hear the same concern from teachers: if we teach students to use AI, won't they just become dependent on it? It's a reasonable fear. And the honest answer is: some will.

But what I was seeing with this Year 13 cohort was the opposite of dependency. It looked more like fluency. The kind of confident, adaptive tool use that happens when someone has enough foundational knowledge to improvise.

Think about how students learn to write essays. You teach them a structure. Introduction, three paragraphs, conclusion. For a while, every essay looks identical. Then, gradually, the good ones start breaking the rules in productive ways. They merge paragraphs. They open with a question instead of a thesis statement. They've internalised the structure enough to move beyond it.

That's what I was seeing with AI. Students who'd learned the rules, practised them, and were now departing from them with purpose.

The goal of AI training isn't compliance with a framework. It's building the judgement to know when to leave the framework behind.

The new mistakes are different from the old ones

Here's the part that keeps me honest. The students weren't getting everything right. But the errors had shifted.

Last year's mistakes were basic: accepting AI output without checking, using generated text verbatim, failing to verify claims. Standard stuff.

This year's mistakes are more subtle:

  • Over-reliance on AI for structuring their thinking. Some students were letting the chatbot organise their arguments before they'd done any thinking of their own. The AI-generated structure was clean and logical, but it wasn't necessarily the right structure for what the student actually wanted to say. They were fitting their ideas into the AI's framework rather than building their own.
  • Confusing breadth with depth. AI is excellent at giving you a wide overview quickly. Some students were mistaking that overview for understanding. They could talk about a topic fluently without actually knowing it deeply. The surface was polished, but there wasn't much underneath.
  • Trusting conversational AI too much. The students who'd moved to a back-and-forth style were getting more nuanced responses, but they were also developing a kind of trust in the conversation itself. Because the AI was engaging thoughtfully with their ideas, they assumed its pushback was as rigorous as a teacher's. It isn't.

These are harder mistakes to spot. A teacher skimming the final product might not catch them. They only become visible if you watch the process.

Three things I'm changing in response

I'm adjusting my approach based on what I've seen. Nothing dramatic. Small shifts.

First, I'm spending more time on metacognition. Not "how to use AI" but "how to notice what AI is doing to your thinking." I want students to be able to articulate when the tool is helping them think and when it's thinking for them. That distinction matters, and it's harder to see than you'd expect.

Second, I'm building in more comparison tasks. If students are already cross-referencing AI outputs, I can formalise that. Give them two different AI-generated responses to the same question and ask them to evaluate which one is stronger and why. That forces critical engagement without banning the tool.

Third, I'm watching process more than product. The final essay or report tells me very little about how AI was used. I need to see the conversation logs, the draft stages, the moments where the student accepted or rejected AI suggestions. This is harder to assess, but it's where the real learning is.

What this means for your school

If you're teaching AI skills to students, the question isn't whether they'll move beyond what you've taught them. They will. The question is whether you've given them enough foundation to move in a productive direction.

My early observations suggest three things:

  1. Structured AI training works. The students who'd done it last year were visibly more capable than those who hadn't. The foundation matters.
  2. Independent AI use looks different from taught AI use. Don't panic when students stop following your framework. Watch what they're doing instead. It might be better.
  3. The mistakes evolve. As students get more sophisticated with AI, their errors get more sophisticated too. Your assessment methods need to keep pace.

This is one cohort, one school, one teacher's observations at the start of a school year. I'll revisit this as the year progresses. But if you're wondering whether it's worth investing time in structured AI training for your sixth form, my early answer is yes. Not because students will follow your programme forever, but because it gives them the judgement to build something better.


Matthew Wemyss is an AIGP-certified AI in Education consultant and practising school leader. Book a discovery call to discuss preparing your sixth form for AI.
