Every September the same headlines return. Students are cheating with AI. ChatGPT is destroying academic integrity. The education system is in freefall.
Then someone quotes one student saying "I used ChatGPT to cheat on nearly every assignment," and suddenly the entire profession acts as though we've discovered fire.
But here's the thing the headlines miss: AI didn't create a cheating epidemic. Decades of survey data say so. And if we keep blaming the tool, we'll keep ignoring the system that makes cheating feel necessary.
Cheating rates haven't actually changed
Victor R. Lee, an education researcher at Stanford, addressed this directly in a 2025 piece for Vox. His conclusion was clear. While the method of cheating has shifted, the rate hasn't budged much.
This shouldn't surprise anyone. Donald McCabe's landmark studies from the 1990s and 2000s found massive self-reported cheating rates across higher education, with up to 96 percent of business majors admitting to some form of academic dishonesty. Multiple surveys have consistently placed high school cheating rates between 60 and 80 percent, well before generative AI entered the picture.
In Lee's more recent high school studies, the percentage of students using AI to complete entire assignments sat at around 11 percent in 2023, climbing to approximately 15 percent by 2025. That's right in line with historical patterns. AI didn't cause a spike. It became another option in a well-stocked toolbox that already included Course Hero, Chegg, and the kid who sits near the window.
AI didn't invent cheating. It modernised it.
We're asking the wrong question
What's changed is the conversation. We've stopped asking why students cheat and become fixated on how they do it. That shift matters, because it leads us toward detection and punishment rather than prevention and design.
Before I go further, I should say I don't even love the word "cheating" in this context. It's too loaded. Too black and white for something that is clearly a murky grey. Is it cheating when a student pastes their introduction into ChatGPT for feedback? What about using it to explain a concept they didn't grasp in class? Where exactly is the line?
From my own classroom experience, this idea that students are using AI to cheat on everything just doesn't match reality. I've got Year 13 students who use chatbots I've built myself. We've talked openly about what those tools are for and what they're not for. One student tells me he uses ChatGPT at home, not to generate code or copy answers, but to explain things he doesn't understand. To help him make sense of something he missed the first time round.
I don't think that's cheating. Not even close. That's the same as searching Stack Overflow or Googling a concept until it clicks. That's not dodging the work. That's doing the work in a way that fits how students learn now.
What cheating actually looks like at 11pm
Let's change the lens and talk about what it feels like to be the student in that moment.
It's 11.37pm. Maya's laptop is overheating again. Her eyes are burning. She hasn't had dinner. She hasn't had a break. She has a biology assignment due at midnight, an unfinished history essay due tomorrow, and a maths test she hasn't even looked at yet. Her phone buzzes, but she ignores it. No one is awake to help her.
Her cursor blinks on a blank Google Doc. She starts typing, stares, deletes, retypes. Still not good enough. She's been told her writing is weak. Too simple. Too casual. Once, her teacher implied it sounded like ChatGPT wrote it, even though it hadn't. That accusation stayed with her.
So tonight, just to be safe, she pastes her intro into ChatGPT. Just for feedback, she tells herself. Maybe a stronger opener. Something more formal. Something that sounds like the kind of student who gets As. Thirty seconds later, she's pasting in the rest. Not because she doesn't care. Because she cares too much and has too little time.
She edits, tweaks the phrasing, rewords sentences. She hits submit at 11.58pm.
Did she cheat? Maybe. But that's not the question that matters.
Cheating is a systemic symptom, not a character flaw
Research consistently shows that cheating is most strongly linked to pressure, fear of failure, and a lack of intrinsic motivation. A 2021 meta-analysis by Krou, Fong, and Hoff found that extrinsic motivation (doing schoolwork for rewards and punishments) correlates positively with cheating, while intrinsic motivation and academic self-efficacy correlate negatively.
Students aren't lazy or deceitful. They're overwhelmed, under-supported, and making calculated choices in high-pressure environments. The data from the Josephson Institute, McCabe's extensive survey work, and a 2025 Hanover Research report all point in the same direction:
- 59 percent of high schoolers admitted to cheating on a test in the last year
- 95 percent of students in one study admitted to cheating in some form
- 40 percent of students have used AI without permission on assignments
Most students don't cheat because they want to outsmart the system. They cheat because the system is exhausting. They're tired, overwhelmed, and desperate to avoid failure in environments that reward performance over learning.
That pressure starts early and builds fast. Parents want results. Teachers want order. Students want to survive. It becomes a game of managing deadlines and dodging burnout. In that game, shortcuts don't just feel tempting. They feel necessary.
The double standard students can see clearly
One of the hardest things for students to reconcile is the hypocrisy. They see their teachers using AI. Their parents use it for work. Sometimes it's actively encouraged in class. But then they're punished for using it the wrong way, even when nobody has clearly explained what the "right" way is.
A Stanford study found only 10 percent of high school teachers had clear AI policies. So students are left to figure it out for themselves. Is it acceptable to use AI to check spelling? What about outlining an essay? Or translating a passage?
The rules are muddy. The expectations are unclear. And in that fog, students fall back on instinct. Or worse, they guess wrong and get punished for it.
What schools should actually do about it
Blaming AI is easy. Fixing the deeper problems is harder, but the research points to clear starting points.
Define acceptable use, explicitly. Students need to know exactly what is and isn't permitted. Not a vague statement in a policy document, but a clear, practical guide they can apply to any assignment. "You may use AI to check your understanding of a concept. You may not use it to generate text you submit as your own." Specific, testable, fair.
Redesign assessments to reward thinking, not output. If an assignment can be completed entirely by ChatGPT, that's a design problem, not a student problem. Process-based assessments, verbal defences, reflective journals, and in-class demonstrations are all harder to outsource and more revealing of genuine learning.
Reduce the pressure that drives cheating in the first place. This means looking honestly at homework volume, deadline clustering, and the relentless emphasis on grades over growth. Maya didn't turn to ChatGPT because she's dishonest. She turned to it because the system gave her too much to do and too little support to do it.
Talk about AI use openly. When students feel they can ask questions about AI without fear of punishment, they're far more likely to use it responsibly. Silence and ambiguity create the conditions for misuse.
The mirror we don't want to look into
Punishing students for using tools we've failed to teach them how to use responsibly doesn't solve anything. Neither does pretending AI will go away.
Maya is not the exception. She's the norm. And we can't keep ignoring that.
Cheating is the symptom. The system is the sickness. AI is just the latest tool in an already broken loop. If we want different outcomes, we need to stop fixating on detection and start redesigning the conditions that make cheating feel like the only option.
References
- Lee, V.R. (2025). "I Study AI Cheating. Here's What the Data Actually Says." Vox.
- McCabe, D.L., Butterfield, K.D. and Trevino, L.K. (2012). Cheating in College: Why Students Do It and What Educators Can Do About It. Johns Hopkins University Press.
- Krou, M.R., Fong, C.J. and Hoff, M.A. (2021). Achievement Motivation and Academic Dishonesty: A Meta-Analytic Investigation. Educational Psychology Review, 33, pp. 427-458.
- Hanover Research (2025). Academic Integrity in the Age of AI.
- Josephson Institute (2012). Report Card on the Ethics of American Youth.
Matthew Wemyss is an AIGP-certified AI in Education consultant and practising school leader. Book a discovery call to discuss assessment design for the AI era.