The Peter Principle Meets AI: When Smart Tools Mask Real Competence

Matthew Wemyss · 6 min read

The Peter Principle is the idea that people get promoted until they reach a role they cannot actually do. It was a sharp observation when Laurence J. Peter described it in the 1960s. It feels even more relevant now, because generative AI is accelerating the process.

Before I became a teacher at 27, I worked a fair few jobs. I saw what happens when someone brilliant at their role gets promoted into something completely different. Suddenly they're managing people, making decisions, handling pressure they were never trained for. They struggle. Not because they were bad at the old job, but because nobody checked whether they were ready for the new one.

Now, in the classroom, I'm seeing a new version of the same pattern. Students lean on AI to fill gaps in their understanding. Their work looks impressive until you ask a few simple questions. Then it becomes clear they don't really understand what they've handed in.

How AI upgrades the Peter Principle

The original Peter Principle was about human misjudgement. Someone good at Task A gets moved to Role B, which requires entirely different skills. They fail, not because they lack ability, but because we confused competence in one thing with readiness for another.

AI has introduced a new variable. People no longer need to be competent at Task A to look competent at Task A. Generative AI writes code, drafts reports, builds slide decks, summarises research, and replies to clients. It is fast. It makes people look capable. And when we evaluate people on their output without checking their understanding, we set them up to fail at the next level.

When people are assessed on what they produce rather than what they understand, the Peter Principle accelerates. People reach their level of incompetence faster, now with digital assistance.

The result is familiar. Someone's work looks sharp, so they get promoted. The new role requires strategy, judgement, or leadership. The tools cannot carry them in the same way. The gap shows.

AI gets smarter. The question is whether we keep up.

Modern AI tools are not just productivity boosters. They are knowledge agents. Used well, they are extraordinary. Used without reflection, they are dangerous.

The better these tools become, the easier it is to let them handle the hard thinking. And once you stop doing that thinking yourself, your skills begin to fade. You become a task manager rather than a thinker.

That works until you hit a situation where AI cannot help. Something ambiguous. Something messy. Something that requires empathy, judgement, or lived experience. Then you're on your own. And suddenly it's clear that the confidence came from the tool, not the person.

The hidden cost of polished output

There is a cultural dimension here too. In many workplaces, using AI is still quietly judged. People worry it makes them look lazy, so they use it without saying so. That makes it harder to see where human work ends and AI work begins.

Managers see polished deliverables and assume competence. They do not know how the result was produced. The person might have leaned heavily on AI and might not be able to reproduce the same quality independently.

It gets worse when managers themselves start using AI to write reviews, summarise feedback, or track performance. Now AI is assessing work that was created with the help of AI. The entire evaluation loop becomes automated. It looks clean, but it is no longer connected to what anyone truly understands.

When that person gets promoted, the gap shows up fast.

Why this is an education problem, not just a workplace one

If we are rebuilding curricula to prepare students for a world of AI tools, we need to talk seriously about how we verify understanding.

It is not enough to teach students about AI. It is not enough to teach tool fluency. Students still need to know things. Not just how to get AI to generate the right answer, but how to judge that answer, critique it, and go beyond it.

Otherwise, we produce a generation that creates polished content quickly but has no depth behind it. That is a brittle kind of knowledge. It falls apart the first time someone has to make a judgement call that no training data has prepared them for.

What schools should assess instead

We absolutely need to update what we teach. Data literacy, digital tools, ethics, collaboration with AI, new forms of creativity. All of this belongs in the curriculum.

But we must also keep the part where we make sure people actually know what they are talking about. That is not something you can outsource to a chatbot.

Here is where to start:

  • Judge the thinking, not just the outcome. Ask how someone approached the task. What options they considered. Why they made the choice they did.
  • Set challenges that cannot be delegated to AI. Give students complex, human problems. Let them show how they process ambiguity, not just tidy up a prompt.
  • Teach students to work with AI, not depend on it. Build skills in questioning, checking, and refining AI output. Train for oversight, not just efficiency.
  • Offer different routes to success. Not everyone wants to lead people. Recognise and reward those who excel in technical or specialist roles without forcing them into management pathways.
  • Normalise transparency. Make it safe to say "I used AI here." When that is normal, you can actually see where human understanding begins and ends.

Better assessment starts with better questions

Open-ended questions. Real-world challenges. Tasks where the student needs to think for themselves rather than repackage something that sounds good. And the requirement to defend their thinking, not just present it.

If we let AI take over both the learning and the testing, we lose the thread entirely.

The Peter Principle is alive and well. It just has artificial help now. Unless we build systems that value depth, judgement, and understanding, we will keep promoting people based on the illusion of competence rather than the real thing.

Two things you can do this term

1. Add a "defend your thinking" step to one AI-assisted task this week. After students submit work, give them five minutes to explain, without notes, the reasoning behind their key claims. The students who can do this comfortably have genuine understanding. The students who cannot are telling you something important.

2. Make AI use visible. Ask students to write one line at the top of any AI-assisted submission: "The AI contributed ___ and I contributed ___." You are not punishing AI use. You are making the boundary visible so you can assess what the student actually knows.

Embrace the tools. Use them well. Just don't confuse the shine of AI output with the substance of human knowledge.

Knowing still matters. It always has. It always will.


Matthew Wemyss is an AIGP-certified AI in Education consultant and practising school leader. Book a discovery call to discuss AI literacy programmes for your school.
