Why Students Trust AI Less But Rely On It More

Matthew Wemyss · 10 min read

Last year, a student came to me with two printed pages of revision topics for his A Level. He had highlighted every topic in one of two colours. Green meant AI had explained it. Yellow meant he was saving it for me.

Most of the list was green.

I was not offended. I was curious. He had made a deliberate decision about where to send each question. He had looked at each topic and asked himself: can I use AI for this, or do I need a person? That is, by any measure, a form of delegation. And here is the part that complicates the story most people want to tell about AI in schools: it worked. He passed. The green topics stuck. The AI did the job he needed it to.

So what exactly is the problem?

What the grade did not show

The problem is not that AI failed him. The problem is that success made something invisible.

He did not need me less. He needed me differently. The yellow topics were the ones where he needed something the machine could not give him. Not information. Not explanation. Something closer to dialogue, to challenge, to the experience of being known well enough that the question could be shaped around him rather than around a generic prompt.

The green topics worked. The yellow topics worked. The grade made them equal. And that equality hid the only question that matters: why did he choose a machine that doesn't know him over a teacher who does?

I have been thinking about this for months. Not as a single moment, but as a pattern. I am the "AI guy" in my school. My students know I am not against it. I have built chatbots for them, run workshops on prompting, integrated AI tools into lessons. They do not hide their use from me. And that means I see things other teachers might not.

They laugh at the five-legged animals AI generates. Then they ask if AI can write their presentation.

They know it fails. They have seen it generate images with extra fingers, citations that do not exist, arguments that collapse under the lightest pressure. They scoff at it. And then, ten minutes later, they use it for the work that counts. Not because they trust it. Because it is there, it is fast, and it does not judge them.

Trust is not the same as reliance

This is where the research gets interesting, and where most school AI policies get it wrong.

The standard assumption, inherited from engineering psychology, is that people rely on what they trust. If you trust the system, you use it. If you do not, you don't. Decades of research into automation in aviation, manufacturing, and healthcare were built on this premise. Parasuraman and Riley mapped it in 1997. Lee and See refined it in 2004. Hoff and Bashir expanded it in 2015. The models are robust, well-tested, and largely correct in professional contexts.

But classrooms are not cockpits. And students are not pilots.

What I see in my classroom, and what recent research is starting to confirm, is that students frequently exhibit low trust and high reliance at the same time. They do not believe the AI is reliable. They use it anyway. That gap between what they know and what they do is not ignorance. It is strategy.

Four forces stronger than trust

If the goal is students who can direct their own thinking, we need to understand what stops them. It is not a lack of knowledge about AI. It is four forces that make self-direction feel harder than surrender.

Social safety. Asking a teacher means admitting you do not understand. Asking an AI means no one sees you struggle. The machine does not judge, does not remember, does not tell your parents. For a teenager, that is not a small thing. It is everything. My students do not go to AI because they think it is better than me. They go because it is safer than me.

Cognitive load. Trust is not driving the decision. Effort is. The student is not thinking "this AI is reliable." They are thinking "this AI is available and I am tired." Reliance becomes a resource management strategy, not a trust judgement. When the assignment is due tomorrow and five other subjects are pressing, the rational move is to delegate to whatever responds fastest. The machine always responds fastest.

Accountability diffusion. If the AI gets it wrong, the student has a shield: "the AI told me." If the student gets it wrong on their own, there is nowhere to hide. Relying on AI creates a psychological buffer between the student and the outcome. It is not trust. It is self-protection.

Fear of falling behind. This is the one that surprised me most. Research across 47 universities and over 1,400 students found that the primary driver of student AI adoption is not curiosity or genuine interest. It is what psychologists call introjected regulation: the internal pressure that comes from guilt, anxiety, and the feeling that everyone else is using it and you will be left behind if you do not. They are not excited about AI. They feel trapped by it.

They are not choosing AI because they believe in it. They are choosing it because the alternative feels riskier.

You are solving the wrong problem

If reliance is not driven by trust, then teaching students to "evaluate AI critically," which is what most school policies recommend, misses the point entirely. You are treating the symptom. The cause is that asking for human help feels riskier than asking a machine.

Which means the problem is not the AI. It is the classroom culture.

A student who is afraid to be wrong in front of a teacher will always prefer to be wrong in private with a machine. No amount of AI literacy training changes that. What changes it is making the classroom a place where struggling visibly is normal, where asking for help is rewarded, and where the teacher is more accessible than the chatbot. Not technically, but emotionally.

I keep coming back to that green and yellow list. The student did not use AI because he trusted it more than me. He used it because I could not be there at midnight when the revision panic set in. The AI could. The question is not how to compete with that availability. The question is how to make the yellow moments, the ones that require a human, so clearly valuable that students learn to seek them out.

The four modes: a diagnostic for your school

Parasuraman and Riley identified four modes of human-automation interaction. I have started using them as a diagnostic in my own thinking about students, and they are more useful than any AI policy I have read.

  1. Use. The student makes a conscious, reasoned choice to engage AI for a specific task. They can explain why. They know what they are keeping for themselves. This is what we want.
  2. Misuse. The student over-relies. They stop checking, stop thinking, stop questioning. The work looks polished. The learning is hollow. This is the mode most school leaders worry about, but it is not the only one.
  3. Disuse. The student rejects AI entirely, usually after a bad experience. They saw the five-legged animal and decided the whole thing is broken. They are leaving capability on the table out of distrust.
  4. Abuse. This one is on us, not the students. It is when we set tasks where AI does the thinking and the student does the formatting. We designed the delegation badly. The system is not being misused. It is being poorly deployed.

The audit question for every school leader: which of these four modes are most of your students in? And which of them did you design for?

When attitude is misdirected, not absent

In my previous article on classroom culture, I argued that the "a" in Noller's creativity equation (attitude) determines everything. If it is zero, the whole thing collapses. But what happens when the attitude is not zero but misdirected? When a student actively, strategically chooses to rely on something they do not trust, not out of passivity, but out of self-preservation?
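For anyone who has not seen it, Noller expressed the idea symbolically, roughly as C = fa(K, I, E): creativity as a function of knowledge, imagination, and evaluation, with attitude sitting outside the brackets and governing the whole thing.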

That is not a zero attitude. That is an attitude pointing in the wrong direction. The "a" outside the brackets is not absent. It is misaligned. And misaligned is harder to fix than absent, because from the outside it looks like engagement.

A student who directs their own intelligence does not just manage AI. They manage themselves. They recognise when they are reaching for the machine not because it is the right tool but because it is the easy one. That is a harder kind of self-awareness than any AI literacy curriculum teaches.

The culture we build around students either enables self-direction or quietly extinguishes it. A classroom where struggle is punished and fluency is rewarded will always produce students who let the machine direct. A classroom where doubt is welcomed will produce students who direct the machine, and themselves.

Two things you can do this term

1. Make human help feel safer than AI help. Not technically. That ship has sailed. Emotionally. If your classroom rewards visible struggle and treats questions as signals of engagement rather than weakness, students will come to you before they go to the machine. If it does not, they will not. No AI policy changes that. Only culture changes that.

2. Ask students to name the mode. After any AI-assisted task, students identify which of the four modes they were in. Not as a judgement, but as a reflective habit. "I was in Use because I chose to get it to summarise the sources so I could focus on the argument." Or, honestly: "I was in Misuse because I was tired and I just let it write the conclusion." The naming is the skill. Once they can see the pattern, they can change it.

The real question schools should be asking

The research on trust calibration came from engineering psychology: aviation, manufacturing, healthcare. The models are rigorous. But the insight that matters for schools is simple. We have been asking whether students trust AI too much. The real question is why they rely on it even when they do not trust it at all.

That student with the green and yellow list was not confused about AI. He had it figured out better than most adults I know. What he needed was not more scepticism. It was a school that understood the difference between the green questions and the yellow ones, and designed for both.

Because the real question was never whether students can use AI. It was whether they can direct their own thinking clearly enough to know when AI helps and when it hides the gap.


References

  • Parasuraman, R. and Riley, V. (1997). Humans and Automation: Use, Misuse, Disuse, Abuse. Human Factors, 39(2), pp.230-253.
  • Lee, J.D. and See, K.A. (2004). Trust in Automation: Designing for Appropriate Reliance. Human Factors, 46(1), pp.50-80.
  • Hoff, K.A. and Bashir, M. (2015). Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Human Factors, 57(3), pp.407-434.
  • Noller, R.B. (1977). Scratching the Surface of Creative Problem Solving: A Bird's Eye View of CPS. Buffalo, NY: DOK Publishers.

Matthew Wemyss is an AIGP-certified AI in Education consultant and practising school leader. Book a discovery call to discuss building healthier AI habits in your school.
