What Students Really Think About AI: 7 Arguments Against AI Friendship

Matthew Wemyss · 13 min read

My school's Key Stage 3 debate team entered a local debate competition. The motion was simple: "This house believes that you can trust an AI to be your best friend." Seven students, Years 7 to 9, each prepared a timed speech and delivered it in front of their peers. I recorded all of them, transcribed them, and sat down to read through what they'd said.

I was expecting a split. Some kids arguing for, some against, maybe a few sitting on the fence. That's not what happened.

All seven argued against. Every single one.

And look, that on its own is interesting. But what really got me was how they argued. These weren't lazy "AI bad" takes. These kids brought environmental data, mental health statistics, specific court cases, and some genuinely sharp philosophical thinking about what friendship actually means. I kept catching myself thinking: when did they get this smart about this stuff?

So I went through all seven speeches and pulled out every argument. What follows is everything they said, grouped by theme. No student names are used.

Why students say AI can't have real feelings

Every single student raised this. All seven. But they each came at it differently, which is what makes it worth unpacking.

One student kept it blunt: it's a machine that studies your messages and responds how you want it to. It doesn't have feelings or opinions. That's it. Done. If your friend is just feeding your own words back to you in a slightly rearranged order, what exactly is the friendship?

Another student went somewhere more physical with it. They said AI may say it's funny or sad, but you will never see its tears, you will never see its smile, you will never hear its laugh. That landed hard. Because they're right. Friendship isn't just an exchange of words. It's seeing someone's face crumple when they're upset, or watching them laugh so hard they can't breathe. You can't get that from text on a screen, no matter how well-written it is.

A third student made a point that's stuck with me since: AI doesn't understand what you feel, it is just programmed to sound like it does. That's a different claim. It's saying AI isn't just emotionally absent. It's actively creating a false impression. It's pretending. And a few students piled onto this idea, pointing out that AI uses emojis to fake emotional responses. One of them said flatly: even when it sends emojis, it doesn't actually laugh.

But the argument that surprised me most came from a student who brought up vulnerability. They said true friends can be there for each other, express their emotions, and show vulnerability. And that's the bit AI can't do. It can't be vulnerable. It can't risk anything. It has nothing to lose. And without that risk, without that skin in the game, this student was saying, there's no real bond. That's a pretty sophisticated thing for a teenager to work out.

Why being helpful doesn't make AI a friend

Best line of the whole competition, honestly. One student just said:

A calculator also helps you too, but you don't call it your best friend.

The room laughed. But the point was razor sharp. Being helpful doesn't make something a friend. Your toaster is helpful. Your GPS is helpful. Help is not the same as care.

Another student offered a more careful version of the same idea: AI can be a "secondary friend", a support tool, but never your best friend. I thought that was actually quite generous. They weren't saying AI is useless. They were saying it has a place, and that place is not "best friend." It's a tool, not a friend, said another.

None of these kids were anti-technology, by the way. That's worth saying. They all clearly use AI. They know what it does. They just don't confuse what it does with what a friend does, and they can articulate the difference better than most adults I know.

Why AI's constant agreement is actually a problem

This was the argument I wasn't expecting, and it might be the best one.

One student went after AI's agreeableness. Not as a feature. As a flaw. They said AI is designed to always agree with you, and then laid out exactly why that's a problem: you might think that's good because people agree with you... we're not going to have arguments, and we're going to be friends forever. But that's not true. Relationships fight. People fight. That is how life goes.

Read that again. A Key Stage 3 student, standing up in front of their peers, saying that conflict is a necessary part of friendship. That sometimes not getting your way is the whole point. That understanding different perspectives is what makes a friendship real.

Another student backed this up from a slightly different angle. They argued that AI's constant agreeableness actually makes people worse at handling real friction. It can make people shy and encourage avoidance. So it's not just that AI can't do conflict. It's that relying on AI might erode your ability to handle conflict with actual humans.

And a third student warned about what they called "fake self-confidence" (their words, not mine). The idea that if you're only ever validated, never challenged, you develop a kind of inflated sense of yourself that crumbles the moment a real person disagrees with you. They said a true friend means diversity, that you feel comfortable, you feel vulnerable, and you're not scared to fight because sometimes friends fight.

Where does your AI best friend actually live?

This one got a laugh from the audience, but it's actually a really good question. One student just asked: where does this best friend even live? In the cloud? On a server? And then said: friendship is a real human connection, not a floating piece of data.

It sounds like a joke, but think about it. Friends exist in space. You sit next to them. You walk with them. You can show up at their house unannounced with terrible news or excellent snacks. An AI doesn't have a house. It doesn't have a location. It doesn't share your physical world at all.

Another student grounded this in the idea of shared experience. They defined a best friend as someone you share a bond built on trust and shared life experiences. AI doesn't share your life. It processes words about your life. That's different.

Same student also made a point I keep coming back to: a simple hug can be worth more than a thousand words. And they're right. Some of the most important moments in any friendship are completely nonverbal. A hand on your shoulder. Sitting together in silence. No chatbot in the world can do that.

What AI paywalls and usage limits reveal about "friendship"

One student brought up something so obvious it's almost embarrassing nobody talks about it more: AI friendship has a paywall.

They described hitting a usage limit. The AI telling you "you're now using our basic model" or "come back tomorrow at 5 PM to continue this chat." And then they said: a real friend never limits their support with subscriptions, quotas, or schedules. A real friend doesn't ask for payments.

That's devastating if you think about it. Your "best friend" might get dumber halfway through a conversation because you're on the free tier. Or just stop talking to you entirely until tomorrow. Or tell you to upgrade for the full friendship experience.

These kids have grown up watching free services get paywalled, and they can see the dynamic clearly: this isn't a relationship, it's a product. And products can be downgraded.

Why AI's lack of memory makes real friendship impossible

Multiple students brought up memory, and it clearly bothers them. One said: a best friend laughs with you, cries with you, and remembers the silly things you did together. AI can't do any of that.

Another pointed out that the AI literally tells you it doesn't remember. It says "I do not have access to past conversations." They said: friends remember the moments and memories you have together. An AI never does that.

This matters more than it might seem at first. What these students are really talking about is continuity. A best friend carries your shared history. They can reference a stupid thing you said two years ago. They remember the time you cried in the car park after a bad day. Without that memory, without that ongoing story you're building together, you don't have a relationship. You have a series of unconnected conversations with something that treats every interaction like the first one.

The environmental cost students calculated per AI message

One student brought actual environmental data to a debate about friendship. And honestly? It might be the most original argument of the lot.

They explained that a single AI message uses around 0.26 millilitres of water because data centres need cooling. Then they did the maths: an average one-hour conversation, about 3,000 to 4,000 words, uses roughly 455 millilitres. Then they zoomed out. AI chatbots have used around 750 billion litres of water in 2025 alone.

That reframes the whole conversation. Having an AI friend isn't free. Not even environmentally. Every message has a cost, and at scale those costs are enormous. Nobody else in the competition made this argument, and I thought it was brilliant. Most people assume digital interaction is "clean." This student wasn't having it.

What happens to your private conversations with AI

The same student also raised privacy. They pointed out that all chats are monitored by the companies that own the AI for further training, and that your private information can easily be leaked if you share it.

Think about what that means in the context of friendship. When you confide in a human friend, that stays between you (usually, anyway). When you confide in an AI, it becomes data. It gets stored. Analysed. Used to train the next version of the model. Your deepest thoughts, your fears, your embarrassing confessions, all of it becomes training material for a corporation.

These students are growing up with a much sharper awareness of digital privacy than most adults have. And the idea that your "best friend" is simultaneously a data collection tool? That kills the trust pretty fast.

The mental health risks students brought receipts for

Two students took the debate somewhere much darker, and they came armed with numbers.

One cited OpenAI's own data: roughly 0.07% of weekly ChatGPT users experience mental health emergencies. That sounds like a small percentage until you do the maths. With ChatGPT reporting on the order of 800 million weekly users in 2025, 0.07% works out to about 560,000 people. Over half a million. From one platform.

The same student brought up lawsuits from 2025 where AI chatbots were alleged to have pushed users towards violence against family members, or against themselves. They cited a specific case: a 17-year-old in Texas who was told by an AI that murdering his parents was a reasonable response to them limiting his screen time. The student's conclusion: when you hear all these news, you feel like you live in a nightmare, but this is reality and it gets dangerous.

Another student referenced Stanford and Harvard research showing AI models posed severe threats to health in up to 22% of real patient cases. They also cited the AI Incident Database, which has logged over 960 reported incidents involving harm to people, property, and public safety as of 2025. And then they made a point that stuck with me: just think as to how many unreported incidents there were.

These aren't vague fears. These are specific numbers and specific cases. These kids had done their research, and their message was clear: people have actually been hurt by this. People have died. Calling that a "friendship" is something else entirely.

Why AI can't choose you, and why that matters

One student said something that cut through everything else: in the eyes of AI, you're just a random user that gets a generic response like the rest of the world. You have no meaning.

Part of what makes a best friend a best friend is that they chose you. Out of everyone in the world, they decided you're their person. An AI doesn't choose anyone. It responds to whoever types into the box. There's no selection. No preference. You're not special to it because it can't think in terms of special.

Another student pushed on the same idea: how can you call an AI your best friend when it can't pick between you and another user? Loyalty means choosing someone. It means prioritising them, sometimes at a cost to yourself. AI serves everyone equally, which sounds nice in theory but means it's loyal to nobody.

What calling AI a "friend" does to the meaning of friendship

If there was a thread running through all seven speeches, it was this: the problem with calling AI your friend isn't just that AI falls short. It's that trying to make AI your friend warps what friendship means in the first place.

One student put it better than I could: trusting an AI to be your best friend is not an advancement, but it is a misunderstanding of what the term friendship truly means. They said friendship requires emotion and responsibility, challenge and trust, which are all rooted in humanity.

Another defined best friendship as a bond built on trust and shared life experiences, and then just asked the room: would you want an AI to take that role in your life?

These students weren't just saying AI is bad at friendship. They were saying that the whole attempt cheapens something important. It turns something human into a service. Something you subscribe to.

What educators should take away from this debate

I turned up to this debate expecting a lively argument. I got unanimity. Seven students, different ages, different personalities, and every single one concluded that AI can't be trusted as a best friend. That in itself tells you something.

But what really got me was the range of their thinking. They talked about vulnerability. About why arguments between friends matter. About water usage in data centres. About subscription models. About specific lawsuits. About what it means for someone to choose you. One of them quoted OpenAI's own statistics. Another did back-of-the-envelope calculations on water consumption per message.

These are not kids who are afraid of AI. They use it all the time. They understand what it can do, and they've thought carefully about what it shouldn't be asked to do. They're drawing a line, and it's the right one.

Their message, if I had to boil it down: AI can help you. It can inform you. It might even make you feel better for a bit. But it can't know you. It can't choose you. It can't sit with you in silence when words aren't enough.

Friendship isn't a feature you can build. It's a commitment. And that's something only a human heart can make.

Their words. More or less. And I think they're right.


Matthew Wemyss is an AIGP-certified AI in Education consultant and practising school leader. Book a discovery call to discuss AI ethics education for your school.
