Newsletter

Directors of Intelligence

AI has not broken the creativity equation. It has clarified it. Knowledge is cheap. Execution is accelerating. Direction is scarce.

Matthew Wemyss

If we are being honest, the ground has shifted faster than any curriculum cycle could ever hope to keep up with.

Ruth Noller wrote her creativity formula decades ago: creativity is a function of knowledge, imagination, and evaluation, all driven by attitude.

(C = fₐ(K, I, E))

  • C stands for creativity.

  • K is knowledge.

  • I is imagination.

  • E is evaluation.

  • fₐ means creativity is a function of those three elements; the subscript a signals that the whole function is driven by attitude.

For years, this was a neat way of explaining why rote learning felt hollow and why telling students to “be creative” without any judgement did not really work. It lived quietly in education theory. Sensible. Agreeable. Easy to forget.

Then AI arrived and made us ask a much harder question: what happens if a machine can do all of that better than we can?

Knowledge is everywhere now. Imagination used to feel like our last human stronghold, but a model can generate a hundred ideas before a student has even finished reading the task. Even evaluation is being squeezed. Systems can critique, revise, and explain in ways that sound increasingly convincing.

So, we end up in an awkward place. If the machine can know, generate, and polish, what exactly is the learner meant to be doing?

When AI stops responding and starts acting

That question becomes even more urgent once AI stops being something you talk to and starts acting in the world.

Take OpenClaw. It is not another chatbot. It is an open-source, self-hosted, agentic system that has exploded across social media in the last month or so.

You give it access and a goal, and it plans, executes, checks, and loops.

On the surface, all of this looks like initiative. Until you notice what is still missing. OpenClaw does not decide what matters. It does not know whether something is wise, safe, or appropriate unless someone has already framed that judgement. Security researchers have already identified over 400 malicious skills on ClawHub and GitHub, masquerading as useful tools while stealing credentials and data.

And OpenClaw is just the tip of the iceberg. Google, OpenAI, Anthropic, Meta, and dozens of startups are all building agent systems where AI does not just respond but acts, autonomously, in chains of delegation that most adults do not fully understand, let alone students. These systems do not carry responsibility. We do.

These systems do not actually imagine. They produce. They do not wonder about anything. They do not care.

Three concepts from AI delegation research that should concern every school leader

A recent Google DeepMind paper, Intelligent AI Delegation (Tomašev, Franklin and Osindero, 2026), was written about AI systems, not classrooms. But reading it, I could not stop seeing my students. Three concepts hit hardest:

The zone of indifference. A range of instructions executed without critical deliberation. In students, it is the gap between receiving an AI output and submitting it without scrutiny. When a student accepts every output without question, they are not directing. They are routing.

The authority gradient. When one party is perceived as far more capable, the other stops pushing back. In aviation, steep authority gradients between captains and first officers have contributed to crashes. In classrooms, the same dynamic plays out between students and AI. They defer to fluency, not because they are lazy, but because they have no practised basis for questioning it.

De-skilling. The more you automate routine work, the less prepared the human is to intervene when things go wrong. If students never write a first draft, they cannot judge whether the AI’s draft serves the purpose.

Directors of Intelligence

This is why the phrase Directors of Intelligence matters. Not as a job title. As a posture.

There is a reason Noller’s “a” sits outside the brackets. Attitude acts as a multiplier: set it to zero and the whole thing collapses. It does not matter how much knowledge is present or how imaginative the outputs appear. If the attitude is passive, if the stance is “just get it done,” the creativity is effectively zero. What you are left with is a polished artefact and very little growth behind it.
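Read multiplicatively, the point fits in a few lines of Python. The combining function here is a toy stand-in of my own, not anything Noller specified; the only claim being illustrated is that a zero attitude zeroes the result.

```python
def creativity(knowledge, imagination, evaluation, attitude):
    """Toy reading of Noller's C = f_a(K, I, E) with attitude as a multiplier."""
    # The additive combiner below is an illustrative assumption, not Noller's definition.
    f = knowledge + imagination + evaluation
    return attitude * f

# Strong knowledge, imagination, and evaluation count for nothing if attitude is zero.
print(creativity(knowledge=5, imagination=5, evaluation=5, attitude=0))  # 0
print(creativity(knowledge=3, imagination=3, evaluation=3, attitude=1))  # 9
```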

Directors of Intelligence do not compete with AI. They set direction. They define values. They judge quality. They decide when to stop. They understand enough about the tools to use them well, and enough about the world to know where they should not be used at all.

AI has not broken the creativity equation. It has clarified it. Knowledge is cheap. Execution is accelerating. Direction is scarce.

Agency, not just agents

I have been using safe and secure chatbots with students for almost two years now. I know the realities of students having access to both school-provided AI and whatever they use outside of school. I have watched what happens in real time: the hesitation, the over-reliance, the quiet outsourcing of thinking that looks like productivity if you are not paying close attention.

And what I have come to believe is this: the problem is not really about AI at all. AI just made it visible.

A student who has never learned to direct their own thinking will not suddenly learn to direct a machine’s thinking. A student who cannot break a problem down for themselves will not be able to break one down for an AI. A student who does not have the habit of questioning what they read will not question what an AI generates. Agency comes before agents.

This is what makes the Directors of Intelligence idea bigger than an AI strategy. It is about what school is actually for. We are not just preparing students to use tools well. We are preparing them to think for themselves in a world that is making it increasingly easy not to. AI is the context that makes this urgent. But the underlying question, “can this young person set a direction, hold to it, and take responsibility for where it leads?”, has always been the point of education.

We cannot take it for granted any more. Not when a machine will happily do the thinking for you, fluently, confidently, and without ever asking whether you learned anything in the process.

The Directors of Intelligence Audit

Five questions for your leadership team. Score each 1 to 5. If your total is below 15, you are probably designing for compliance, not direction.

  1. Decomposition Can students break a complex problem into parts and decide which parts need their thinking, which could benefit from collaboration, and which might be handed to a tool? This is a thinking skill first. AI just raises the stakes.

  2. Delegation with intent When students hand work over (to a peer, a process, or an AI), can they explain what they are delegating, why, and what they are deliberately keeping for themselves? If they cannot articulate that, they are not delegating. They are abdicating.

  3. Monitoring and verification Are students in the habit of checking outputs against their own judgement, whether that output comes from a group discussion, a textbook, or a language model? Fluency is not accuracy. Students need the reflex to question what sounds right.

  4. Ownership of outcome If you asked a student “Why did you make this choice?”, would they answer with conviction, or point to whatever source made the choice for them? Directed thinkers own their reasoning. That matters long before AI enters the picture, and even more once it does.

  5. Deliberate human practice Are there tasks in your curriculum where students must do the thinking themselves, not because tools are unavailable, but because doing the work is how judgement is built?
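For leadership teams who want to tally the audit quickly, here is a minimal sketch. The threshold of 15 comes from the audit above; the function and variable names are my own.

```python
QUESTIONS = (
    "Decomposition",
    "Delegation with intent",
    "Monitoring and verification",
    "Ownership of outcome",
    "Deliberate human practice",
)

def audit_verdict(scores):
    """Sum five 1-5 scores; a total below 15 suggests designing for compliance."""
    if len(scores) != len(QUESTIONS):
        raise ValueError("provide one score per question")
    if any(not 1 <= s <= 5 for s in scores):
        raise ValueError("scores run from 1 to 5")
    total = sum(scores)
    return total, ("direction" if total >= 15 else "compliance")

print(audit_verdict([3, 3, 3, 3, 3]))  # (15, 'direction')
print(audit_verdict([2, 3, 2, 3, 2]))  # (12, 'compliance')
```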

The Attitude Check: Three Things This Term

  1. Replace “What did you find?” with “What did you decide?” Focus assessment on choices rather than outputs. When students present work, begin with their reasoning. This makes direction visible.

  2. Introduce the delegation brief: Before any assisted task, create a short brief: what is being delegated, why, what success looks like, and what they will judge themselves. Teach this with the same care as essay writing. If they cannot frame the brief, they are not directing the work.

  3. Protect struggle time: Some tasks must remain deliberately hard, not because help is unavailable, but because doing the work is how students build the judgement to know when help is good enough. If students never draft from scratch, they cannot assess a draft. If they never sit with a problem, they will not spot a weak solution, whatever its source.

What comes next

Directors of Intelligence is going to be a major focus of my work over the coming months. I am developing the framework into something schools can implement at every level: workshop formats, assessment approaches, and a leadership toolkit that connects Noller’s creativity equation to the practical realities of AI in schools.

The real question for 2026 is not “How do we use this tool?” It is “Who are we helping young people become in a world where intelligence is a utility?”

Access is not wisdom. Power is not judgement. And tools will never teach purpose. That part is still our job.

Step 1: Run the audit above with your leadership team this week. It takes 15 minutes. Score honestly.

Step 2: Try the three attitude check actions this term. See what changes.

Step 3: If you want to go deeper, I am building out the full Directors of Intelligence framework for schools. Get in touch and I will share it with you first.

Make your students Directors of Intelligence

I am working with schools internationally to bring this framework to life through keynotes, leadership sessions, and staff workshops. If you want to explore what Directors of Intelligence looks like in your context, let’s have a conversation.

Get in touch: matt@inanded.com

Books: https://amzn.to/3OLmgj4

If this resonated, forward it to one person in your school who needs to read it.

References

Noller, R.B. (1977). Scratching the Surface of Creative Problem Solving: A Bird’s Eye View of CPS. Buffalo, NY: DOK Publishers.

Tomašev, N., Franklin, J. and Osindero, S. (2026). Intelligent AI Delegation. arXiv:2602.11865v1, 12 February 2026.
