The Cognitive Paradox: design AI use around cognitive objectives, not convenience or speed
(The Skinny on AI for Education #25, February 2026 | Professor Rose Luckin's Educate Ventures Research | AI for Education)






In my January Skinny editorial, I argued that the human side of AI matters more than the technology. I drew on the Challenger disaster to illustrate the normalisation of deviance, and on the McKinsey 88/6 gap to show that adoption without investment in people produces adoption without value.

This month, I want to propose an answer.

Study after study, from Harvard Business Review to neuroscience journals to the companies building AI themselves, has converged on a single theme: the way we are using AI risks undermining the very cognitive capacities we need most.

This is not a counsel of despair, nor a call to reject the technology. AI is powerful, and used well, it can genuinely transform learning and work. But “used well” is doing a great deal of work in that sentence. The central argument of this editorial is that we should design AI use around cognitive objectives, not convenience or speed. If we get the design right, AI augments human capability. If we optimise only for efficiency, the evidence suggests we will erode the thinking skills that make AI useful in the first place.

In Brief: The ‘Skinny-Skinny Editorial’ 60-second version

For me, the most revealing piece of research published this month came from the Harvard Business Review. Two researchers at UC Berkeley studied how AI agents were changing work habits at a US technology company. They found that workers using agentic AI tools were working longer hours, taking on a broader range of tasks, and operating at higher intensity throughout the day. The critical detail: none of this was mandated by the employer. Workers voluntarily filled every gap between meetings with tasks delegated to AI agents, setting multiple projects in motion simultaneously and returning to check on each between emails and calls. By the end of the day, the AI agents had completed half a week’s work. The humans were exhausted.

LSE professor Luis Garicano gave this a name: a Jevons Paradox for work effort. Jevons observed in 1865 that making coal use more efficient did not reduce consumption; it increased it, because efficiency made coal economical for a wider range of uses. Garicano’s insight is that the same dynamic may apply to AI and cognitive work. When everything on your to-do list suddenly seems achievable, the temptation is not to rest. It is to keep going.

The FT’s John Burn-Murdoch, writing in the AI Shift newsletter, confirmed this from personal experience. He described using agentic AI tools as exhilarating and exhausting in equal parts, noting that the knowledge that five more minutes of work could set off a task that would previously have taken an hour created a compulsion to keep going. His colleague Sarah O’Connor added an important observation from call centres: as AI handles the routine queries, the work that flows to humans is increasingly the most complex, emotionally charged, and draining. The simpler calls that once provided mental breathers have been squeezed out.

The Berkeley researchers identified a further risk they called “cognitive debt.” When AI-assisted projects move faster than the humans in the loop can track, a deficit of understanding accumulates. People sign off on work they have not fully absorbed. Decisions are made on outputs they have not properly evaluated. The work gets done. The thinking does not.

This matters for education because the same dynamic is visible in our sector


Teachers are not simply being relieved of drudgery; they are often given more to do in the same hours. AI tools generate lesson plans, but someone must review them. AI marks essays, but Anthropic’s own data shows that nearly half of teachers who delegate grading to AI do so fully, despite recognising that the tool is not well suited to the task. The risk is not that AI is inherently harmful. It is that without deliberate design, the default pattern of use prioritises volume over depth.

The attention crisis: a pre-existing condition that AI could worsen

 

It is important to be precise about causation here. The decline in sustained attention is not something AI created. It is a trend that has been building for two decades, driven by smartphones, social media, and an information environment that rewards brevity and novelty. But AI arrives in that weakened landscape, and the way it is currently being adopted risks accelerating the decline rather than reversing it.

Writing in the FT’s Free Lunch newsletter this month, Tej Parikh drew together a body of neuroscience research that paints a sobering picture. Since 2004, the average time a person stays focused on a single task has dropped from about 2.5 minutes to roughly 47 seconds, according to data tracked by Gloria Mark, professor of informatics at the University of California, Irvine. A 2022 survey by King’s College London found that 49 per cent of UK adults feel their attention span is shorter than it used to be. Forty-seven per cent feel “deep thinking” has become a thing of the past. Global cognitive health indices have been declining across this entire period.

The mechanism is well documented. Platforms and media optimise for shorter, more stimulating content. Audiences adapt to that rhythm. The next generation of content must be even shorter and more intense to compete. As Pierluigi Sacco, professor of biobehavioural economics at the University of Chieti-Pescara, notes: the brain adapts to the reward structure it encounters. When the dominant information environment delivers constant novelty in small, high-stimulation doses, the capacity for sustained attention does not just go unused. It becomes harder to deploy.

Niels Van Quaquebeke, professor of leadership at Kühne Logistics University, calls this the “Duolingo-isation of education”: tiny, gamified tasks, streaks, badges, and endlessly bite-sized exercises. Highly engaging. Highly scalable. And potentially hollow. A viral social media post captured the point: someone with a 1,200-day Duolingo streak could barely string sentences together when they visited Spain. The engagement was there. The learning was not.

Neuroscientist Mithu Storoni, author of Hyperefficient, warns that offloading too much cognitive effort to AI risks weakening the mental capacities for synthesis, contextual judgement, and curiosity. This is a risk, not a certainty, and the outcome depends heavily on how AI is used. But Anthropic’s own analysis suggests the default pattern is not encouraging: students are using Claude in a purely transactional way, generating assignment answers rather than engaging in the kind of dialogue that would develop understanding. Seven per cent of teachers’ prompts were for grading, with nearly half fully delegating the task to the system despite recognising the system was not well suited to it.

We have been here before with a previous generation of technology. In 2011, researchers identified the “Google effect”: humans began treating the internet as an external memory store, remembering fewer easily searchable facts. At the time, some argued this freed up working memory for higher-order thinking. The evidence since then has been mixed; some studies suggest that storing less information can also lead to shallower thinking, because you have less raw material to reason with. AI extends this dynamic. It does not just store our facts. It can do our reasoning, our drafting, our analysis. The question is whether we let it replace those capacities or use it to develop them.

Even the people building these systems recognise the tension. At the AI Impact Summit in Delhi this month, Sir Demis Hassabis, head of Google DeepMind and winner of the 2024 Nobel Prize in Chemistry, told the BBC that STEM education remains important and that as AI takes over writing code, the key capabilities become “taste and creativity and judgement.” He also called for urgent research into AI threats and acknowledged that keeping up with the pace of development was “the hard thing” for regulators. His diagnosis is right. If the critical human capabilities are judgement, creativity, and taste, then we need educational approaches that actively develop those capacities. The evidence in this editorial suggests that the default trajectory does the opposite: an information environment that rewards speed over depth, engagement over understanding, and volume over quality. But this is a design problem, not an inevitability. If we are deliberate about how AI is used in learning, the paradox can be resolved.

What the safety departures signal

 

In January, I introduced the concept of the normalisation of deviance: pressing ahead while ignoring warning signs, because nothing has gone catastrophically wrong yet. A month later, the people whose job it was to prevent that from happening are walking out.

In the same week in February, senior safety staff departed both OpenAI and Anthropic with public warnings. OpenAI researcher Zoë Hitzig published an essay in the New York Times arguing that personal data, including medical fears, relationships, and beliefs, would be weaponised by the advertising model OpenAI was now pursuing. She drew an explicit parallel with Facebook. Anthropic’s safeguards lead warned of organisational pressure to override values. These are not external critics. They are the people who were employed specifically to make these systems safe, and they concluded they could no longer do so from the inside.

This continues a pattern stretching back to Geoffrey Hinton’s departure from Google. The AI safety community has lost significant talent at the moment it is needed most. And the new initiative by former OpenAI policy chief Miles Brundage, who has founded Averi (the AI Verification and Research Institute) to establish independent auditing standards for AI systems, underscores the point: the people who understand these systems best believe that internal safety mechanisms are insufficient and that independent external scrutiny is essential.

For educators, the implication is straightforward. We cannot assume that the tools we adopt have been designed with learner wellbeing as a first principle. The companies building them are losing the very people tasked with ensuring that. The duty of care does not transfer to the vendor. It remains with us.

What this means for learning professionals

 

In January, I set out three priorities: teach how AI behaves, contextualise learning in real tasks, and address the human factors. Those priorities stand. But the evidence this month points to a principle that should sit above all of them:

Design AI use around cognitive objectives, not convenience or speed.

 

This means three things in practice.

First, teach cognitive load management alongside AI literacy. If the Jevons Paradox holds for work effort, then giving people AI tools without teaching them to manage the resulting intensity is a recipe for burnout. This is not a soft skill or a nice-to-have. It is the binding constraint on whether AI adoption produces value or exhaustion. Every AI training programme should include explicit guidance on when to use AI, when to stop, and how to recognise the signs of cognitive overload. The Berkeley researchers recommend reintroducing structured breaks, reflection time, and deliberate pauses into the working day. For teachers and students, this might mean explicit “AI-off” periods within lessons, time to think without tools, or structured peer discussion that cannot be delegated.

Operational step: Build “cognitive rhythm” into AI-assisted workflows. Alternate between AI-supported tasks and tasks that require unassisted thinking. The call centre research shows what happens when you remove the mental breathers: the work becomes unsustainably intense. Do not let the same happen in your classroom or training programme.

 

Second, design for depth, not just efficiency. The Duolingo-isation of education (bite-sized, gamified, endlessly scrollable) is the enemy of deep learning. AI tools can produce fluent text, plausible lesson plans, and serviceable feedback at remarkable speed. But speed is not the same as quality, and volume is not the same as understanding. If we allow AI to compress every learning interaction into the shortest possible form, we risk producing learners who can generate but cannot evaluate, who can prompt but cannot think.

Operational step: For every AI-assisted task, ask: what cognitive work is the learner doing? If the answer is “checking output” rather than “thinking through a problem,” the task design needs to change. The Anthropic data showing students using AI transactionally is not a failure of the tool. It is a failure of task design. Use AI to make harder problems accessible, not to make easy problems disappear.

Third, exercise independent judgement on safety and wellbeing. The departures of senior safety staff from the two leading AI companies are a signal that should be taken seriously. If the people employed to make these systems safe believe internal mechanisms are insufficient, schools and training providers should not assume otherwise. Review your AI acceptable use policies. Evaluate the tools you use against the evidence, not the marketing. The Averi initiative may in time provide an independent auditing framework. In the meantime, apply the same rigour to AI tool adoption that you would to any other safeguarding decision.

Operational step: Establish a standing review of every AI tool used in your institution. Ask three questions: What data does it collect? What safeguards does the provider have in place, and who is accountable for them? And what happens to our learners if this tool changes, degrades, or disappears? If you cannot answer all three with confidence, the tool is not ready for your setting.

The courage to think slowly

 

In January, I ended with the Challenger engineers who had the courage to speak up. This month, I want to end with a different kind of courage: the courage to think slowly in an age that rewards speed.

The cognitive paradox is real, but it is not a death sentence. It is a design challenge. AI does not have to degrade our thinking. It does so when it is adopted without attention to how people actually learn, focus, and make decisions. The same technology, used with deliberate cognitive objectives, can do the opposite: it can free up time for deeper work, surface harder questions, and make complex problems more tractable.

The difference between those two outcomes is not the model, the vendor, or the prompt. It is whether someone in the room asked, before the tool was deployed: what do we want people to be thinking while they use this?

William Stanley Jevons warned us 160 years ago. Efficiency does not reduce demand. It increases it. The question for education is not whether AI will change how we think. It already is. The question is whether we will be deliberate about the direction.

The 88 per cent of organisations using AI will not all become the 6 per cent generating real value. The ones that do will not be those with the most sophisticated models or the fastest agents. They will be the ones that invested in the cognitive resilience of their people: the ability to focus, to evaluate, to resist the pull of the next notification, and to think carefully before acting.

The technology is ready. The question is whether we will design its use around the thinking that matters most.