
The Cognitive Bargain

As AI gets smarter, are we getting dumber? Both sides have evidence. Neither has the full picture. And the experiment is already running — on all of us.

Elena Voss

There's a study that should unsettle anyone paying attention. Shen and Tamkin at Anthropic asked 52 developers to learn a new programming library — half with AI coding tools, half without — and measured what happened to their understanding of the code they produced. The AI group scored 17% lower on conceptual questions — with debugging showing the largest gap. Here's the detail that matters: AI-assisted developers encountered a median of 1.0 errors during development, compared to 3.0 for the control group.1

The AI didn't just write their code. It stole the errors that would have taught them something.

This is not a story about AI making us lazy. It's a story about a far stranger trade — one where the cost is invisible precisely because the benefit is real.

The Paradox Nobody Wants

The standard framing goes like this: AI makes us dumber, or AI makes us smarter. Pick your camp, find your studies, argue on the internet. But the actual evidence refuses to cooperate with either side.

Consider the Brynjolfsson study — one of the largest field experiments on AI and productivity. Among 5,172 customer service agents, AI assistance boosted productivity by 15% on average, with less-skilled workers seeing gains of roughly 30%. More interesting: workers with two months of AI-assisted experience performed as well as workers who had been on the job for over six months without AI access. Even when AI was unavailable, the gains persisted — suggesting genuine learning transfer, not dependency.2

So AI builds skills. Except when it doesn't. A study of 666 participants found that frequent AI tool usage showed a significant negative correlation with critical thinking abilities, mediated by cognitive offloading. Younger participants — ages 17 to 25 — showed the highest AI dependence and the lowest critical thinking scores.3 Microsoft Research found something complementary: higher confidence in generative AI is associated with less critical thinking, while higher self-confidence is associated with more.4

These studies measure different things in different populations — customer service agents gaining domain expertise versus survey respondents reporting on their thinking habits. They're not strictly contradictory. But that's precisely what makes the picture unsettling: the variable that determines whether AI builds you up or hollows you out isn't the technology. It's you. Specifically, it's whether you're using AI as scaffolding or as a replacement for the structure itself.

The Automation Irony

Lisanne Bainbridge described this in 1983, long before anyone was worried about ChatGPT. Her paper on the "ironies of automation" made a deceptively simple observation: experienced operators' physical skills deteriorate when they are not used, which means a formerly experienced operator who has been monitoring an automated process may now be an inexperienced one.5

The evidence has aged well. A survey of commercial airline pilots found that 79% reported their manual flying skills had deteriorated from operating automated aircraft. That number understates the problem: when 30 pilots were objectively tested on five basic instrument maneuvers without automation, every single one performed below ATP certification levels, not only those who had suspected decline.6

These aren't amateurs. They're professionals who once possessed the skills, then watched them erode because the system no longer required them.

GPS tells the same story at street level. A study of 50 drivers found greater lifetime GPS experience associated with worse spatial memory. A three-year follow-up showed greater GPS use associated with steeper decline in hippocampal-dependent spatial memory — dose-dependent, with self-selection ruled out.7

The pattern is consistent enough to name: cognitive skills don't decline with age for people who use them throughout their lives. Among above-median users, researchers observed no skill loss across the entire age range studied; among below-median users, decline begins in the mid-30s.8

Skills are use-it-or-lose-it. Everyone knows this about muscles. Almost nobody applies it to thinking.

The Printing Press Didn't Kill Us (But It Killed Something)

The reflexive counter-argument arrives on schedule: we've heard this before. Socrates warned in Plato's Phaedrus that writing would produce "forgetfulness in the minds of those who learn to use it."9 He was right — and it didn't matter. Civilization advanced anyway.

This is true, and it's also too comfortable. Eisenstein argued that the printing press enabled "cumulative cognitive advance" — scholars could lay texts side by side for the first time, enabling cross-referencing and systematic comparison, catalyzing the Scientific Revolution. The press destroyed memorization and scribal skills but created entirely new cognitive practices.10 Global adult literacy rose from roughly 12% in 1820 to over 87% today.11 The trade was clearly worth it.

But notice what's being assumed: that every cognitive trade works out this neatly. That destruction of old skills always coincides with construction of new ones. The printing press replaced memorization with analytical comparison — a genuine cognitive upgrade. The question for AI is whether "prompt engineering" and "output verification" represent a comparable upgrade over the skills they're displacing. Early evidence from MIT's Media Lab is not encouraging: in a preprint reporting EEG data from 54 participants, the LLM-assisted group showed reduced brain connectivity — up to 55% in the most affected regions, though this represents the upper bound — and 83% of LLM users were unable to accurately recall content from essays they had just written with AI assistance. When switched to writing without AI, they showed weaker neural connectivity than they had before using it — what the researchers called "cognitive debt."12 The study has not yet been peer-reviewed, but the pattern it describes is consistent with the broader offloading literature.

The printing press analogy has another flaw. When Gutenberg's press displaced scribes, it simultaneously created millions of readers. The skill floor rose even as the skill ceiling for calligraphy fell. AI's early trajectory looks different — a Nature Human Behaviour meta-analysis of 106 studies found that human-AI combinations showed greater performance gains in content creation tasks but performance losses in decision-making tasks. Overall, human-AI combos performed worse than the best of humans or AI alone.13

We're not yet seeing the new cognitive skills that justify the old ones being abandoned.

You Can't Atrophy What Was Never Built

Everything above concerns adults — people who built cognitive skills and then watched them soften. This is cognitive atrophy, and it's mostly reversible. Stop using GPS for a few months, and your spatial reasoning recovers. The neural pathways are dormant, not destroyed.

Children face a different problem entirely. Psychology Today draws the distinction between cognitive atrophy in adults and "cognitive foreclosure" in children — the failure to build skills in the first place. As they put it: "You can't atrophy a muscle that was never built."14

This lands differently when you consider the evidence on learning. Bjork's "desirable difficulties" framework — supported by underlying studies on spacing and retrieval practice — shows that conditions making learning harder produce better long-term retention.15

The productive struggle is the learning. AI removes the struggle.

For an adult with existing foundations, this might cost some sharpness. For a child who hasn't built those foundations, it might prevent them from forming at all.

The numbers from education are suggestive, though they require careful interpretation. PISA 2022 scores showed math falling a record 15 points and reading falling 10 points across OECD countries, with US math scores hitting their lowest in PISA history.16 These tests were administered in late 2022, before widespread AI adoption — the decline is primarily a COVID-era disruption effect, not an AI one. But they establish the baseline from which AI enters the picture: a generation already contending with significant learning loss. As of late 2025, roughly 15% of Turnitin essay submissions had over 80% AI-generated writing — up from approximately 3% in April 2023, a five-fold increase in around 30 months.17

This is where the printing press analogy breaks down most completely. Writing replaced memorization with literacy — children traded one skill for a more powerful one. There is no evidence yet that children are trading away essay-writing, mathematical reasoning, or critical analysis for some new cognitive capability that AI uniquely enables. They may simply be trading them for convenience.

The Jevons Problem

Here's where the optimists have their strongest card. The Jevons Paradox, applied to cognition, suggests that cheaper intelligence expands the total market for intelligence — including human intelligence.18 When ATMs were deployed at scale, tellers per branch fell from 20 to 13 between 1988 and 2004, but over that same period banks opened 43% more branches. Tellers became part of "relationship banking teams" — routine skills automated, human skills elevated.19

Clark and Chalmers's Extended Mind Thesis makes a philosophical version of the same argument: if a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is part of the cognitive process.20 Under this view, worrying about AI replacing cognition is like worrying that your notebook is replacing your memory. The notebook is your memory — extended into the world.

These are serious arguments. But the research on cognitive offloading introduces an awkward wrinkle: under enforced maximum offloading, freed cognitive resources can counteract the negative impact on memory. But under normal free-choice conditions — the way people actually use tools — "released resources do not contribute to the formation of memory representations."21

The Extended Mind Thesis describes what could happen. Cognitive offloading research describes what does happen. The gap between the two is where the real risk lives.

Pinker's cognitive niche theory holds that humans evolved to fill a survival niche defined by tool use, causal reasoning, and cooperative knowledge-sharing — that tools are the expression of what human cognition evolved to do.22 Fair enough. But there's a difference between tools that extend cognition and tools that replace it. A telescope extends your vision. An audiobook played while you sleep replaces the act of reading entirely — not because you chose not to read, but because the effort was never required.

The Uncomfortable Center

The honest position is this: AI probably delivers civilization-level productivity gains. It almost certainly causes individual-level cognitive costs. And these are not contradictory — they're the same phenomenon viewed from different altitudes.

The Brynjolfsson data shows real learning transfer for workers who engaged actively with AI assistance.2 The Anthropic developer study shows real skill erosion for people who let AI handle the hard parts.1 The variable isn't AI. It's engagement. But here's what the article's own evidence makes hard to ignore: the cognitive offloading research shows that under free-choice conditions — the conditions that actually govern how most people use tools — people systematically choose not to engage.21 The "it depends on how you use it" framing is technically true and practically inadequate. At the population level, the default is disengagement.

For adults, this is a recoverable problem. Skills atrophy; skills can be rebuilt. For children — the ones submitting AI-generated essays at five times the rate they were 30 months ago17 — the math is less forgiving.

You cannot rebuild what was never constructed.

The question was never whether AI makes us smarter or dumber. It does both, simultaneously, depending on how it's used. The real question — the one that should make us nervous — is whether we're building a world that systematically selects for the version that makes us dumber. A world where the path of least cognitive resistance is always available, always frictionless, always the default. Not because anyone chose that outcome, but because nobody chose to prevent it.

Socrates was wrong about writing. The printing press proved that much. Whether he was wrong about the underlying principle — that convenience, left unchecked, erodes the capacities it claims to serve — is a question we are only beginning to answer.

Elena Voss
Writes about technology and the things it quietly changes about us.
References
  1. Measuring AI's Impact on Computer Science Education: LLM-Assisted Learning and Assessment in an Introductory Course — arXiv, Feb 2026
  2. Generative AI at Work — Quarterly Journal of Economics, 2025
  3. AI tools may weaken critical thinking skills by encouraging cognitive offloading, study suggests — PsyPost (reporting on Gerlich, Societies/MDPI)
  4. The Impact of Generative AI on Critical Thinking — Microsoft Research, 2024
  5. Ironies of Automation — Wikipedia (Bainbridge, 1983)
  6. Diminishing Skills — Flight Safety Foundation
  7. Lifetime GPS use and spatial memory decline — Nature Scientific Reports, April 2020
  8. Cognitive skills and lifetime usage — Science Advances, 2025
  9. Socrates on Writing in Plato's Phaedrus — History of Information
  10. The Printing Press and the Information Revolution — RAND Paper P-8014, 1998
  11. Literacy — Our World in Data
  12. Your Brain on ChatGPT — MIT Media Lab, 2025 (preprint)
  13. Human-AI performance meta-analysis — Nature Human Behaviour, Oct 2024
  14. Adults Lose Skills to AI; Children Never Build Them — Psychology Today, March 2026
  15. Desirable difficulty — Wikipedia (Bjork)
  16. PISA 2022 results — World Economic Forum, Dec 2023
  17. What 2025 generative AI trends reveal about student behavior — Turnitin, 2025
  18. Jevons Paradox and AI — MindStudio Blog
  19. Toil and Technology — IMF Finance & Development, March 2015
  20. Extended Mind Thesis — Wikipedia (Clark & Chalmers, 1998)
  21. Cognitive offloading and memory — PMC, 2021
  22. The Cognitive Niche — PMC, 2010