Our Brains on ChatGPT

An MIT Media Lab study published this past summer has ignited an unusually intense debate about what happens to the human brain when we use tools like ChatGPT. Depending on what you read, the findings are either evidence of creeping cognitive decline or an encouraging sign that our brains are evolving into more efficient systems. As is often the case in medicine, both interpretations capture something true, yet neither tells the whole story.
The study itself was modest in scope. Fifty-four young adults — mostly college students and recent graduates — were asked to write short SAT-style essays under three conditions: using only their own brains, using a traditional (Google) search engine without artificial intelligence (AI), or using ChatGPT. All subjects wore EEG headsets to measure neural activity. The results were immediately provocative. Brain-only writing generated the strongest and broadest neural engagement across regions associated with memory, language processing, and attention. Search users showed a mild decline. ChatGPT users showed the greatest drop, with more than a 50% reduction in several cognitive regions. Over repeated sessions, those relying on AI increasingly copied AI-generated material and later struggled to recall or quote what they had written once the tool was removed.
From these patterns, the MIT authors raised concerns about what they called “cognitive debt” — the possibility that repeated reliance on AI for complex tasks may interfere with learning, memory formation, and critical thinking. And for many clinicians and educators, that concern rings true. If we outsource the most cognitively demanding parts of a task, the brain will adapt by doing less of that work. Neuroplasticity does not distinguish between convenience and regression; it simply rewires to meet current demands.
Opposing Responses
The online response to the findings has been anything but unified. A sizeable group argues that lower EEG activity does not necessarily indicate disengagement or decline. They point to well-known neuroscience parallels: jazz musicians in peak improvisational flow exhibit reduced prefrontal activation; expert meditators show quieter cortical patterns during deep concentration; and seasoned surgeons often demonstrate a paradoxical calm during technically demanding procedures. From this perspective, reduced activation might reflect cognitive economy rather than weakness, the brain allocating resources more efficiently while AI handles the mechanical layers of writing.
Others push back for an entirely different reason: the study itself is simply too thin to support sweeping conclusions. Fifty-four students writing artificial exam prompts is hardly a basis for pronouncements about the future of human intelligence. EEG, while useful, lacks the specificity to measure higher-order reasoning or creativity. And as several critics note, the study has not yet undergone peer review. They argue that the public conversation has ballooned far beyond what the data meaningfully support.
A third group tries to integrate both sides. These commentators emphasize that the central issue is not whether AI is good or bad, but when and how it is used. They observe that students and inexperienced writers often turn to AI before they have formed their own thoughts. Under those conditions, AI becomes a replacement for thinking rather than an enhancer of it. But when used after generating one’s own ideas — when a writer first sketches an argument, identifies uncertainties, or reflects on their perspective — AI can reduce anxiety, clarify structure, and accelerate revision. In other words, the problem is not AI itself, but premature offloading.
The Clinical Perspective
From a clinical perspective, this middle view may come closest to the truth. It is not surprising that brain-only writing results in greater neural activation; composing a coherent essay requires simultaneous memory retrieval, idea generation, organization, and error correction. When ChatGPT handles much of that workload — suggesting phrasing, structuring paragraphs, smoothing transitions — the cortex simply does not have to work as hard. That is not inherently harmful. It is exactly what we see with calculators, GPS devices, and spell-check. The brain does less in areas that are outsourced and reallocates energy elsewhere.
What is more concerning is the pattern over time. In the MIT study, repeated use of AI led to less engagement, lower ownership, and poorer recall. Participants who initially wrote their own essays and then shifted to AI showed an uptick in cognitive effort as they integrated a new tool. But those who used AI from the start and then were forced to return to brain-only writing performed worse and demonstrated weaker neural connectivity compared with those who had never used AI in the first place. In medicine, we would describe this as deconditioning. Occasional reliance on a tool is harmless; habitual dependence is not.
The novice-expert distinction is also critical. Experienced clinicians have robust internal models for decision-making, pattern recognition, and synthesis. When they use AI, they tend to use it as a second opinion, a sparring partner, or a source of alternative framing. Novices, by contrast, may not yet possess the mental structures needed to evaluate, question, or refine AI output. For them, AI's polished language can masquerade as genuine understanding. That is not an indictment of the technology; it is a developmental vulnerability in learners, which is why experts recommend deliberately teaching residents the mental skills that seasoned clinicians acquire over years of practice.
No wonder educators worry about the “blank-page problem.” When a student consults an AI assistant before having any thoughts of their own, the cognitive work shifts from generating ideas to selecting among them. The student becomes a curator rather than a thinker. The work may look clean, even impressive, but the underlying neural circuits for reasoning, analysis, and self-expression are underused or not used at all.
Implications for Use of AI in Medicine
None of this means that AI should be avoided. But it does mean that we need to use it deliberately. One practical habit gaining traction among educators is simple: write one sentence — just one — before opening any AI tool. That moment of anchoring preserves ownership and activates the cognitive networks associated with initiative and idea formation. Only after that should AI be used to critique, expand, or refine.
In clinical education, we will likely need to formalize similar guardrails. Some tasks should remain explicitly AI-free, just as we still expect residents to interpret ECGs and some imaging findings on their own. Not because software is unhelpful, but because humans must develop the cognitive scaffolding to understand what the software is doing. We also need assessment strategies that evaluate reasoning, not just polished output. Oral examinations, case discussions, and reflective writing can reveal whether the internal work has actually occurred.
For practicing clinicians, AI is already becoming a meaningful collaborator. Used wisely, it can summarize evidence, offer alternative explanations, test hypotheses, and clarify communication. But even for experts, the guiding principle remains the same: think first, tool second.
The MIT study should not be taken as a definitive warning sign, but neither should it be dismissed. It raises an important question about the future of cognitive autonomy: What aspects of thinking are we willing to delegate, and what must remain ours? If AI is to elevate human intelligence, the partnership must be intentional. A quieter brain is not a problem. A disengaged one is.
Arthur Lazarus, MD, MBA, is a former Doximity Fellow, a member of the editorial board of the American Association for Physician Leadership, and an adjunct professor of psychiatry at the Lewis Katz School of Medicine at Temple University in Philadelphia. He is the author of numerous books on narrative medicine and the fictional series Real Medicine, Unreal Stories. His latest book, a novel, is Standard of Care: Medical Judgment on Trial.