In this issue of AI Unfiltered, we explore a provocative question: Is AI making us dumber—or are we just using it wrong? Drawing from a recent MIT study on cognitive engagement and AI-assisted writing, we unpack how leaders can leverage AI as a force multiplier for creativity and strategic thinking, without undermining human judgment.
The question “Is AI making us dumber?” has surfaced, in various forms, repeatedly in recent conversations I’ve had with Chief Data Officers, CEOs, and business leaders, whether in one-on-one discussions or broader forums. But the real issue isn’t just whether AI diminishes our thinking. It’s about how and where we apply it. At the heart of the conversation is this: Are we using AI to complement human creativity, or are we unintentionally outsourcing the very thinking that drives innovation?
A recent MIT-led study titled “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task” offers a scientific lens into how AI affects the brain during cognitively demanding tasks. Released in early June, the study explored the consequences of using AI tools in an educational setting, specifically, for essay writing. So, let’s dig into what the study found, where its limitations lie, and what, if anything, it means for how businesses should be using AI.
The study used a mixed-methods approach to understand how tools like ChatGPT influence thinking. Participants were asked to write essays under three different conditions: using ChatGPT, using a search engine (Google), or going the “Brain Only” route. Researchers tracked brain activity using electroencephalogram (EEG), analyzed the language used in the essays, and gathered feedback on how much ownership participants felt over their work. This combination of neuroscience, linguistic analysis, and self-reporting helped assess not just the output quality, but how engaged participants were in the process.
It’s worth noting that the study had a relatively small sample size (54 participants, with only 18 completing the final sessions), focused on a single task (essay writing), and did not assess long-term effects. Even so, the findings raise important considerations.
A few key themes emerged:
1. AI Can Diminish Cognitive Engagement
EEG data revealed that participants using ChatGPT showed significantly lower brain connectivity, particularly in areas tied to memory and executive function, compared to those writing unaided. For those unfamiliar, EEG is a non-invasive method for recording the brain’s electrical activity, from which researchers can infer how connected, and how “switched on,” different brain regions are during a task. The takeaway: when we default to AI, we may undercut the kind of mental effort that drives learning, insight, and strategy.
2. Overuse of AI May Lead to Reduced Originality
Essays generated with ChatGPT were often flagged by human graders as “soulless” and “bland,” with repetitive phrasing and structure. While they tended to be more information-dense, they lacked the variety and personality that came from human-first approaches, raising concerns about idea diversity and innovation when AI is overused.
3. Reduced Ownership Undermines Retention
An eye-opening 83% of ChatGPT users couldn’t recall quotes from their own essays just minutes after writing them. Only 10% of the brain-only group faced the same issue. Many participants also reported feeling disconnected from their work, a sign that cognitive ownership may decline when AI takes the wheel.
4. The Order of AI Use Matters
In a follow-up session, participants switched conditions. Those who had relied on ChatGPT early on continued to show low neural activity, even after switching to unaided writing. Meanwhile, those who started without AI and used it later showed stronger brain engagement and approached AI with more discernment.
Bottom line: the more external support, the less internal engagement. Across neural, linguistic, and scoring dimensions, the ChatGPT group lagged behind the others.
So where does this leave us?
When tasks require deep thinking, original synthesis, or long-term expertise, we should structure work to lead with human insight, not automate from the outset. Start with your own framing. Use AI for feedback, not the first draft. Pause to reflect before accepting a response. And especially for early-stage learners or critical thinkers, set deliberate boundaries on how and when AI enters the process.
That said, AI used with intention has real advantages. It’s a powerful force multiplier when the human is still steering. It helps brainstorm faster, analyze more broadly, and sharpen messaging. When layered onto expertise, rather than standing in for it, AI becomes a true creative partner.
So, back to the original question: Is AI making us dumber? That might be the wrong question. A better one might be: How do we ensure AI is a force multiplier—not a thinking substitute? Because in the end, it’s not about AI alone. It’s about how we use it and how we engage people in the process. AI isn’t making us dumber. It’s challenging us to be more deliberate. Used passively, it risks dulling cognitive engagement. But used with purpose, anchored in human insight, it becomes a powerful catalyst for better judgment, deeper learning, and smarter innovation.