AI Threatens Critical Thinking Skills, Experts Warn
Amsterdam, Netherlands – August 23, 2025 – A growing concern among educators and neuroscientists is that reliance on Artificial Intelligence (AI) tools like ChatGPT is eroding fundamental thinking skills, moving society away from knowledge acquisition and towards skill-based learning. This shift, coupled with the readily available answers provided by Large Language Models (LLMs), risks creating a generation less capable of independent thought and critical analysis, according to a recent article in NRC Handelsblad.
Columnist Floor Rusman highlights a recent Financial Times article by Burn-Murdoch, which identifies technology-driven distraction as a key factor in a potential decline in human brainpower. However, Rusman points out that the issue runs deeper, citing a focus in education on skills rather than foundational knowledge, leading to “metacognitive laziness.” The increasing prevalence of AI, she argues, will only exacerbate this problem.
This sentiment echoes the views of neurobiologist Kenan Malik, who wrote in The Observer that LLMs are embraced because we are relinquishing the effort of thinking and devaluing the knowledge that process requires. Rusman acknowledges the temptation to view this as a simple trade-off – AI taking over tasks perceived as less prestigious – but emphasizes a more fundamental issue: a tendency to discard knowledge when its inherent value is no longer recognized.
Rusman stresses the vital role of an “internal archive” – a storehouse of knowledge built through reading and experience – in generating new ideas. This principle, she contends, applies universally: understanding current events requires contextual knowledge, identifying misinformation demands factual awareness, and even basic tasks like estimating distances benefit from a foundation of learned information.
The author cautions against allowing AI to become an indispensable “crutch,” advocating instead for its use as a tool to supplement thinking, not replace it. She cites an essay published in The New York Times featuring 81-year-old psychologist Harvey Lieberman’s experience using ChatGPT as a personal therapist. Lieberman found the AI offered both flawed and insightful contributions, but crucially, he maintained control of the process, selectively utilizing helpful suggestions while disregarding the rest. He described the model as a “cognitive prosthesis – an active expansion of my thinking process,” rather than a necessity.
Ultimately, Rusman concludes, approaching AI as a “conversation partner” rather than an infallible “oracle” requires intellectual self-confidence – a confidence developed through consistent independent thought.
(Floor Rusman, f.rusman@nrc.nl, is editor of NRC)