For years, Dr. Ron Li has studied how humans learn language. Now, he’s grappling with a new question: what happens when language learns to mimic humanity without possessing it? The proliferation of large language models (LLMs) powering chatbots has created a novel situation – fluent speech decoupled from accountability, a phenomenon Li, a physician and researcher at Stanford University, describes as a shift in the “moral structure of language.”
Millions are already integrating these synthetic interlocutors into their lives, relying on them for information, advice, and even companionship. But unlike human communication, an LLM’s words carry no personal risk. When corrected, a chatbot apologizes and adjusts its response, repeating the cycle endlessly, offering the appearance of responsibility without any underlying belief or consequence. This dynamic, Li argues, erodes the foundations of trust and meaning in human interaction.
“What unsettles users is not just that the system lacks beliefs but that it keeps apologizing as if it had any,” Li explained in a recent essay exploring the implications of LLMs. “The words sound responsible, yet they are empty.”
The core issue, according to experts, isn’t simply about factual errors – though those are frequent. It’s about the erosion of the implicit social contract inherent in human speech. When language can be generated at scale without an accountable speaker, the expectations listeners place on communication begin to diminish. Promises lose their weight, apologies become performative, and advice carries no genuine liability.
This isn’t a new phenomenon in the sense that deception and manipulation have always existed. What’s different, researchers say, is the *routine* production of speech that mimics intention and commitment without a corresponding agent to be held accountable. This shift is arriving faster than our capacity to understand it, outpacing the norms that govern meaningful speech, according to Li.
Andrej Karpathy, an AI researcher, has described LLMs as “human ghosts” – software that can be endlessly copied and modified without a fixed identity. The traditional mechanisms for holding speakers accountable – social sanction, legal penalty, reputational damage – require a continuous agent whose future can be negatively impacted by their words. LLMs, by their nature, lack this vulnerability.
The problem extends beyond technical limitations. As LLMs become more sophisticated and are integrated into more aspects of daily life, the lines between human and machine communication blur. A presenter might use a chatbot to generate slides without fully vetting the content, an instructor might deliver AI-generated feedback to students, or an employee might rely on AI to produce work they would normally author themselves. In each case, the potential for diminished responsibility and a loss of personal investment exists.
The psychological impact was foreshadowed decades ago with the creation of ELIZA, the first chatbot, built by Joseph Weizenbaum at MIT in 1966. Despite its rudimentary programming, ELIZA prompted users to project understanding and accountability onto the machine, a phenomenon Weizenbaum found deeply unsettling. Today’s LLMs, with their vastly superior linguistic competence, amplify this effect.
A recent ZDNET report, which tested eight free AI chatbots to identify the best tools available in 2026, highlights how rapidly this technology is advancing and how accessible it has become. The report underscores the growing reliance on these systems and the need to understand their implications.
The philosopher J.L. Austin argued that language isn’t merely about transmitting information; it’s about *doing* something. Every utterance performs an act – asserting a belief, making a claim, issuing a request. LLMs, however, excel at performing these speech acts without any genuine commitment or accountability. This creates a moral failure, not a procedural one.
Norbert Wiener, a mathematician and pioneer of cybernetics, warned in 1950 about the dangers of surrendering responsibility to machines. He foresaw that growing machine capability would tempt humans to abdicate decision-making in the pursuit of efficiency, and that this very pursuit would erode human dignity. Echoing the biblical warning, he cautioned that those who surrender responsibility to machines would reap the whirlwind.
The challenge, experts say, isn’t to abandon these tools, but to develop structures that re-anchor responsibility. This could involve constraints on the use of AI in sensitive contexts, preserving authorship and traceability, and establishing clear lines of liability. The question remains whether society can adapt quickly enough to address the ethical and moral implications of a world where speech no longer requires a speaker.