Artificial intelligence systems, including those powering popular chatbots, generate passwords that appear complex but lack the true randomness required for robust security, according to a new study by the security firm Irregular. The findings raise concerns about the reliance on these tools for credential creation, particularly as AI integration expands into more sensitive applications.
Researchers at Irregular tasked Claude, ChatGPT, and Gemini with generating 16-character passwords incorporating symbols, numbers, and mixed-case letters. While initial assessments using standard online password strength checkers indicated high security – with some estimates suggesting centuries to crack – a deeper analysis revealed significant vulnerabilities. The study found a surprising degree of repetition among passwords generated in separate sessions, and a consistent structural pattern across outputs.
Notably, the AI-generated passwords consistently avoided repeating characters, a characteristic that, counterintuitively, signals a lack of genuine randomness. “This absence of repetition may seem reassuring, yet it actually signals that the output follows learned conventions rather than true randomness,” Irregular stated in its report. The firm’s entropy calculations, which drew on character statistics and model log probabilities, revealed that these passwords carried only 20 to 27 bits of entropy. A truly random 16-character password typically achieves between 98 and 120 bits, making the AI-generated options significantly more susceptible to brute-force attacks.
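For scale, the entropy of a uniformly random password is simply its length times log2 of the alphabet size. A short sketch (assuming the common 94-character printable-ASCII alphabet, an illustrative choice not specified in the report) shows why 16 truly random characters land in the cited 98–120-bit range, and how small a 27-bit search space is by comparison:

```python
import math

# A 16-character password drawn uniformly from the 94 printable
# ASCII characters (26 lower + 26 upper + 10 digits + 32 symbols):
alphabet_size = 26 + 26 + 10 + 32            # 94
random_bits = 16 * math.log2(alphabet_size)
print(f"truly random: ~{random_bits:.0f} bits")

# At the 20-27 bits Irregular measured, the number of candidates
# an attacker must try collapses to at most 2**27:
print(f"27 bits  -> {2**27:,} candidates")
print(f"105 bits -> {2**105:.2e} candidates")
```

Roughly 105 bits versus 27 bits is the difference between a search space of about 4 × 10³¹ candidates and one of about 134 million, the latter being trivial for modern cracking hardware.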
The vulnerability stems from the fundamental way large language models (LLMs) operate: they are trained to produce plausible, repeatable text, so their output favors predictability, the opposite of what password generation requires. Online password strength meters, which assess only surface complexity, fail to detect these underlying statistical patterns and can misclassify weak passwords as secure. Attackers aware of the patterns could dramatically reduce the search space needed for a successful crack.
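To put a number on one such pattern: ruling out repeated characters alone already trims the candidate space, though only modestly next to the roughly 80-bit gap the entropy figures imply; the bulk of the reduction comes from the combined learned conventions. A back-of-the-envelope check (again assuming a 94-character printable-ASCII alphabet, an illustrative choice):

```python
import math

n, k = 94, 16
all_passwords = n ** k          # any 16-character string
no_repeats = math.perm(n, k)    # 16 distinct characters, in order
ratio = all_passwords / no_repeats
print(f"no-repeat rule alone: ~{ratio:.1f}x fewer candidates")
```

The no-repeat convention by itself costs only about two bits of entropy; it matters to attackers mainly as a telltale that other, far more restrictive patterns are also in play.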
The study also highlighted the presence of similar sequences in publicly available code repositories and documentation, suggesting that AI-generated passwords may already be in wide circulation. This poses a particular risk for developers who might inadvertently use these compromised credentials during testing or deployment. Even the AI systems themselves acknowledge the risk: Gemini 3 Pro, for example, accompanies its password suggestions with a warning against using chat-generated credentials for sensitive accounts, recommending passphrases and dedicated password managers instead.
Irregular concluded that relying on LLMs for password generation is fundamentally flawed and cannot be remedied through prompting or adjustments to model parameters. “People and coding agents should not rely on LLMs to generate passwords,” the firm stated. “Passwords generated through direct LLM output are fundamentally weak, and this is unfixable.”
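The standard alternative, generating passwords locally from a cryptographically secure random source, takes only a few lines in most languages. A minimal sketch using Python's standard-library `secrets` module, the kind of dedicated tool the report's recommendation points toward:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    # Draw each character independently from a CSPRNG, giving
    # length * log2(94) bits of entropy (~105 bits at length 16).
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run
```

Unlike LLM output, each character here is drawn from the operating system's cryptographic random source, so the result has no learned structure for an attacker to exploit.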
The findings come as Apple prepares to integrate ChatGPT, Claude, and Gemini into CarPlay via iOS 26.4, according to reports from Lifehacker and MacRumors. That expanded reach further underscores the need for caution when using AI tools in security-sensitive contexts.