A recent study has revealed significant vulnerabilities in passwords generated by large language models (LLMs), raising concerns about the security of accounts created by increasingly autonomous AI systems. Researchers found that passwords created by the Claude LLM exhibit predictable patterns and a surprising lack of randomness, making them susceptible to cracking.
The study, detailed in reports from Cyber Press, The Register, Gizmodo, and TechRadar, identified several key weaknesses. A consistent pattern emerged: the vast majority of generated passwords began with an uppercase “G” followed by the digit “7”. Character selection was demonstrably uneven; characters such as “L”, “9”, “m”, “2”, “$”, and “#” appeared in every password tested, while others, such as “5” and “@”, appeared only once, and many letters of the alphabet were entirely absent.
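This kind of skew is straightforward to detect by counting character occurrences across a batch of generated passwords. A minimal sketch, using two hypothetical passwords echoing the patterns the study describes (the sample strings and alphabet are illustrative assumptions, not data from the study):

```python
from collections import Counter

def char_frequencies(passwords):
    """Count how often each character appears across a batch of passwords."""
    counts = Counter()
    for pw in passwords:
        counts.update(pw)
    return counts

# Hypothetical sample mimicking the patterns reported in the study.
sample = ["G7$kL9#mQ2&xP4!w", "G7#mL9$kR2&xT4!v"]
freq = char_frequencies(sample)

# Characters that never appear reveal gaps in the generator's alphabet.
alphabet = set("abcdefghijklmnopqrstuvwxyz"
               "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
               "0123456789!@#$%&*")
missing = alphabet - set(freq)  # e.g. '*' never appears in this sample
```

A uniform generator would spread counts roughly evenly across the alphabet; heavy concentration on a few characters, or characters that never appear at all, is exactly the signature the researchers found.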
Perhaps most strikingly, the researchers discovered that Claude consistently avoided repeating characters within a single password. While seemingly intended to enhance security, this constraint actually reduced randomness: a truly random generator would be expected to produce repeated characters in most passwords of this length. The LLM also systematically avoided the asterisk (*), likely because of its special role in Markdown, the formatting Claude uses for its output.
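The expectation of repeats follows from a simple birthday-style calculation. A short sketch, assuming a 70-character alphabet and 16-character passwords (illustrative figures, not the study's exact parameters):

```python
from math import perm

def prob_all_distinct(alphabet_size: int, length: int) -> float:
    """Probability that a uniformly random password of the given length
    contains no repeated characters."""
    return perm(alphabet_size, length) / alphabet_size ** length

# Assumed parameters: 70 possible characters, 16-character passwords.
p = prob_all_distinct(70, 16)
# p is roughly 0.16, so about 84% of genuinely random passwords
# would contain at least one repeated character.
```

A generator that never repeats characters is therefore sampling from a much smaller space than its length suggests, which is a loss of entropy, not a gain.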
The analysis of 50 generated passwords revealed a further alarming trend: a significant degree of repetition. Instead of producing 50 unique passwords, the LLM generated only 30 distinct combinations. One password – “G7$kL9#mQ2&xP4!w” – appeared 18 times, representing 36% of the test set. This frequency is drastically higher than would be expected for genuinely random passwords; with roughly 100 bits of entropy, even a single duplicate among 50 samples should be vanishingly unlikely.
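How unlikely can be bounded with the standard union (birthday) bound. A brief sketch, taking the article's ~100-bit entropy estimate at face value:

```python
from math import comb

def collision_prob_upper_bound(n_samples: int, entropy_bits: int) -> float:
    """Union (birthday) bound on the probability of any duplicate among
    n samples drawn uniformly from a space of 2**entropy_bits values."""
    return comb(n_samples, 2) / 2 ** entropy_bits

# 50 passwords, ~100 bits of entropy each (the article's estimate).
p = collision_prob_upper_bound(50, 100)
# p is on the order of 1e-27: a truly random generator would
# essentially never produce the same password twice, let alone 18 times.
```

Observing 20 duplicates in a sample of 50 is therefore not a statistical fluke; it indicates the effective entropy of the generator is a tiny fraction of what the password format implies.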
Experts suggest that password generation is an inherently unsuitable task for LLMs. The models appear to prioritize patterns that *appear* random to a human observer, rather than achieving true cryptographic randomness. However, the issue extends beyond individual account security. As AI agents become increasingly capable of operating autonomously, they will inevitably require credentials to access various services, creating a potential security risk.
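The conventional alternative is not to ask a language model at all, but to draw characters from a cryptographically secure random source. A minimal sketch using Python's standard `secrets` module (the alphabet choice here is an assumption for illustration):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a password using a cryptographically secure RNG.
    Unlike the LLM output described above, nothing here forbids
    repeated characters or the asterisk."""
    alphabet = string.ascii_letters + string.digits + "!@#$%&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()
```

With a 69-character alphabet, each 16-character password carries roughly 97 bits of entropy, and duplicates across any realistic number of generations are effectively impossible.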
The findings highlight broader challenges in authenticating autonomous agents, a problem that extends beyond simply generating strong passwords. The fundamental process of verifying the identity of an AI system is proving to be complex and fraught with potential vulnerabilities, with no immediate solutions apparent.