AI-Generated Passwords Are Fundamentally Weak, Cybersecurity Experts Warn

Reviewed by Nidhi Govil


Large Language Models (LLMs) like ChatGPT, Claude, and Gemini produce passwords that appear strong but contain predictable patterns and hidden repetitions. Research by Irregular reveals these AI-generated passwords carry only 20-27 bits of entropy compared to the 98-120 bits expected for truly random passwords, making them vulnerable to brute-force attacks within hours.

Large Language Models Fail at Password Generation

AI-generated passwords are fundamentally weak despite appearing complex on the surface, according to research by cybersecurity firm Irregular shared with multiple outlets [4]. The study examined password outputs from ChatGPT, Claude, and Gemini, asking each to generate 16-character passwords with symbols, numbers, and mixed-case letters. While these AI-generated passwords passed common online strength tests, with some checkers estimating they would take centuries to crack, closer analysis revealed a troubling reality [2].

Source: Sky News

Hidden Repetitions Expose Structural Weaknesses

When Irregular analyzed 50 passwords generated by Claude's Opus 4.6 model in separate sessions, only 23 unique passwords emerged. One password, K9#mPx$vL2nQ8wR, appeared 10 times across the 50 attempts [4]. The vast majority started and ended with the same characters, and none contained repeating characters, indicating that the passwords are not truly random. When Sky News independently tested Claude, the first password it generated was K9#mPx@4vLp2Qn8R, confirming that the models reproduce predictable patterns from their training data [4]. Similar patterns emerged with OpenAI's GPT-5.2 and Google's Gemini 3 Flash, particularly at the beginning of password strings.

Source: The Register

Lower Entropy Makes Passwords Vulnerable to Brute-Force Attacks

Irregular calculated entropy using the Shannon entropy formula through two methods: character statistics and log probabilities. The 16-character AI-generated passwords showed entropy of approximately 27 bits and 20 bits respectively. For genuinely random passwords, the character-statistics method expects 98 bits of entropy, while the log-probability method expects 120 bits [2]. This massive gap means that even old computers can crack them in a relatively short amount of time, potentially within hours, according to Irregular co-founder Dan Lahav [4]. Online password checkers evaluate surface complexity, not the hidden statistical patterns behind a string, which is why they may classify these predictable outputs as secure [2].
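The character-statistics method described above can be sketched in a few lines: pool the characters of a sample of passwords, estimate a frequency distribution, and scale the per-character Shannon entropy by the password length. The sketch below is illustrative only; the repetitive sample strings are invented to mimic the reported behavior, not actual model output, and the exact bit counts will differ from Irregular's measurements.

```python
import math
import secrets
import string
from collections import Counter

def charstat_entropy_bits(passwords):
    """Character-statistics estimate: per-character Shannon entropy of the
    pooled sample, scaled by the average password length."""
    pooled = "".join(passwords)
    counts = Counter(pooled)
    total = len(pooled)
    h_char = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h_char * (total / len(passwords))  # bits per password

# Repetitive sample loosely modeled on the reported behavior
# (invented strings, not actual model output):
ai_like = ["K9#mPx$vL2nQ8wR"] * 10 + ["K9#mAx$vL2nQ8wZ"] * 10

# Baseline: uniform draws from ~70 printable characters via a CSPRNG
alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
random_pw = ["".join(secrets.choice(alphabet) for _ in range(16))
             for _ in range(20)]

print(round(charstat_entropy_bits(ai_like)))    # well below the baseline
print(round(charstat_entropy_bits(random_pw)))  # approaches 16 * log2(70) ≈ 98
```

Irregular's second method, based on log probabilities, additionally weights each character by how likely a model is to emit it, which drives the estimate lower still.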

Why LLMs Produce Predictable Patterns

The fundamental issue stems from how Large Language Models (LLMs) function. These systems are trained to predict the next token that should appear in a sequence, choosing characters that are most plausible given their training data [3]. This approach is the opposite of randomness: LLMs are optimized to produce predictable, plausible outputs, which is incompatible with secure password generation. As Irregular stated, this weakness cannot be fixed by prompting or temperature adjustments. Notably, even Gemini 3 Pro acknowledged the limitation, returning passwords with a security warning that they should not be used for sensitive accounts and recommending password managers like 1Password or Bitwarden instead.
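The entropy loss from next-token prediction is easy to quantify: a sampler that concentrates probability on a few "plausible" characters yields far fewer bits per character than a uniform draw. A toy comparison, where the skewed distribution is invented purely for illustration:

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Uniform choice over 70 printable characters: the ideal for passwords
uniform = [1 / 70] * 70

# A hypothetical LLM-style next-character distribution: a handful of
# "plausible" characters dominate, the long tail is rarely sampled
skewed = [0.5, 0.2, 0.1, 0.05] + [0.15 / 66] * 66

print(entropy_bits(uniform))  # log2(70) ≈ 6.13 bits per character
print(entropy_bits(skewed))   # far lower; 16 such choices stay weak
```

Raising the sampling temperature flattens such a distribution somewhat but does not make it uniform over the full character set, which is consistent with Irregular's finding that temperature adjustments cannot fix the weakness.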

Source: Lifehacker

AI Passwords Already Circulating in Public Code

The implications extend beyond individual users. Developers increasingly rely on AI for password generation in their code, and AI-generated passwords are already appearing in GitHub repositories and documentation. Searching for common character sequences like K9#mP yielded 113 results on GitHub, while k9#vL returned 14 results [4]. While most appear in test code and setup instructions, some were found in what Irregular suspected were real servers or services [4]. This concern grows more urgent given Anthropic CEO Dario Amodei's prediction that AI will likely write the majority of all code in the future.

What Users Should Do Instead

Cybersecurity experts universally recommend against using AI for password generation. "You should definitely not do that," Lahav told Sky News. "And if you've done that, you should change your password immediately" [4]. Password managers offer a safer alternative, as they use cryptographic randomness rather than token prediction [2]. Users can also create secure passwords manually by selecting two or three uncommon words and mixing in other characters [3]. Authentication methods like passkeys, which use face and fingerprint ID, provide even stronger security [4]. Graeme Stewart from Check Point noted that this sits in the "avoidable, high-impact when it goes wrong" category of cybersecurity vulnerabilities [4]. Irregular warns that the gap between apparent capability and actual behavior is unlikely to be unique to passwords as AI-assisted development continues to accelerate.
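The recommended alternatives map directly onto cryptographic randomness, which in Python is exposed through the standard-library `secrets` module (backed by the operating system's CSPRNG, the same class of randomness password managers rely on). A minimal sketch; the alphabet, word list, and separator below are arbitrary choices for illustration:

```python
import secrets
import string

def random_password(length=16):
    """Uniform draw from letters, digits, and symbols using the OS CSPRNG,
    the approach password managers take instead of token prediction."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"
    return "".join(secrets.choice(alphabet) for _ in range(length))

def passphrase(wordlist, n_words=3, sep="-"):
    """Join a few randomly chosen uncommon words; strength grows with the
    size of the word list (e.g. 3 words from a 7,776-word Diceware list
    gives about 38.8 bits)."""
    return sep.join(secrets.choice(wordlist) for _ in range(n_words))

print(random_password())  # 16 chars, roughly 16 * log2(76) ≈ 100 bits
```

Unlike an LLM, every character here is sampled independently and uniformly, so the entropy matches what online strength checkers assume.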

TheOutpost.ai


© 2026 Triveous Technologies Private Limited