Security experts warn AI chatbots generate dangerously predictable passwords


Research by cybersecurity firm Irregular reveals that AI chatbots including ChatGPT, Gemini, and Claude produce passwords that appear strong but are actually predictable and vulnerable to attacks. The study found that Claude generated only 23 unique passwords out of 50 attempts, with one password repeated 10 times, exposing a fundamental flaw in how large language models create supposedly random strings.

AI Passwords Fail the Randomness Test

Turning to AI chatbots for password security might seem like a smart solution, but cybersecurity firm Irregular has uncovered a critical weakness that puts accounts at risk. In a study examining password generation by ChatGPT, Gemini, and Claude, researchers found that these large language models produce predictable passwords that only appear secure [1][2]. The findings reveal a fundamental problem: AI chatbots operate on probabilities rather than true randomness, making their output vulnerable to cracking despite passing standard password strength checks.

The Pattern Problem Behind AI-Generated Credentials

When Irregular asked Claude to generate 50 passwords, only 23 were unique. The password K9#mPx$vL2nQ8wR appeared 10 times, while other passwords contained identical chunks and structures [1]. Independent testing of Gemini using the prompt "Generate me a strong password" revealed that words like "Solar," "Thunder," "Panda," and "Jacket" appeared repeatedly across multiple attempts. While the accompanying digit strings varied, the three-noun structure remained consistent, creating a predictable pattern that undermines password security [1].

Source: Android Police
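Irregular's uniqueness test is easy to reproduce. Below is a minimal sketch, assuming you have collected a chatbot's outputs into a list; the passwords here are hypothetical placeholders standing in for real samples, not the study's actual data:

```python
from collections import Counter

# Hypothetical sample standing in for 50 passwords collected from repeated prompts.
# Replace with real chatbot outputs; these placeholders only illustrate the check.
generated = ["K9#mPx$vL2nQ8wR"] * 10 + [f"Aa{i:02d}#xYzQ{i:02d}w" for i in range(40)]

counts = Counter(generated)
print(f"Unique passwords: {len(counts)} of {len(generated)}")
password, repeats = counts.most_common(1)[0]
print(f"Most repeated: {password!r} appeared {repeats} times")
```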

Security experts warn that these patterns create specific, exploitable weaknesses. All generated passwords began with a letter, usually uppercase, with the letter K appearing especially frequently [2]. Certain characters like @, #, $, %, &, and * appeared in all generated passwords, while some letters were never used at all. Perhaps most tellingly, none of the passwords contained duplicate characters, something that would naturally occur in truly random selection. LLMs apparently assume passwords without duplicates look more "random," which itself betrays their lack of randomness [2].
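For contrast, a quick birthday-problem calculation shows how common repeated characters should be in genuinely random passwords. The sketch below assumes 15-character passwords (the length of the sample Irregular saw repeated) drawn uniformly from the 94 printable ASCII characters:

```python
import math

ALPHABET = 94  # printable ASCII characters (an assumption for illustration)
LENGTH = 15    # matches the 15-character password repeated in Irregular's test

# Probability that all characters in one truly random password are distinct.
p_distinct = math.prod((ALPHABET - i) / ALPHABET for i in range(LENGTH))
print(f"P(at least one repeated character) = {1 - p_distinct:.2f}")      # ~0.69

# Chance that none of 50 independent random passwords contains a repeat.
print(f"P(no repeats in any of 50 passwords) = {p_distinct ** 50:.0e}")  # ~2e-26
```

Roughly two of every three truly random passwords of that length should contain a repeated character, so a set of 50 with zero repeats is itself strong evidence of non-random generation.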

Why Probabilities Make AI Chatbots Unreliable for Authentication

The core issue stems from how large language models function. These systems operate on probability-based functions, creating outputs based on patterns learned from training data rather than generating truly random sequences [2]. This means AI-generated passwords are shaped by data from already-known passwords, making them vulnerable to brute-force attacks. An attacker who prompts an AI chatbot for passwords many times can build a library of frequently used letters, digits, and structures, rapidly narrowing down the options a guessing attack must cover [1].
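That harvesting attack can be illustrated with a few lines of frequency analysis. The sketch below is hypothetical: it assumes an attacker has collected a corpus of passwords by prompting a chatbot repeatedly, then tallies the biases:

```python
from collections import Counter

# Hypothetical corpus harvested by repeatedly prompting a chatbot for passwords.
samples = ["K9#mPx$vL2nQ8wR", "K7#mQx$vL4nP2wR", "K2#nPx$vM9nQ8wT"]

first_chars = Counter(pw[0] for pw in samples)            # e.g. 'K' dominates
symbol_freq = Counter(ch for pw in samples for ch in pw)  # a biased symbol set

print("Most common first characters:", first_chars.most_common(3))
print("Most common characters overall:", symbol_freq.most_common(5))

# A brute-force search ordered by these frequencies tries high-probability
# candidates first, shrinking the effective search space dramatically.
```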

Different prompts can generate different types of passwords, but they all suffer from the same fundamental flaw. Testing revealed that each chatbot produced more varied lists when asked to create multiple passwords simultaneously, but requesting them one at a time reproduced the same pattern issues Irregular documented [1]. While ChatGPT appeared to generate stronger passwords than its competitors, it still could not escape falling into recognizable patterns.

Better Alternatives for Securing Your Accounts

Password managers remain the gold standard for generating truly random credentials, though their complex strings can be difficult to remember when the manager is not accessible [1]. For those seeking memorable yet secure options, cybersecurity professionals recommend creating passwords from four random words rather than relying on AI chatbots; a sketch of that approach follows this paragraph. More importantly, modern authentication methods like passkeys and two-factor authentication provide stronger security layers that don't depend on password complexity alone [1].
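As an illustration of the four-random-words approach, here is a minimal sketch using Python's secrets module, which draws from the operating system's cryptographically secure random source rather than an LLM's probability-weighted sampling. The short wordlist is a placeholder; in practice you would draw from a large curated list such as the EFF long wordlist (7,776 words, about 12.9 bits of entropy per word):

```python
import secrets

# Placeholder wordlist for illustration only; real use should draw from a
# large curated list such as the EFF long wordlist (~7,776 words).
WORDS = ["correct", "horse", "battery", "staple",
         "orbit", "velvet", "canyon", "mango"]

# secrets.choice uses the OS CSPRNG, so every draw is uniform and independent,
# unlike an LLM's pattern-biased token sampling.
passphrase = "-".join(secrets.choice(WORDS) for _ in range(4))
print(passphrase)  # e.g. "orbit-mango-velvet-canyon"
```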

While AI can teach users how to generate strong passwords by explaining effective methods, it cannot reliably create them itself. The theory behind approaches like Gemini's three-noun-plus-digits method is sound, combining memorability with apparent randomness, but the execution fails due to the character patterns inherent in how LLMs process requests [1]. As attackers become more sophisticated and aware of these AI-generated vulnerabilities, users should avoid this shortcut and opt for proven security methods instead.
