AI Passwords Look Random But Crack Fast

Read original on The Register - AI/ML

💡 AI fails at secure passwords: a vital warning for devs building auth systems

⚡ 30-Second TL;DR

What changed

AI passwords appear complex but follow predictable patterns

Why it matters

AI practitioners risk introducing vulnerabilities by relying on generative AI for passwords. The findings prompt a reevaluation of AI in security workflows and favor dedicated cryptographic tools over LLMs.

What to do next

Test your AI password generator's output against Hashcat; switch to libraries like Python's secrets module for cryptographically secure randomness.
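The switch the article recommends is a one-liner in practice. A minimal sketch using Python's standard-library secrets module, which draws from the operating system's CSPRNG rather than a learned statistical model:

```python
import secrets
import string

def generate_password(length: int = 25) -> str:
    """Build a password by drawing each character from the OS CSPRNG.

    secrets.choice uses os.urandom under the hood, so no character
    depends on training-data patterns or on the previous characters.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Note the contrast with asking an LLM: here every position is independent and uniform over the alphabet, which is exactly the property the article says pattern-based generation lacks.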

Who should care: Developers & AI Engineers

🧠 Deep Insight

Web-grounded analysis with 9 cited sources.

🔑 Key Takeaways

  • AI models including Claude, ChatGPT, and Gemini generate passwords based on learned patterns rather than true cryptographic randomness, making them statistically predictable despite appearing complex[1][2][3]
  • Research by cybersecurity firm Irregular found that Claude produced only 23 unique passwords out of 50 generated, with one specific pattern appearing 10 times, demonstrating severe repetition vulnerabilities[1]
  • Even older computers can crack AI-generated passwords in relatively short timeframes, contradicting online password strength checkers that rate them as extremely strong[3][4]
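Irregular's repetition finding (23 unique passwords out of 50, one pattern appearing 10 times) can be checked against any generator with a simple uniqueness count. The batch below is a fabricated illustration of that failure mode, not Irregular's actual data:

```python
from collections import Counter

def uniqueness_report(passwords: list[str]) -> tuple[int, int]:
    """Return (number of unique passwords, highest repeat count) for a batch."""
    counts = Counter(passwords)
    return len(counts), max(counts.values())

# Degenerate generator output mimicking the reported behaviour:
# one favourite string emitted 10 times, plus 40 distinct strings.
batch = ["Tr0ub4dor&3"] * 10 + [f"P@ssw{i}rd!" for i in range(40)]
unique, repeats = uniqueness_report(batch)
print(unique, repeats)  # 41 unique strings; the worst one repeats 10 times
```

A truly random 25-character generator would make any repeat in 50 draws astronomically unlikely, so even a single duplicate in a batch this small is a red flag.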
📊 Competitor Analysis
| Authentication Method | Strength | Predictability | Recommended Use |
|---|---|---|---|
| AI-Generated Passwords | Appears Strong | Highly Predictable | Not Recommended |
| Dedicated Password Managers (Google Password Manager, Bitwarden, LastPass) | Cryptographically Strong | Truly Random | Recommended |
| Passkeys (Facial Recognition, Fingerprint) | Very Strong | Not Applicable | Recommended Alternative |
| Human-Generated Passwords | Variable | Often Weak | Not Recommended |
| 25+ Character Random Passwords | Very Strong | Truly Random | Recommended |

🛠️ Technical Deep Dive

  • Large Language Models (LLMs) operate on pattern recognition and probability-based prediction learned from training data, fundamentally incompatible with cryptographic randomness requirements[2][6]
  • AI systems generate passwords based on statistical patterns in their training datasets rather than using cryptographic randomness functions[2][6]
  • Password strength checkers fail to detect the underlying predictability because they evaluate character complexity without understanding the pattern-based generation mechanism[4]
  • Cryptographically secure password generation requires tools like cryptographic random number generators or dedicated password managers that use entropy sources, not predictive models[6]
  • The vulnerability affects both direct user-generated passwords and embedded passwords in code written by AI coding agents[3][4]
  • Online password strength metrics (claiming millions of trillions of years to crack) are misleading when passwords follow discoverable patterns[4]
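The strength-checker blind spot can be quantified. Checkers score the entropy a password would have if every character were independent and uniform; a pattern-following generator's effective entropy is instead bounded by how many distinct outputs it actually produces. A rough sketch of both figures:

```python
import math

def apparent_entropy_bits(length: int, alphabet_size: int = 94) -> float:
    """Entropy a strength checker assumes: independent, uniform characters
    over a 94-symbol printable-ASCII alphabet."""
    return length * math.log2(alphabet_size)

def effective_entropy_bits(distinct_outputs: int) -> float:
    """Upper bound on entropy when the generator only ever emits this many
    distinct strings and an attacker knows its habits."""
    return math.log2(distinct_outputs)

# A 16-character password looks like roughly 105 bits to a checker...
print(round(apparent_entropy_bits(16), 1))
# ...but a generator yielding only 23 distinct outputs per 50 calls behaves
# more like a ~4.5-bit secret against an attacker who models its patterns.
print(round(effective_entropy_bits(23), 1))
```

The gap between those two numbers is exactly why a checker's "millions of trillions of years" estimate and an old laptop's hours-long crack time can both be describing the same password.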

🔮 Future Implications

AI analysis grounded in cited sources.

This research exposes a critical gap between AI capability and security requirements, likely to accelerate industry adoption of passkey authentication and hardware-based security methods. Organizations may face increased regulatory scrutiny regarding AI-assisted code generation in security-critical systems. The findings underscore the need for AI companies to implement safeguards preventing their models from being used for password generation, and may drive development of AI-resistant authentication standards. Developers relying on AI for code generation will need enhanced security auditing processes to identify and remediate AI-generated credentials in production systems. The broader implication is that certain security-critical functions should remain outside AI's domain, establishing precedent for human-controlled cryptographic operations.

⏳ Timeline

2026-02
Cybersecurity firm Irregular releases research revealing AI password-generation vulnerabilities in Claude, ChatGPT, and Gemini, verified by Sky News

📎 Sources (9)

Factual claims are grounded in the sources below. Forward-looking analysis is AI-generated interpretation.

  1. thenews.com.pk
  2. unilad.com
  3. news.sky.com
  4. ndtv.com
  5. thehackernews.com
  6. cedtechnology.co.uk
  7. blog.knowbe4.com
  8. securityweek.com
  9. techradar.com

Generative AI tools create seemingly complex passwords that are in fact highly predictable. Experts show these can be cracked within hours despite their appearance, exposing flaws in AI's randomness generation for security purposes.

Key Points

  1. AI passwords appear complex but follow predictable patterns
  2. Crackable within hours using standard attack methods
  3. Experts warn generative AI is poor for strong password suggestions
  4. Reveals limitations in AI entropy and randomness

Technical Details

AI outputs lack true entropy, producing strings vulnerable to statistical analysis. Patterns emerge from training-data biases. The weakness applies broadly across generative AI models, including Claude, ChatGPT, and Gemini.

AI-curated news aggregator. All content rights belong to original publishers.
Original source: The Register - AI/ML