At a glance: New research shows that passwords generated by AI systems are often predictable and repeat across sessions due to the statistical token-based nature of LLMs. Because these passwords appear complex, they frequently pass standard strength checks and may be embedded into code or configuration files without detection. Organizations should instead rely on cryptographically secure random number generators for password creation tasks to reduce the risk of credential compromise.
Threat summary
On February 18, 2025, researchers released an analysis showing that passwords generated by artificial intelligence systems follow predictable patterns and often repeat across sessions.
Their findings showed that large language models (LLMs) have difficulty producing cryptographically secure randomness, even when prompted to generate complex credentials. The research also demonstrated that these tools frequently embed weak passwords into configuration files, container setups, and initialization scripts. Because the resulting strings appear complex, they pass standard strength checks and are rarely flagged during reviews.
LLMs generate text by predicting the next piece of text, called a token, based on patterns learned from the data they were trained on. They do not use a cryptographically secure random number generator, which is the mechanism required to create unpredictable passwords.
Instead, the model selects each character or character group by choosing the most likely next token, according to its internal probability estimates. This process is designed to produce coherent language, not randomness.
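This behavior can be illustrated with a toy sketch. The snippet below is not an LLM; it is a hypothetical character-level frequency model standing in for next-token prediction, with greedy decoding (always take the most likely next character) as a simplification of how pattern-driven generation collapses onto the same output. The sample passwords and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical training samples the "model" has seen.
SAMPLES = ["P@ssw0rd1!", "P@ssw0rd9!", "Passw0rd!!"]

def train(samples):
    """Count which character follows which, mimicking learned token statistics."""
    model = defaultdict(Counter)
    for pw in samples:
        for i in range(len(pw) - 1):
            model[pw[i]][pw[i + 1]] += 1
    return model

def generate(model, start="P", length=10):
    """Greedy decoding: always emit the most likely next character."""
    out = start
    while len(out) < length:
        successors = model[out[-1]]
        if not successors:
            break
        out += successors.most_common(1)[0][0]
    return out

model = train(SAMPLES)
# Prediction from learned patterns is deterministic: repeated calls
# return the identical string, never a fresh random value.
print(generate(model) == generate(model))  # True
```

Real models sample with temperature rather than decoding greedily, but as the research notes, sampling over a skewed probability distribution still concentrates output on a small set of likely strings.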
As a result, the output reflects learned patterns rather than uniform random values. In the researchers' tests, generating large batches of passwords produced identical strings across multiple runs, even at different temperature settings intended to increase variation.
Entropy measurements confirmed that these passwords had far fewer possible combinations than those produced by secure random sources.
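A rough version of such a measurement can be sketched with Shannon entropy over character frequencies. The biased generator below is an invented stand-in for pattern-driven output (it is not an LLM); the secure generator draws from the operating system's CSPRNG via Python's `secrets` module.

```python
import math
import random
import secrets
import string
from collections import Counter

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def secure_password(n=12):
    """Each character drawn uniformly from the OS CSPRNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(n))

def biased_password(n=12):
    """Stand-in for pattern-driven output: some characters are far more likely."""
    weights = [10 if c in "aeiouAEIOU123!" else 1 for c in ALPHABET]
    return "".join(random.choices(ALPHABET, weights=weights, k=n))

def entropy_per_char(passwords):
    """Empirical Shannon entropy, in bits per character."""
    counts = Counter("".join(passwords))
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

secure = [secure_password() for _ in range(2000)]
biased = [biased_password() for _ in range(2000)]
print(entropy_per_char(secure))  # close to log2(94), roughly 6.5 bits
print(entropy_per_char(biased))  # measurably lower
```

Fewer bits of entropy per character translate directly into a smaller effective search space, which is the gap the researchers measured.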
Analysis
This work shows that artificial intelligence systems rely on statistical token prediction, which leads to passwords that repeat and exhibit low entropy. These characteristics make the passwords predictable enough for enumeration. The reduced variability lowers the cost of brute-force attempts and increases the likelihood that a threat actor can guess or derive the password using targeted dictionaries built from model-generated samples.
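The attack economics can be sketched briefly. In the hypothetical scenario below, an attacker harvests model-generated samples, ranks them by frequency, and tries the most common first; all names and sample strings are invented for illustration.

```python
from collections import Counter

# Hypothetical harvested model outputs: repeated strings rise to the top.
harvested = ["Qw3rty!2024", "Summ3r#Safe", "Qw3rty!2024",
             "Adm1n@Pass1", "Qw3rty!2024", "Summ3r#Safe"]

# A targeted dictionary ordered by observed frequency.
dictionary = [pw for pw, _ in Counter(harvested).most_common()]

def attempts_to_guess(target, wordlist):
    """Return how many tries a frequency-ordered dictionary attack needs."""
    for attempts, candidate in enumerate(wordlist, start=1):
        if candidate == target:
            return attempts
    return None

# Because the target repeats across sessions, it is tried first.
print(attempts_to_guess("Qw3rty!2024", dictionary))  # 1
```

Against uniformly random passwords this ordering confers no advantage; against repeating, low-entropy output it collapses the search dramatically.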
The researchers emphasized that this is not a flaw in a specific model, but rather a consequence of how all language models generate text, making the issue consistent across vendors and platforms.
Password-strength meters and policy engines evaluate length and character variety but do not detect nonuniform randomness. As a result, language-model-generated passwords meet complexity requirements while remaining predictable. Automated agents may also insert these passwords into code or configuration files without review, bypassing human oversight and standard credential-management processes.
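To see why such checks pass, consider a minimal stand-in for a typical policy engine, checking only length and character classes. The function and the sample string are illustrative assumptions, not any specific vendor's implementation.

```python
import re

def meets_policy(pw, min_len=12):
    """Minimal complexity check: length plus four character classes."""
    return bool(
        len(pw) >= min_len
        and re.search(r"[a-z]", pw)
        and re.search(r"[A-Z]", pw)
        and re.search(r"[0-9]", pw)
        and re.search(r"[^A-Za-z0-9]", pw)
    )

# A pattern-heavy string of the kind a model might emit repeatedly.
candidate = "Str0ng#Pass2024!"
print(meets_policy(candidate))  # True: passes every rule despite being predictable
```

Nothing in the check measures how the string was produced, so a password drawn from a narrow, repeating distribution scores identically to one drawn from a CSPRNG.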
As artificial intelligence adoption expands, predictable credentials can propagate across environments, creating systemic exposure that is difficult to detect through traditional controls.
Organizations can reduce this risk by relying on cryptographically secure random number generators for all password creation tasks. Development and automation pipelines can be configured to call established tools such as OpenSSL, system entropy sources, or enterprise password-management APIs. Internal development guidelines can direct teams to approved secure-generation mechanisms to limit the introduction of predictable credentials into production environments and reduce exposure across identity and access management systems.
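As one concrete option among those tools, a minimal generator using Python's standard-library `secrets` module (which draws from the operating system's CSPRNG) might look like this; the alphabet and length shown are illustrative choices, not a recommended policy.

```python
import secrets
import string

# Illustrative character set; adjust to match organizational policy.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def generate_password(length=20):
    """Each character is drawn independently from the OS CSPRNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
```

The command-line equivalent with OpenSSL is `openssl rand -base64 24`, which emits 24 CSPRNG bytes encoded as text.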
Field Effect MDR provides additional protection by monitoring authentication flows, endpoint behavior, cloud access, and lateral movement. This visibility allows the solution to surface indicators of credential misuse such as rapid successions of successful login attempts, account access from infrastructure not previously associated with the user, and privilege escalation that relies on valid credentials rather than malware.