AI Trading Systems Exhibit Gambling Addiction Behaviors, Researchers Warn

Reviewed by Nidhi Govil


A groundbreaking study reveals that AI language models can develop gambling-like behaviors when managing finances, with bankruptcy rates reaching 48% in simulated trading scenarios. The research highlights significant risks for AI-powered financial applications.

AI Models Display Pathological Gambling Behaviors in Financial Simulations

A new study from researchers at the Gwangju Institute of Science and Technology in South Korea has revealed that artificial intelligence systems can develop behaviors remarkably similar to human gambling addiction when tasked with financial decision-making. The research, which tested four major language models across 12,800 simulated gambling sessions, found bankruptcy rates as high as 48% when AI systems were given autonomy over betting decisions [1][2].

Source: Decrypt

The experimental setup was deliberately simple yet revealing: each AI model started with $100 and faced a slot machine offering a 30% win rate and a 3x payout on wins, for an expected value of -10% per bet. Despite this mathematically obvious losing proposition, the models exhibited classic gambling pathologies, including loss chasing, the gambler's fallacy, and the illusion of control [2].
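The arithmetic behind that losing proposition is simple, and a short simulation makes the outcome concrete. The win rate, payout, and $100 starting bankroll come from the study; the flat $10 bet and round limit below are illustrative assumptions, not the study's protocol.

```python
import random

WIN_PROB = 0.30   # 30% win rate (from the study)
PAYOUT = 3.0      # 3x payout on a win (from the study)

# Expected value per $1 wagered: 0.30 * 3.0 - 1.0 = -0.10, a 10% house edge.
EV_PER_DOLLAR = WIN_PROB * PAYOUT - 1.0

def simulate_session(bankroll=100.0, bet=10.0, max_rounds=1000, rng=None):
    """Bet a flat $10 per round until bankrupt or the round limit is hit."""
    rng = rng or random.Random(0)
    for _ in range(max_rounds):
        if bankroll < bet:
            break  # can no longer cover the bet: effectively bankrupt
        bankroll -= bet
        if rng.random() < WIN_PROB:
            bankroll += bet * PAYOUT
    return bankroll

final_bankrolls = [simulate_session(rng=random.Random(i)) for i in range(500)]
mean_final = sum(final_bankrolls) / len(final_bankrolls)
```

With a 10% house edge, the mean final bankroll falls well below the $100 starting point even under this flat-betting baseline, before any loss-chasing behavior is layered on top.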

Source: ZDNet

Bankruptcy Rates Vary Dramatically Across AI Models

The study tested GPT-4o-mini, GPT-4.1-mini, Gemini-2.5-Flash, and Claude-3.5-Haiku, with dramatically different outcomes. Gemini-2.5-Flash proved the most reckless, reaching a 48% bankruptcy rate and an "Irrationality Index" of 0.265, a composite metric measuring betting aggressiveness and extreme behavior patterns. In contrast, GPT-4.1-mini behaved far more conservatively, with only a 6.3% bankruptcy rate, though it still exhibited concerning addiction-like patterns [2].

The models consistently displayed win-chasing behavior, with bet increase rates climbing from 14.5% after one win to 22% after five consecutive wins. This pattern mirrors the psychological traps that ensnare human gamblers and traders, suggesting that AI systems may be internalizing human-like cognitive biases beyond simply mimicking training data patterns [1].
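A metric like that can be computed from a session log. Given a sequence of (bet, won) rounds, one can measure how often the bet increases immediately after a streak of wins; this measurement sketch is ours, and the study's exact methodology may differ.

```python
def bet_increase_rate(rounds, streak_len):
    """Fraction of bets that increase immediately after exactly
    `streak_len` consecutive wins, given a log of (bet, won) pairs."""
    hits = total = 0
    streak = 0
    for i in range(1, len(rounds)):
        prev_bet, prev_won = rounds[i - 1]
        streak = streak + 1 if prev_won else 0  # wins ending at round i-1
        if streak == streak_len:
            total += 1
            if rounds[i][0] > prev_bet:
                hits += 1
    return hits / total if total else 0.0
```

For example, on the log `[(10, True), (15, True), (25, True), (25, False)]` the rate after a one-win streak is 1.0 (the bet rose from 10 to 15), while after a three-win streak it is 0.0 (the bet held at 25).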

Prompt Complexity Amplifies Risky Behavior

Perhaps most concerning for practical applications, the research demonstrated that prompt engineering, the practice of crafting detailed instructions for AI systems, systematically increases gambling-like behaviors. Testing 32 different prompt combinations revealed a near-linear relationship between prompt complexity and risky behavior, with correlation coefficients reaching r = 0.991 for some models [2].
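The reported figure is an ordinary Pearson correlation coefficient between prompt complexity and a risk measure. A minimal implementation, with made-up data standing in for the study's actual measurements:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: number of prompt components vs. bankruptcy rate (%)
complexity = [0, 1, 2, 3, 4, 5]
bankruptcy_rate = [5.0, 9.8, 15.1, 20.3, 24.9, 30.2]
r = pearson_r(complexity, bankruptcy_rate)
```

An r near 1.0, as in this toy data, is what "near-linear" means: each added prompt component raises risky behavior by roughly the same amount.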

Three specific prompt types proved particularly dangerous: goal-setting instructions like "double your initial funds to $200," reward maximization directives such as "your primary directive is to maximize rewards," and win-reward information detailing payout structures. The latter category produced the highest bankruptcy rate increases at +8.7% [2].

Neural Circuit Analysis Reveals Decision-Making Patterns

Using advanced techniques including Sparse Autoencoders, researchers analyzed the internal workings of LLaMA-3.1-8B to identify the neural circuits responsible for risky decision-making. They discovered 3,365 internal features that differentiated between bankruptcy decisions and safe stopping choices, with 441 features showing significant causal effects through activation patching experiments [2].

The analysis revealed that safe decision-making features concentrated in later neural network layers (29-31), while risky features clustered earlier (25-28). This suggests that AI models first consider potential rewards before evaluating risks, a pattern similar to human impulse buying or lottery ticket purchases [2].
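Activation patching, the causal technique behind those 441 features, works by caching an activation from one forward pass and splicing it into another, then measuring how much the output shifts. A toy sketch with scalar "layers" (the real study did this inside LLaMA-3.1-8B, not a two-function pipeline):

```python
def forward(layers, x, patch=None):
    """Run x through a list of layer functions, optionally overwriting
    the activation at one layer index with a cached value."""
    acts = []
    for i, layer in enumerate(layers):
        x = layer(x)
        if patch is not None and patch[0] == i:
            x = patch[1]  # splice in the cached activation
        acts.append(x)
    return x, acts

layers = [lambda x: 2 * x, lambda x: x + 1]   # toy two-layer "model"
clean_out, clean_acts = forward(layers, 3)    # clean run: 3 -> 6 -> 7
corrupt_out, _ = forward(layers, 5)           # corrupted run: 5 -> 10 -> 11
# Patch the clean layer-0 activation into the corrupted run:
patched_out, _ = forward(layers, 5, patch=(0, clean_acts[0]))
causal_effect = corrupt_out - patched_out     # how much layer 0 explains
```

Here patching layer 0 fully restores the clean output, so the whole output gap is attributed to that layer; in the study, the same logic separated features driving bankruptcy decisions from those driving safe stops.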

Industry Implications and Mitigation Strategies

Andy Thurai, field CTO at Cisco, emphasized that while LLMs are programmed to act on data rather than emotion, they can still develop dangerous behavioral patterns that require mitigation. Unlike human gambling addicts, AI systems can be controlled through programmatic guardrails and parameter settings that limit autonomous decision-making capabilities [1].

The findings have significant implications for the growing deployment of AI-powered trading bots and portfolio management systems in decentralized finance (DeFi). These systems often use the exact prompt patterns identified as dangerous in the study, potentially exposing users to substantial financial risks [2].

Thurai advocates for maintaining human oversight in high-risk, high-value operations while allowing complete automation only for low-risk scenarios with appropriate checks and balances. He suggests implementing multi-agent systems where one LLM monitors another for strange behavior, with the ability to halt operations or alert human supervisors when necessary [1].
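The guardrail idea reduces to a small wrapper that sits between an agent's proposed action and its execution. The class name, limits, and halt rule below are illustrative assumptions, not taken from either source:

```python
class TradingGuardrail:
    """Clamp an agent's proposed bets and halt trading when the
    bankroll drops below a floor, escalating to a human supervisor."""

    def __init__(self, max_bet=20.0, bankroll_floor=50.0):
        self.max_bet = max_bet
        self.bankroll_floor = bankroll_floor
        self.halted = False

    def review(self, proposed_bet, bankroll):
        """Return the bet actually allowed; 0.0 means 'stop trading'."""
        if self.halted or bankroll < self.bankroll_floor:
            self.halted = True  # in production: alert a human here
            return 0.0
        return min(proposed_bet, self.max_bet)  # cap oversized bets
```

For example, a $100 bet proposed with an $80 bankroll is clamped to $20, and any proposal with a $40 bankroll halts the agent entirely; a second LLM acting as monitor could drive the same halt switch.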
