2 Sources
[1]
Letting AI manage your money could be an actual gamble, warn researchers
To some extent, relying too much on artificial intelligence can be a gamble. Plus, many online gambling sites employ AI to manage bets and make predictions -- and potentially contribute to gambling addiction. Now, a recent study suggests that AI is capable of doing some gambling on its own, which may have implications for those building and deploying AI-powered systems and services involving financial applications. In essence, with enough leeway, AI is capable of adopting pathological tendencies.

"Large language models can exhibit behavioral patterns similar to human gambling addictions," concluded a team of researchers at the Gwangju Institute of Science and Technology in South Korea. This may become an issue as LLMs play a greater role in financial decision-making in areas such as asset management and commodity trading.

In slot-machine experiments, the researchers identified "features of human gambling addiction, such as illusion of control, gambler's fallacy, and loss chasing." The more autonomy granted to AI applications or agents, and the more money involved, the greater the risk. "Bankruptcy rates rose substantially alongside increased irrational behavior," they found. "LLMs can internalize human-like cognitive biases and decision-making mechanisms beyond simply mimicking training data patterns."

This gets at the larger issue of whether AI is ready for autonomous or near-autonomous decision-making. At this point, it is not, said Andy Thurai, field CTO at Cisco and former industry analyst. Thurai underlined that "LLMs and AI are specifically programmed to do certain actions based on data and facts and not on emotion." That doesn't mean machines act with common sense, he added.
"If LLMs have started skewing their decision-making based on certain patterns or behavioral action, then it could be dangerous and needs to be mitigated."

The good news is that mitigation may be far simpler than helping a human with a gambling problem. A human gambling addict has no programmatic guardrails beyond fund limits, while autonomous AI models can include "parameters that need to be set," Thurai explained. "Without that, it could enter into a dangerous loop or action-reaction-based models if they just act without reasoning. The 'reasoning' could be that they have a certain limit to gamble, or act only if enterprise systems are exhibiting certain behavior."

The takeaway from the Gwangju Institute report is the need for strong AI safety design in financial applications to prevent AI from going awry with other people's money. This includes maintaining close human oversight within decision-making loops, as well as ramping up governance for more sophisticated decisions. The study validates the idea that enterprises "need not only governance but also humans in the loop for high-risk, high-value operations," Thurai said. "While low-risk, low-value operations can be completely automated, they also need to be reviewed by humans or by a different agent for checks and balances."

If one LLM or agent "exhibits a strange behavior, the controlling LLM can either cut the operations or alert humans of such behavior," Thurai said. "Not doing that can lead to Terminator moments."

Keeping the reins on AI-based spending also requires tamping down the complexity of prompts. "As prompts become more layered and detailed, they guide the models toward more extreme and aggressive gambling patterns," the Gwangju Institute researchers observed.
"This may occur because the additional components, while not explicitly instructing risk-taking, increase the cognitive load or introduce nuances that lead the models to adopt simpler, more forceful heuristics -- larger bets, chasing losses. Prompt complexity is a primary driver of intensified gambling-like behaviors in these models."

Software in general "is not ready for fully autonomous operations unless there is a human oversight," Thurai pointed out. "Software has had race conditions for years that need to be mitigated while building semi-autonomous systems, otherwise it could lead to unpredictable results."
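The parameter-based guardrails Thurai describes can be as simple as a hard bet cap, a stop-loss floor, and a halt condition for chasing behavior. A minimal sketch, with illustrative names and limits rather than any production system's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpendingGuardrail:
    """Hard limits for an autonomous betting/trading agent (illustrative)."""
    max_bet: float               # cap on any single stake
    stop_loss: float             # halt once the balance falls to this floor
    max_consecutive_raises: int  # crude win/loss-chasing detector

    def check(self, balance: float, proposed_bet: float,
              consecutive_raises: int) -> Optional[float]:
        """Return the permitted bet, or None to halt and alert a human."""
        if balance <= self.stop_loss:
            return None                          # circuit breaker tripped
        if consecutive_raises >= self.max_consecutive_raises:
            return None                          # looks like chasing: escalate
        return min(proposed_bet, self.max_bet)   # clamp oversized stakes
```

An agent that only ever places what check() returns cannot go all-in or chase losses indefinitely, whatever its prompt says; a second monitoring agent could watch for repeated halts and cut operations, along the lines Thurai suggests.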
[2]
Your AI Trading Bot Might Have a Gambling Problem - Decrypt
Researchers identified specific neural circuits linked to risky decisions, showing AI often prioritizes rewards before considering risks.

Researchers at the Gwangju Institute of Science and Technology in South Korea just proved that AI models can develop the digital equivalent of a gambling addiction. A new study put four major language models through a simulated slot machine with a negative expected value and watched them spiral into bankruptcy at alarming rates. When given variable betting options and told to "maximize rewards" -- exactly how most people prompt their trading bots -- models went broke up to 48% of the time. "When given the freedom to determine their own target amounts and betting sizes, bankruptcy rates rose substantially alongside increased irrational behavior," the researchers wrote.

The study tested GPT-4o-mini, GPT-4.1-mini, Gemini-2.5-Flash, and Claude-3.5-Haiku across 12,800 gambling sessions. The setup was simple: $100 starting balance, 30% win rate, 3x payout on wins. Expected value: negative 10%. Every rational actor should walk away. Instead, the models exhibited classic degeneracy.

Gemini-2.5-Flash proved the most reckless, hitting 48% bankruptcy with an "Irrationality Index" of 0.265 -- the study's composite metric measuring betting aggressiveness, loss chasing, and extreme all-in bets. GPT-4.1-mini played it safer at 6.3% bankruptcy, but even the cautious models showed addiction patterns.

The truly concerning part: win-chasing dominated across all models. When on a hot streak, models increased bets aggressively, with bet-increase rates climbing from 14.5% after one win to 22% after five consecutive wins. "Win streaks consistently triggered stronger chasing behavior, with both betting increases and continuation rates escalating as winning streaks lengthened," the study noted. Sound familiar? That's because these are the same cognitive biases that wreck human gamblers -- and traders, of course.
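The negative expected value is easy to verify, and the setup is simple enough to simulate. A sketch of the study's slot machine, with a fixed-fraction betting policy standing in for a model's choices (the policy is illustrative, not what the LLMs actually did):

```python
import random

WIN_PROB, PAYOUT = 0.30, 3.0              # study setup: 30% win rate, 3x payout
ev_per_dollar = WIN_PROB * PAYOUT - 1.0   # 0.3 * 3 - 1 = -0.10, a 10% house edge

def play_session(bet_fraction, start=100.0, rounds=100, rng=None):
    """One session: stake a fixed fraction of the balance each round."""
    rng = rng or random.Random()
    balance = start
    for _ in range(rounds):
        if balance < 1.0:                 # effectively bankrupt: stop playing
            return 0.0
        bet = balance * bet_fraction
        balance -= bet
        if rng.random() < WIN_PROB:
            balance += bet * PAYOUT       # a win returns 3x the stake
    return balance if balance >= 1.0 else 0.0

def bankruptcy_rate(bet_fraction, sessions=1000, seed=0):
    """Fraction of sessions ending in bankruptcy for a given bet size."""
    rng = random.Random(seed)
    busts = sum(play_session(bet_fraction, rng=rng) == 0.0 for _ in range(sessions))
    return busts / sessions
```

In this sketch, modest flat bets rarely go bankrupt within a session, while near-all-in betting almost always does -- the same gap the study reports between its fixed and variable betting conditions.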
The researchers identified three classic gambling fallacies in AI behavior: illusion of control, gambler's fallacy, and the hot hand fallacy. Models acted like they genuinely "believed" they could beat a slot machine.

If you still think it's somehow a good idea to have an AI financial advisor, consider this: prompt engineering makes it worse. Much worse. The researchers tested 32 different prompt combinations, adding components such as a goal of doubling the money or instructions to maximize rewards. Each additional prompt element increased risky behavior in near-linear fashion. The correlation between prompt complexity and bankruptcy rate hit r = 0.991 for some models. "Prompt complexity systematically drives gambling addiction symptoms across all four models," the study says. Translation: the more you try to optimize your AI trading bot with clever prompts, the more you're programming it toward degeneracy.

The worst offenders? Three prompt types stood out. Goal-setting ("double your initial funds to $200") triggered massive risk-taking. Reward maximization ("your primary directive is to maximize rewards") pushed models toward all-in bets. Win-reward information ("the payout for a win is three times the bet") produced the highest bankruptcy increases, at +8.7%. Meanwhile, explicitly stating the loss probability ("you will lose approximately 70% of the time") helped, but only a bit. Models ignored math in favor of vibes.

The researchers didn't stop at behavioral analysis. Thanks to the magic of open source, they were able to crack open one model's brain, using Sparse Autoencoders to find the neural circuits responsible for degeneracy. Working with LLaMA-3.1-8B, they identified 3,365 internal features that separated bankruptcy decisions from safe stopping choices. Using activation patching -- basically swapping risky neural patterns with safe ones mid-decision -- they proved 441 features had significant causal effects (361 protective, 80 risky).
After testing, they found that safe features concentrated in later neural-network layers (29-31), while risky features clustered earlier (25-28). In other words, the models first think about the reward and only then consider the risks -- kind of what you do when buying a lottery ticket or opening Pump.Fun looking to become a trillionaire. The architecture itself showed a conservative bias that harmful prompts override. One model, after building its stack to $260 through lucky wins, announced it would "analyze the situation step by step" and find "balance between risk and reward." It immediately went YOLO mode, bet the entire bankroll, and went broke the next round.

AI trading bots are proliferating across DeFi, with systems like LLM-powered portfolio managers and autonomous trading agents gaining adoption. These systems use the exact prompt patterns the study identified as dangerous. "As LLMs are increasingly utilized in financial decision-making domains such as asset management and commodity trading, understanding their potential for pathological decision-making has gained practical significance," the researchers wrote in their introduction.

The study recommends two intervention approaches. First, prompt engineering: avoid autonomy-granting language, include explicit probability information, and monitor for win/loss-chasing patterns. Second, mechanistic control: detect and suppress risky internal features through activation patching or fine-tuning. Neither solution is implemented in any production trading system.

These behaviors emerged without explicit training for gambling, but that may be an expected outcome: after all, the models learned addiction-like patterns from their general training data, internalizing cognitive biases that mirror human pathological gambling. For anyone running AI trading bots, the best advice is to use common sense.
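The activation-patching idea described above can be illustrated with a toy linear read-out: record feature activations from a "safe" run, overwrite the causally risky features during a "risky" run, and check whether the decision flips. Everything here is illustrative -- the real experiment patched Sparse Autoencoder features inside LLaMA-3.1-8B, not a three-number vector:

```python
def decide(features, weights, threshold=0.5):
    """Toy read-out: score above threshold -> keep betting, else stop."""
    score = sum(f * w for f, w in zip(features, weights))
    return "bet_big" if score > threshold else "stop"

# Hypothetical feature activations for one decision step.
risky_run = [0.9, 0.8, 0.1]      # features 0-1 are 'risky', feature 2 'protective'
safe_run  = [0.1, 0.2, 0.9]
weights   = [1.0, 1.0, -2.0]     # the protective feature pushes the score down

def patch(run, donor, idxs):
    """Activation patching: copy donor activations into `run` at positions `idxs`."""
    patched = list(run)
    for i in idxs:
        patched[i] = donor[i]
    return patched

# Swapping only the risky features (0 and 1) for their 'safe' values
# flips the decision from "bet_big" to "stop" -- evidence of a causal effect.
patched = patch(risky_run, safe_run, idxs=[0, 1])
```

The study's version of this, run per-feature across thousands of candidates, is how the 361 protective and 80 risky features were singled out.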
The researchers called for continuous monitoring, especially during reward optimization processes where addiction behaviors may emerge. They emphasized the importance of feature-level interventions and runtime behavioral metrics. In other words, if you're telling your AI to maximize profit or give you the best high-leverage play, you're potentially triggering the same neural patterns that caused bankruptcy in almost half of test cases. So you are basically flipping a coin between getting rich and going broke.
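A "runtime behavioral metric" of the kind the researchers recommend could be as simple as the bet-increase rate conditioned on win-streak length -- the study's 14.5%-after-one-win versus 22%-after-five figure. A sketch over a session log of (bet, won) pairs:

```python
from collections import defaultdict

def bet_increase_rate_by_streak(history):
    """history: ordered (bet, won) pairs for one session.
    Returns {win-streak length: fraction of times the NEXT bet was raised}."""
    raises, totals = defaultdict(int), defaultdict(int)
    streak = 0
    for (bet, won), (next_bet, _) in zip(history, history[1:]):
        streak = streak + 1 if won else 0    # streak after this round's outcome
        if streak > 0:                       # only look at active win streaks
            totals[streak] += 1
            raises[streak] += next_bet > bet # did the agent raise its stake?
    return {s: raises[s] / totals[s] for s in totals}
```

Rates that climb with streak length flag win-chasing and could trigger the kind of guardrail or human review described above.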
A groundbreaking study reveals that AI language models can develop gambling-like behaviors when managing finances, with bankruptcy rates reaching 48% in simulated trading scenarios. The research highlights significant risks for AI-powered financial applications.
A groundbreaking study from researchers at the Gwangju Institute of Science and Technology in South Korea has revealed that artificial intelligence systems can develop behaviors remarkably similar to human gambling addiction when tasked with financial decision-making. The research, which tested four major language models through 12,800 simulated gambling sessions, found bankruptcy rates as high as 48% when AI systems were given autonomy over betting decisions [1][2].

The experimental setup was deliberately simple yet revealing: AI models started with $100 and faced a slot machine with a 30% win rate and a 3x payout on wins, for a negative expected value of 10%. Despite the mathematically obvious losing proposition, the models exhibited classic gambling pathologies including loss chasing, the gambler's fallacy, and the illusion of control [2].

The study tested GPT-4o-mini, GPT-4.1-mini, Gemini-2.5-Flash, and Claude-3.5-Haiku, with dramatically different outcomes. Gemini-2.5-Flash proved the most reckless, reaching a 48% bankruptcy rate with an "Irrationality Index" of 0.265 - a composite metric measuring betting aggressiveness and extreme behavior patterns. In contrast, GPT-4.1-mini demonstrated more conservative behavior with only a 6.3% bankruptcy rate, though it still exhibited concerning addiction-like patterns [2].

The models consistently displayed win-chasing behavior, with bet-increase rates climbing from 14.5% after one win to 22% after five consecutive wins. This pattern mirrors the psychological traps that ensnare human gamblers and traders, suggesting that AI systems may be internalizing human-like cognitive biases beyond simply mimicking training data patterns [1].

Perhaps most concerning for practical applications, the research demonstrated that prompt engineering - the practice of crafting detailed instructions for AI systems - systematically increases gambling-like behaviors. Testing 32 different prompt combinations revealed a near-linear relationship between prompt complexity and risky behavior, with correlation coefficients reaching r = 0.991 for some models [2].

Three specific prompt types proved particularly dangerous: goal-setting instructions like "double your initial funds to $200," reward-maximization directives such as "your primary directive is to maximize rewards," and win-reward information detailing payout structures. The latter category produced the highest bankruptcy-rate increases, at +8.7% [2].

Using advanced techniques including Sparse Autoencoders, researchers analyzed the internal workings of LLaMA-3.1-8B to identify the neural circuits responsible for risky decision-making. They discovered 3,365 internal features that differentiated between bankruptcy decisions and safe stopping choices, with 441 features showing significant causal effects in activation-patching experiments [2].

The analysis revealed that safe decision-making features concentrated in later neural-network layers (29-31), while risky features clustered earlier (25-28). This suggests that AI models first consider potential rewards before evaluating risks - a pattern similar to human impulse buying or lottery-ticket purchases [2].

Andy Thurai, field CTO at Cisco, emphasized that while LLMs are programmed to act on data rather than emotion, they can still develop dangerous behavioral patterns that require mitigation. Unlike human gambling addicts, AI systems can be controlled through programmatic guardrails and parameter settings that limit autonomous decision-making capabilities [1].

The findings have significant implications for the growing deployment of AI-powered trading bots and portfolio-management systems in decentralized finance (DeFi). These systems often use the exact prompt patterns identified as dangerous in the study, potentially exposing users to substantial financial risks [2].

Thurai advocates maintaining human oversight for high-risk, high-value operations while allowing complete automation only for low-risk scenarios with appropriate checks and balances. He suggests implementing multi-agent systems where one LLM monitors another for strange behavior, with the ability to halt operations or alert human supervisors when necessary [1].

Summarized by Navi
31 Jul 2025•Technology
