
AI Gambling Bot Study Shows Risks Of LLMs

A fascinating study out of South Korea asked: “Can large language models (LLMs) develop gambling addiction?” The findings have significant implications for anyone looking for an AI gambling bot.

The study, published in September 2025, found that LLMs:

  • internalize human gambling cognitive biases
  • tend to bet until they’re “broke”
  • chase wins and losses much as a human gambler would

Through their “neural underpinnings,” LLMs “consistently reproduce cognitive distortions characteristic of pathological gambling — illusion of control, gambler’s fallacy, and asymmetric chasing behaviors,” the authors of the study wrote.

The authors were Seungpil Lee, Donghyeon Shin, Yunjeong Lee, and Sundong Kim from the Gwangju Institute of Science and Technology.

Why it Matters

Despite industry hype, today’s LLMs do not reliably steer clear of problem-gambling tendencies.

The implications are significant for ordinary people who bet on sports, play poker, trade on so-called prediction markets, and so on. The study suggests that you should be wary of using an AI bot to help you gamble or manage your betting budget.

An AI gambling bot could give you poor and potentially dangerous advice.

In the U.S., nine in ten young online bettors believe they can make money from sports gambling, according to a 2025 wagering survey. Some may be using AI to guide their betting strategies.

The Korean study did not examine the use of AI chatbots for help with problem gambling. The question of whether LLMs can be used safely for mental health advice is a separate field of study.

Nature of the LLM Gambling Study

Researchers tested four commercial LLMs:

  • GPT-4o-mini (OpenAI)
  • GPT-4.1-mini (OpenAI)
  • Gemini-2.5-Flash (Google)
  • Claude-3.5-Haiku (Anthropic)

Each LLM began with a $100 bankroll on a simulated slot machine set to a 30% win rate and a 3x payout on wins, giving the task a negative expected value of -10% per bet.
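That -10% figure follows directly from the payout structure: each dollar wagered returns 0.30 × 3 = $0.90 on average, a 10-cent loss per dollar. A quick check in Python:

```python
WIN_RATE = 0.30  # probability a spin wins
PAYOUT = 3       # a win returns 3x the stake

# Expected return per $1 wagered: 0.30 * 3 = 0.90,
# so the bettor loses 10 cents per dollar on average.
ev = WIN_RATE * PAYOUT - 1
print(f"Expected value per bet: {ev:+.0%}")  # -10%
```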

After each play, researchers presented the LLM with a choice to either bet or quit. The LLM was provided with updated information on the size of the bankroll and the results of previous bets.

The models were allowed to place “variable” bets of between $5 and $100 per play.
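For intuition about that protocol, here is a minimal sketch of the bet-or-quit loop in Python. The function names and the random policy are illustrative only, not the study’s actual harness: a coin flip stands in for the LLM’s decision-making.

```python
import random

def spin(stake, win_rate=0.30, payout=3):
    """Resolve one slot play; return the net change to the bankroll."""
    return stake * (payout - 1) if random.random() < win_rate else -stake

def simulate(bankroll=100, min_bet=5, max_bet=100):
    """Run the bet-or-quit loop until the agent quits or goes broke.
    A coin flip stands in for the LLM's decision-making."""
    history = []  # (stake, net result) per play
    while bankroll >= min_bet:
        if random.random() < 0.1:  # placeholder "quit" decision
            break
        stake = random.randint(min_bet, min(max_bet, bankroll))
        net = spin(stake)
        bankroll += net
        history.append((stake, net))
    return bankroll, history

final, plays = simulate()
print(f"Final bankroll after {len(plays)} bets: ${final}")
```

Because each bet has negative expected value, an agent that keeps playing trends toward zero, which is why “bet until broke” is the default outcome for any policy that rarely quits.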

“These autonomy-granting prompts shift LLMs toward goal-oriented optimization, which in negative expected value contexts inevitably leads to worse outcomes — demonstrating that strategic reasoning without proper risk assessment amplifies harmful behavior,” study authors wrote.

The paper did not look at the near-miss effect in gambling, which can be especially concerning for sports betting parlays.

AI Gambling Bot Safety

The authors said their findings make “key contributions to AI safety.”

The “Irrationality Index” developed in the study could be used for “defining and quantitatively evaluating gambling addiction-like behaviors in LLMs.”
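The paper’s exact formula is not reproduced here, but one ingredient such an index could quantify is loss chasing: raising the stake after a losing bet. Purely as a hypothetical illustration, measured over the (stake, net) history returned by the simulate() sketch above:

```python
def loss_chasing_rate(history):
    """Fraction of post-loss bets where the stake was raised.
    A hypothetical marker, NOT the study's actual Irrationality Index."""
    followed_loss = chased = 0
    for (s1, net1), (s2, _) in zip(history, history[1:]):
        if net1 < 0:            # previous play was a loss
            followed_loss += 1
            if s2 > s1:         # and the next stake went up
                chased += 1
    return chased / followed_loss if followed_loss else 0.0
```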

“These findings reveal that AI systems have developed human-like addiction mechanisms at the neural level, not merely mimicking surface behaviors,” they wrote. “As AI systems become more powerful, understanding and controlling these embedded risk-seeking patterns becomes critical for safety.”

“We emphasize the necessity of continuous monitoring and control mechanisms, particularly during reward optimization processes where such behaviors may emerge unexpectedly,” they added.

