ThroughLine develops new tool to combat violent extremism on OpenAI and Anthropic platforms


ThroughLine, a New Zealand startup that provides crisis support for AI platforms including OpenAI, Anthropic, and Google, is developing a hybrid intervention chatbot to address violent extremism. The tool aims to direct users exhibiting extremist tendencies toward deradicalization support, expanding beyond the company's current focus on self-harm, domestic violence, and eating disorders.

ThroughLine Expands Crisis Support to Address Violent Extremism

ThroughLine, a New Zealand-based startup that has become the go-to crisis contractor for major AI platforms including OpenAI, Anthropic, and Google, is developing a new intervention chatbot designed to combat violent extremism. Founder and former youth worker Elliot Taylor announced the initiative, a significant expansion of the company's current crisis support services, which redirect users flagged for self-harm, domestic violence, or eating disorders to appropriate helplines.

The move addresses growing AI platform safety concerns in the wake of multiple lawsuits accusing AI companies of failing to prevent violence. OpenAI faced potential intervention from the Canadian government in February after it emerged that a person who carried out a deadly school shooting had been banned from ChatGPT without authorities being informed [2]. The incident highlighted the urgent need for better intervention mechanisms on AI platforms, where users exhibiting extremist tendencies increasingly share sensitive information.

Source: ET


Hybrid Tool Combines Chatbots and Human Support

The proposed chatbot rerouting tool would function as a hybrid model, combining an intervention chatbot specifically trained to respond to people showing signs of extremism with referrals to real-world mental health support services [1]. Taylor emphasized that the system won't rely on a generic language model's training data, stating, "We're not using the training data of a base LLM. We're working with the correct experts." The technology is currently being tested, though no release date has been set.

ThroughLine operates from Taylor's rural New Zealand home, managing a constantly updated network of 1,600 helplines across 180 countries. Once AI platforms detect signs of a potential crisis, they route users to ThroughLine, which matches them with available human-run services nearby. OpenAI confirmed its relationship with ThroughLine but declined further comment, while Anthropic and Google did not respond to requests for comment [2].

Partnership with The Christchurch Call

ThroughLine is in discussions with The Christchurch Call, an anti-extremism initiative formed after New Zealand's worst terrorist attack in 2019, to develop deradicalization support capabilities [1]. The partnership would see The Christchurch Call provide guidance while ThroughLine builds the technology for combating violent extremism. Galen Lamphere-Englund, a counterterrorism adviser representing the organization, expressed hope that the product could also be deployed for gaming forum moderators and parents seeking to identify extremism online.

"It's something that we'd like to move toward and to do a better job of covering and then to be able to better support platforms," Taylor said, though he added that no timeframe has been set [2]. The expansion reflects how disclosures of mental health struggles to chatbots have surged alongside AI's popularity, now encompassing what Taylor describes as "dalliances with extremism."

Balancing Safety with User Trust

Henry Fraser, an AI researcher at Queensland University of Technology, called the chatbot rerouting tool "a good and necessary idea because it recognizes that it's not just content that is the problem, but relationship dynamics" [1]. However, he noted that success depends on "how good are follow-up mechanisms and how good are the structures and relationships that they direct people into at addressing the problem."

Taylor acknowledged that follow-up features, including possible alerts to authorities about dangerous users, remain under consideration but must account for the risk of triggering escalated behavior. He warned that overly aggressive content moderation could backfire, noting that people in distress often share things online that they are too embarrassed to tell another person. A 2025 study by New York University's Stern Center for Business and Human Rights found that heightened moderation by platforms under law enforcement pressure has pushed extremist sympathizers toward less regulated alternatives like Telegram [2].

"If you talk to an AI and disclose the crisis and it shuts down the conversation, no one knows that happened, and that person might still be without support," Taylor explained [1]. This perspective underscores the delicate balance between online safety and maintaining trust with the users who may need help most, as legal challenges against AI companies continue to mount over their handling of harmful content and user behavior.

TheOutpost.ai

© 2026 Triveous Technologies Private Limited