2 Sources
[1]
Crisis contractor for OpenAI, Anthropic eyes a move to combat extremism
People who show violent extremist tendencies on ChatGPT and other artificial intelligence platforms could be directed in the future to human and chatbot-based deradicalisation support through a new tool in development in New Zealand, the people behind it said.

The initiative is the latest attempt to address safety concerns in the face of a growing number of lawsuits accusing AI companies of failing to stop, and even enabling, violence. OpenAI was threatened with intervention by the Canadian government in February after revealing that a person who carried out a deadly school shooting had been banned by the platform without the authorities being informed.

ThroughLine, a startup hired in recent years by ChatGPT owner OpenAI as well as rivals Anthropic and Google to redirect users to crisis support when they are flagged as being at risk of self-harm, domestic violence or an eating disorder, is also exploring ways to broaden its offer to include preventing violent extremism, its founder and former youth worker Elliot Taylor said.

The company is in discussions with The Christchurch Call, an initiative to stamp out online hate formed after New Zealand's worst terrorist attack in 2019, which would involve the anti-extremism group giving guidance while ThroughLine develops the intervention chatbot, Taylor said.

"It's something that we'd like to move toward and to do a better job of covering and then to be able to better support platforms," Taylor said in an interview, adding that no timeframe had been set.

OpenAI confirmed the relationship with ThroughLine but declined to comment further. Anthropic and Google did not immediately respond to requests for comment.

Taylor's firm, which he runs from his home in rural New Zealand, has become a go-to for AI firms with its offer of a constantly checked network of 1,600 helplines in 180 countries. Once the AI detects signs of a potential mental health crisis, it routes the user to ThroughLine, which matches them with an available human-run service nearby.

But ThroughLine's scope has been limited to specific categories, the founder said. The breadth of mental health struggles that people disclose online has exploded with the popularity of AI chatbots, and now includes dalliances with extremism, he added.

More chatbots, more problems

The anti-extremism tool would probably be a hybrid model combining a chatbot trained to respond to people who show signs of extremism and referrals to real-world mental health services, Taylor said. "We're not using the training data of a base LLM," he said, referring to the generic datasets large language model platforms use to form coherent text. "We're working with the correct experts."

The technology is currently being tested, but no date has been set for release. Galen Lamphere-Englund, a counterterrorism adviser representing The Christchurch Call, said he hoped to roll the product out for moderators of gaming forums and for parents and caregivers who want to weed out extremism online.
A chatbot rerouting tool was "a good and necessary idea because it recognises that it's not just content that is the problem, but relationship dynamics," said Henry Fraser, an AI researcher at Queensland University of Technology. The product's success may depend on "how good are follow-up mechanisms and how good are the structures and relationships that they direct people into at addressing the problem," he said.

Taylor said follow-up features, including possible alerts to authorities about dangerous users, were still to be determined but would take into account any risk of triggering escalated behaviour. He said people in distress tended to share things online that they were too embarrassed to say to a person, and that governments risked compounding danger if they pressured platforms to cut off users who engaged in sensitive conversations.

Heightened moderation of militancy-related content by platforms under pressure from law enforcement has seen sympathisers move to less regulated alternatives like Telegram, according to a 2025 study by New York University's Stern Center for Business and Human Rights.

"If you talk to an AI and disclose the crisis and it shuts down the conversation, no one knows that happened, and that person might still be without support," Taylor said.
[2]
Crisis contractor for OpenAI, Anthropic eyes a move to combat extremism
ThroughLine, a startup hired in recent years by ChatGPT owner OpenAI, is exploring ways to broaden its offer to include preventing violent extremism.

People who show violent extremist tendencies on ChatGPT and other artificial intelligence platforms could be directed in the future to human and chatbot-based deradicalization support through a new tool in development in New Zealand, the people behind it said.

The initiative is the latest attempt to address safety concerns in the face of a growing number of lawsuits accusing AI companies of failing to stop, and even enabling, violence. OpenAI was threatened with intervention by the Canadian government in February after revealing that a person who carried out a deadly school shooting had been banned by the platform without the authorities being informed.

ThroughLine, a startup hired in recent years by ChatGPT owner OpenAI as well as rivals Anthropic and Google to redirect users to crisis support when they are flagged as being at risk of self-harm, domestic violence, or an eating disorder, is also exploring ways to broaden its offer to include preventing violent extremism, its founder and former youth worker Elliot Taylor said.

The company is in discussions with The Christchurch Call, an initiative to stamp out online hate formed after New Zealand's worst terrorist attack in 2019, which would involve the anti-extremism group giving guidance while ThroughLine develops the intervention chatbot, Taylor said.

"It's something that we'd like to move toward and to do a better job of covering and then to be able to better support platforms," Taylor said in an interview, adding that no timeframe had been set.

OpenAI confirmed the relationship with ThroughLine but declined to comment further. Anthropic and Google did not immediately respond to requests for comment.

Taylor's firm, which he runs from his home in rural New Zealand, has become a go-to for AI firms with its offer of a constantly checked network of 1,600 helplines in 180 countries. Once the AI detects signs of a potential mental health crisis, it routes the user to ThroughLine, which matches them with an available human-run service nearby.

But ThroughLine's scope has been limited to specific categories, the founder said. The breadth of mental health struggles that people disclose online has exploded with the popularity of AI chatbots, and now includes dalliances with extremism, he added.

More chatbots, more problems

The anti-extremism tool would probably be a hybrid model combining a chatbot trained to respond to people who show signs of extremism and referrals to real-world mental health services, Taylor said. "We're not using the training data of a base LLM," he said, referring to the generic datasets large language model platforms use to form coherent text. "We're working with the correct experts."

The technology is currently being tested, but no date has been set for release. Galen Lamphere-Englund, a counterterrorism adviser representing The Christchurch Call, said he hoped to roll the product out for moderators of gaming forums and for parents and caregivers who want to weed out extremism online.

A chatbot rerouting tool was "a good and necessary idea because it recognizes that it's not just content that is the problem, but relationship dynamics," said Henry Fraser, an AI researcher at Queensland University of Technology.
The product's success may depend on "how good are follow-up mechanisms and how good are the structures and relationships that they direct people into at addressing the problem," he said.

Taylor said follow-up features, including possible alerts to authorities about dangerous users, were still to be determined but would take into account any risk of triggering escalated behavior. He said people in distress tended to share things online that they were too embarrassed to say to a person, and that governments risked compounding danger if they pressured platforms to cut off users who engaged in sensitive conversations.

Heightened moderation of militancy-related content by platforms under pressure from law enforcement has seen sympathizers move to less regulated alternatives like Telegram, according to a 2025 study by New York University's Stern Center for Business and Human Rights.

"If you talk to an AI and disclose the crisis and it shuts down the conversation, no one knows that happened, and that person might still be without support," Taylor said.
ThroughLine, a New Zealand startup that provides crisis support for AI platforms including OpenAI, Anthropic, and Google, is developing a hybrid intervention chatbot to address violent extremism. The tool aims to direct users exhibiting extremist tendencies toward deradicalization support, expanding beyond the company's current focus on self-harm, domestic violence, and eating disorders.
ThroughLine, a New Zealand-based startup that has become the go-to crisis contractor for major AI platforms including OpenAI, Anthropic, and Google, is developing a new intervention chatbot designed to combat violent extremism. Founder and former youth worker Elliot Taylor announced the initiative, which represents a significant expansion of the company's current crisis support services that redirect users flagged for self-harm, domestic violence, or eating disorders to appropriate helplines.
The move addresses growing AI platform safety concerns in the wake of multiple lawsuits accusing AI companies of failing to prevent violence. OpenAI faced potential intervention from the Canadian government in February after a person who carried out a deadly school shooting had been banned by ChatGPT without authorities being informed [2]. This incident highlighted the urgent need for better intervention mechanisms on AI platforms where users exhibiting extremist tendencies increasingly share sensitive information.
The proposed chatbot rerouting tool would function as a hybrid model, combining an intervention chatbot specifically trained to respond to people showing signs of extremism with referrals to real-world mental health support services [1]. Taylor emphasized that the system won't rely on generic language model training data, stating: "We're not using the training data of a base LLM. We're working with the correct experts." The technology is currently being tested, though no release date has been set.

ThroughLine operates from Taylor's rural New Zealand home, managing a constantly updated network of 1,600 helplines across 180 countries. Once AI platforms detect signs of a potential crisis, they route users to ThroughLine, which matches them with an available human-run service nearby. OpenAI confirmed its relationship with ThroughLine but declined further comment, while Anthropic and Google did not respond to requests for comment [2].

ThroughLine is in discussions with The Christchurch Call, an anti-extremism initiative formed after New Zealand's worst terrorist attack in 2019, to develop deradicalization support capabilities [1]. The partnership would involve The Christchurch Call providing guidance while ThroughLine develops the technology for combating violent extremism. Galen Lamphere-Englund, a counterterrorism adviser representing the organization, expressed hope to deploy the product for gaming forum moderators and parents seeking to identify extremism online.

"It's something that we'd like to move toward and to do a better job of covering and then to be able to better support platforms," Taylor said, though he added that no timeframe has been set [2]. The expansion reflects how mental health struggles disclosed to chatbots have exploded with AI popularity, now including what Taylor describes as "dalliances with extremism."
Henry Fraser, an AI researcher at Queensland University of Technology, called the chatbot rerouting tool "a good and necessary idea because it recognizes that it's not just content that is the problem, but relationship dynamics" [1]. However, he noted success depends on "how good are follow-up mechanisms and how good are the structures and relationships that they direct people into at addressing the problem."

Taylor acknowledged that follow-up features, including possible alerts to authorities about dangerous users, remain under consideration but must account for risks of triggering escalated behavior. He warned that overly aggressive content moderation could backfire, noting that people in distress often share things online they're too embarrassed to tell another person. A 2025 study by New York University's Stern Center for Business and Human Rights found that heightened moderation by platforms under law enforcement pressure has pushed sympathizers toward less regulated alternatives like Telegram [2].

"If you talk to an AI and disclose the crisis and it shuts down the conversation, no one knows that happened, and that person might still be without support," Taylor explained [1]. This perspective underscores the delicate balance between online safety and maintaining trust with users who may need help most, as legal challenges against AI companies continue to mount over their handling of harmful content and user behavior.

Summarized by Navi