[1]
AI finds speech patterns in Reddit hate groups mirror those in some psychiatric forums
A new analysis suggests that posts in hate speech communities on the social media website Reddit share speech-pattern similarities with posts in Reddit communities for certain psychiatric disorders. Dr. Andrew William Alexander and Dr. Hongbin Wang of Texas A&M University, U.S., present these findings in the open-access journal PLOS Digital Health.

The ubiquity of social media has raised concerns about its role in spreading hate speech and misinformation, potentially contributing to prejudice, discrimination and real-world violence. Prior research has uncovered associations between certain personality traits and the act of posting online hate speech or misinformation. However, whether any associations exist between psychological well-being and online hate speech or misinformation has been unclear.

To help clarify, Alexander and Wang used artificial intelligence tools to analyze posts from 54 Reddit communities relevant to hate speech, misinformation, psychiatric disorders, or, for neutral comparison, none of those categories. Selected groups included r/ADHD, a community for discussing attention-deficit/hyperactivity disorder; r/NoNewNormal, dedicated to COVID-19 misinformation; and r/Incels, a community banned for hate speech.

The researchers used the large language model GPT-3 to convert thousands of posts from these communities into numerical representations capturing the posts' underlying speech patterns. These representations, or "embeddings," could then be analyzed through machine-learning techniques and a mathematical approach known as topological data analysis.

This analysis showed that speech patterns in hate speech communities were similar to speech patterns in communities for complex post-traumatic stress disorder and for borderline, narcissistic and antisocial personality disorders. Links between misinformation and psychiatric disorders were less clear, though there were some connections to anxiety disorders.
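The embedding-comparison idea described here can be sketched in a few lines. The sketch below uses synthetic random vectors in place of real GPT-3 embeddings (which would come from an embeddings API) and compares community centroids by cosine similarity; it is a minimal illustration of the general approach under those assumptions, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid(embeddings):
    """Mean embedding vector over one community's posts."""
    return np.mean(embeddings, axis=0)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic stand-ins for post embeddings (real GPT-3 embeddings are
# high-dimensional vectors produced per post by an embeddings API).
dim = 64
base_hate = rng.normal(0, 1, dim)
base_neutral = rng.normal(0, 1, dim)

# Two communities whose posts cluster near a shared "speech pattern"
# direction, and one unrelated neutral community.
hate_posts = base_hate + rng.normal(0, 0.1, (200, dim))
cluster_b_posts = base_hate + rng.normal(0, 0.1, (200, dim))
neutral_posts = base_neutral + rng.normal(0, 0.1, (200, dim))

sim_hate_clusterb = cosine(centroid(hate_posts), centroid(cluster_b_posts))
sim_hate_neutral = cosine(centroid(hate_posts), centroid(neutral_posts))

print(f"hate vs. cluster-B centroid similarity: {sim_hate_clusterb:.3f}")
print(f"hate vs. neutral centroid similarity:   {sim_hate_neutral:.3f}")
```

With this toy setup, the two communities built around the same underlying direction score far higher in centroid similarity than either does against the neutral community, which is the shape of the result the study reports at much larger scale.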
Importantly, these findings do not suggest that people with psychiatric disorders are more prone to hate speech or misinformation. For one, there was no way of knowing whether the analyzed posts were made by people actually diagnosed with these disorders. More research is needed to understand the links and to explore such possibilities as hate speech communities mimicking speech patterns seen in psychiatric disorders.

The authors suggest their findings could help inform new strategies to combat online hate speech and misinformation, such as treating them using elements of therapy developed for psychiatric disorders.

The authors add, "Our results show that the speech patterns of those participating in hate speech online have strong underlying similarities with those participating in communities for individuals with certain psychiatric disorders. Chief among these are the Cluster B personality disorders: Narcissistic Personality Disorder, Antisocial Personality Disorder, and Borderline Personality Disorder. These disorders are generally known for either a lack of empathy or regard toward the well-being of others, or difficulties managing anger and relationships with others."

Alexander notes, "While we looked for similarities between misinformation and psychiatric disorder speech patterns as well, the connections we found were far weaker. Besides a potential anxiety component, I think it is safe to say at this point in time that most people buying into or spreading misinformation are actually quite healthy from a psychiatric standpoint."

Alexander concludes, "I want to emphasize that these results do not mean that individuals with psychiatric conditions are more likely to engage in hate speech. Instead, it suggests that people who engage in hate speech online tend to have similar speech patterns to those with Cluster B personality disorders.
"It could be that the lack of empathy for others fostered by hate speech influences people over time and causes them to exhibit traits similar to those seen in Cluster B personality disorders, at least with regards to the target of their hate speech. "While further studies would be needed to confirm this, I think it is a good indicator that exposing ourselves to these types of communities for long periods of time is not healthy and can make us less empathetic towards others."
[2]
Online Hate Speech Resembles Mental Health Disorder Language - Neuroscience News
Q: What did this study find about hate speech and psychiatric disorders?
A: Posts in online hate speech communities show speech-pattern similarities to posts in communities for personality disorders such as borderline, narcissistic, and antisocial personality disorder.

Q: Does this mean people with psychiatric disorders are more hateful?
A: No. The researchers emphasize that they cannot know whether users had actual diagnoses, only that the language patterns were similar, possibly due to shared traits like low empathy or emotional dysregulation.

Q: Why does this matter for online safety and mental health?
A: Understanding that hate speech mirrors certain psychological speech styles could help develop therapeutic or community-based strategies to combat toxic online behavior.

Summary: A new study using AI tools found that posts in online hate speech communities closely resemble the speech patterns seen in forums for certain personality disorders. While it doesn't imply that people with psychiatric diagnoses are more prone to hate, the overlap suggests that online hate speech may cultivate traits like low empathy and emotional instability. Posts from communities for personality disorders had the most linguistic similarity to hate speech groups. These findings may inform future interventions by adapting therapeutic strategies typically used for managing such disorders.

Dr. Andrew William Alexander and Dr. Hongbin Wang of Texas A&M University, U.S., present these findings July 29 in the open-access journal PLOS Digital Health.
Funding: AWA was a Burroughs Wellcome Fund Scholar supported by a Burroughs Wellcome Fund Physician Scientist Institutional Award (G-1020069) to the Texas A&M University Academy of Physician Scientists (https://www.bwfund.org/funding-opportunities/biomedical-sciences/physician-scientist-institutional-award/grant-recipients/). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. HW received no specific funding for this work.

Author: Claire Turner
Source: PLOS
Contact: Claire Turner - PLOS
Image: The image is credited to Neuroscience News

Original Research: Open access. "Topological data mapping of online hate speech, misinformation, and general mental health: A large language model based study" by Andrew Alexander et al. PLOS Digital Health

Abstract

The advent of social media has led to an increased concern over its potential to propagate hate speech and misinformation, which, in addition to contributing to prejudice and discrimination, has been suspected of playing a role in increasing social violence and crimes in the United States. While literature has shown the existence of an association between posting hate speech and misinformation online and certain personality traits of posters, the general relationship and relevance of online hate speech/misinformation in the context of overall psychological wellbeing of posters remain elusive. One difficulty lies in finding data analytics tools capable of adequately analyzing the massive amount of social media posts to uncover the underlying hidden links. Machine learning and large language models such as ChatGPT make such an analysis possible. In this study, we collected thousands of posts from carefully selected communities on the social media site Reddit.
We then utilized OpenAI's GPT3 to derive embeddings of these posts, which are high-dimensional real-numbered vectors that presumably represent the hidden semantics of posts. We then performed various machine-learning classifications based on these embeddings in order to identify potential similarities between hate speech/misinformation speech patterns and those of various communities. Finally, a topological data analysis (TDA) was applied to the embeddings to obtain a visual map connecting online hate speech, misinformation, various psychiatric disorders, and general mental health.
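The abstract's final step, a visual map connecting communities, can be loosely illustrated with a much simpler construction: a nearest-neighbor graph over community-level embedding centroids. This is a toy stand-in for, not a reproduction of, the Mapper-style topological data analysis the authors describe; the community names and synthetic vectors below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic community-level centroids (stand-ins for averaged GPT-3
# post embeddings of each subreddit group in the study).
names = ["hate_speech", "cluster_b", "anxiety", "misinformation", "neutral"]
dim = 32
base = rng.normal(0, 1, (len(names), dim))
base[1] = base[0] + rng.normal(0, 0.2, dim)  # cluster_b near hate_speech
base[3] = base[2] + rng.normal(0, 0.6, dim)  # misinformation loosely near anxiety

def knn_edges(points, k=1):
    """Connect each point to its k nearest neighbors (Euclidean
    distance) -- a crude stand-in for the connectivity a TDA map reveals."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # a point is never its own neighbor
    edges = set()
    for i in range(len(points)):
        for j in np.argsort(d[i])[:k]:
            edges.add(tuple(sorted((i, int(j)))))
    return edges

edges = knn_edges(base, k=1)
for i, j in sorted(edges):
    print(f"{names[i]} -- {names[j]}")
```

In this toy map the hate-speech node links to the Cluster-B node, and the misinformation node links only loosely to anxiety, mirroring the strong and weak associations the study reports.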
A new AI-powered analysis finds striking similarities between speech patterns in online hate communities and those in forums for certain psychiatric disorders, potentially offering new insights for combating toxic online behavior.
A groundbreaking study conducted by Dr. Andrew William Alexander and Dr. Hongbin Wang from Texas A&M University has uncovered intriguing similarities between speech patterns in online hate communities and those found in forums dedicated to certain psychiatric disorders. The research, published in the open-access journal PLOS Digital Health, utilized advanced artificial intelligence tools to analyze posts from 54 Reddit communities [1].
Source: Phys.org
The researchers employed the large language model GPT-3 to convert thousands of posts into numerical representations, or "embeddings," capturing underlying speech patterns. These were then analyzed using machine learning techniques and topological data analysis [1].

The analysis revealed striking similarities between speech patterns in hate speech communities and those in forums dedicated to complex post-traumatic stress disorder, as well as borderline, narcissistic, and antisocial personality disorders. Interestingly, links between misinformation and psychiatric disorders were less pronounced, with only some connections to anxiety disorders [2].

Dr. Alexander emphasized that these findings do not suggest that individuals with psychiatric disorders are more likely to engage in hate speech. Instead, the results indicate that people participating in online hate speech tend to exhibit speech patterns similar to those with Cluster B personality disorders [1].
Source: Neuroscience News
The researchers hypothesize that prolonged exposure to hate speech communities might foster traits similar to those seen in Cluster B personality disorders, particularly concerning empathy towards others. However, they stress the need for further studies to confirm this hypothesis [2].

These findings could potentially inform new strategies to combat online hate speech and misinformation. The researchers suggest that elements of therapy developed for psychiatric disorders could be adapted to address toxic online behavior [1].

Dr. Alexander noted that the connections between misinformation and psychiatric disorders were much weaker, suggesting that most individuals involved in spreading misinformation are "quite healthy from a psychiatric standpoint" [2].

This study sheds light on the complex relationship between online behavior, hate speech, and mental health. By understanding the linguistic similarities between hate speech and certain psychological speech styles, researchers and policymakers may be better equipped to develop targeted interventions and community-based strategies to promote healthier online interactions [2](https://neurosciencenews.com/online-hate-speech-personality-disorder-29537/).