A recent study has found that posts made in hate speech communities on Reddit exhibit speech patterns similar to those found in forums dedicated to certain psychiatric disorders. The study, conducted by Dr. Andrew William Alexander and Dr. Hongbin Wang of Texas A&M University, was published in the open-access journal PLOS Digital Health on July 29, 2025.
The analysis highlights growing concerns about the role of social media in amplifying hate speech and misinformation, which may contribute to prejudice and real-world violence. Prior research has indicated correlations between specific personality traits and the act of posting hate speech or misinformation online. However, the relationship between psychological well-being and these online behaviors had not been clearly established until now.
To investigate this connection, Alexander and Wang used artificial intelligence algorithms to analyze posts from 54 Reddit communities. These included groups related to hate speech, misinformation, and psychiatric disorders, as well as neutral comparison groups. Among the communities analyzed were r/ADHD, which focuses on attention-deficit/hyperactivity disorder; r/NoNewNormal, a hub for COVID-19 misinformation; and r/Incels, a community banned for promoting hate speech.
The researchers employed the large language model GPT-3 to convert thousands of posts into numerical representations that capture their underlying speech patterns. This allowed them to apply machine-learning techniques and a mathematical method known as topological data analysis. Their findings revealed that the speech patterns in hate speech communities bore similarities to those associated with complex post-traumatic stress disorder, as well as borderline, narcissistic, and antisocial personality disorders.
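To make that pipeline concrete, the sketch below shows how such an analysis might look in Python. It is illustrative only: the study used GPT-3 embeddings and its own topological mapping pipeline, whereas this sketch substitutes an open sentence-embedding model (sentence-transformers) and the kepler-mapper implementation of the Mapper algorithm, and the posts and community labels are invented placeholders rather than data from the paper.

```python
# Illustrative sketch only. The study used GPT-3 embeddings; here an open
# sentence-embedding model and the kepler-mapper library stand in, and the
# posts/labels below are invented placeholders, not data from the paper.
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN
import kmapper as km

# Placeholder posts tagged with the kind of community they might come from.
posts = [
    ("people like that deserve nothing but contempt", "hate_speech"),
    ("no one outside the group sees how worthless they are", "hate_speech"),
    ("the case numbers are fabricated to keep everyone scared", "misinformation"),
    ("they are hiding the real data about the vaccines", "misinformation"),
    ("I lash out at everyone and then feel empty afterwards", "psychiatric"),
    ("I can't keep any relationship stable for more than a month", "psychiatric"),
    ("what is a good beginner recipe for sourdough bread?", "control"),
    ("any tips for training for a first half marathon?", "control"),
]
texts = [text for text, _ in posts]
labels = [label for _, label in posts]

# Step 1: turn each post into a numerical representation (an embedding) that
# captures its speech patterns, analogous to the GPT-3 embeddings in the study.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(texts, normalize_embeddings=True)

# Step 2: topological data analysis with the Mapper algorithm: project the
# embeddings onto a low-dimensional "lens", cover the lens with overlapping
# bins, and cluster within each bin to build a graph of the data's shape.
mapper = km.KeplerMapper(verbose=0)
lens = mapper.fit_transform(embeddings, projection=PCA(n_components=2))
graph = mapper.map(
    lens,
    embeddings,
    cover=km.Cover(n_cubes=4, perc_overlap=0.5),
    clusterer=DBSCAN(eps=1.1, min_samples=2),
)

# Step 3: see which community labels land in the same nodes of the graph;
# labels that repeatedly share nodes have similar underlying speech patterns.
for node, member_ids in graph["nodes"].items():
    print(node, sorted({labels[i] for i in member_ids}))

# mapper.visualize(graph, path_html="speech_map.html")  # optional HTML view
```

In a real analysis the graph nodes would contain thousands of posts, and it is the overall structure of the map, rather than any single cluster, that indicates which communities' speech patterns overlap.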
While the analysis suggested some connections between misinformation and psychiatric disorders, these links were less definitive, showing only a potential association with anxiety disorders. Importantly, the study does not imply that individuals diagnosed with psychiatric conditions are more likely to engage in hate speech or spread misinformation. The posts analyzed could have been authored by individuals without any psychiatric diagnosis.
The authors propose that their findings could lead to new strategies for combating online hate speech and misinformation, possibly through therapeutic approaches designed for psychiatric disorders. Dr. Alexander stated, “Our results show that the speech patterns of those participating in hate speech online have strong underlying similarities with those participating in communities for individuals with certain psychiatric disorders.” He highlighted that these include the Cluster B personality disorders, which are characterized by a lack of empathy or difficulties in managing anger and relationships.
Dr. Alexander further clarified the distinction between those spreading misinformation and individuals with psychiatric disorders, saying, “While we looked for similarities between misinformation and psychiatric disorder speech patterns as well, the connections we found were far weaker. Most people buying into or spreading misinformation are actually quite healthy from a psychiatric standpoint.”
He emphasized the importance of understanding the implications of these findings, suggesting that exposure to hate speech communities could diminish empathy over time. “It could be that the lack of empathy for others fostered by hate speech influences people over time and causes them to exhibit traits similar to those seen in Cluster B personality disorders,” he noted.
The study calls for further research to explore these relationships and to better understand how online environments may shape speech and behavior. As social media continues to evolve, the implications of this research could be significant in informing approaches to mitigate the spread of hate speech and misinformation online.
For further reading, the full study is available under the title “Topological data mapping of online hate speech, misinformation, and general mental health: A large language model-based study” in PLOS Digital Health.