Unifying the Extremes: Developing a Unified Model for Detecting and Predicting Extremist Traits and Radicalization
- URL: http://arxiv.org/abs/2501.04820v1
- Date: Wed, 08 Jan 2025 20:17:24 GMT
- Title: Unifying the Extremes: Developing a Unified Model for Detecting and Predicting Extremist Traits and Radicalization
- Authors: Allison Lahnala, Vasudha Varadarajan, Lucie Flek, H. Andrew Schwartz, Ryan L. Boyd
- Abstract summary: We propose a novel method for extracting and analyzing extremist discourse across a range of online community forums.
By focusing on verbal behavioral signatures of extremist traits, we develop a framework for quantifying extremism at both user and community levels.
Our findings contribute to the study of extremism by introducing a more holistic, cross-ideological approach.
- Score: 13.611821646402818
- Abstract: The proliferation of ideological movements into extremist factions via social media has become a global concern. While radicalization has been studied extensively within the context of specific ideologies, our ability to accurately characterize extremism in more generalizable terms remains underdeveloped. In this paper, we propose a novel method for extracting and analyzing extremist discourse across a range of online community forums. By focusing on verbal behavioral signatures of extremist traits, we develop a framework for quantifying extremism at both user and community levels. Our research identifies 11 distinct factors, which we term "The Extremist Eleven," as a generalized psychosocial model of extremism. Applying our method to various online communities, we demonstrate an ability to characterize ideologically diverse communities across the 11 extremist traits. We demonstrate the power of this method by analyzing user histories from members of the incel community. We find that our framework accurately predicts which users join the incel community up to 10 months before their actual entry with an AUC of > 0.6, steadily increasing to an AUC of ~0.9 three to four months before the event. Further, we find that upon entry into an extremist forum, the users tend to maintain their level of extremism within the community, while still remaining distinguishable from the general online discourse. Our findings contribute to the study of extremism by introducing a more holistic, cross-ideological approach that transcends traditional, trait-specific models.
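The reported prediction setup (trait-based user scores evaluated by AUC at varying lead times before forum entry) can be illustrated with a small sketch. The snippet below is a hypothetical reconstruction on synthetic data, not the authors' pipeline: the 11-dimensional trait scores, the logistic-regression classifier, and all data shapes are assumptions made purely for illustration.

```python
# Hypothetical sketch (not the authors' released code): score users on 11
# extremist-trait dimensions and measure how well those scores predict later
# entry into an extremist forum, at varying lead times, via AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_USERS, N_TRAITS, N_MONTHS = 1000, 11, 10    # users, "Extremist Eleven" traits, lead times

# Toy stand-in for per-user, per-month trait scores derived from language
# (e.g., factor scores over verbal-behavior features): users x months x traits.
X = rng.normal(size=(N_USERS, N_MONTHS, N_TRAITS))
y = rng.integers(0, 2, size=N_USERS)          # 1 = user later joined the extremist forum

# Inject a synthetic signal that strengthens closer to entry (month 0 = closest),
# mimicking the reported rise from AUC > 0.6 to AUC ~ 0.9.
for m in range(N_MONTHS):
    X[y == 1, m, :] += 0.15 * (N_MONTHS - m)

# Fit one classifier per lead time and report AUC as a function of months before entry.
for months_before in range(N_MONTHS):
    feats = X[:, months_before, :]            # trait scores observed at that lead time
    X_tr, X_te, y_tr, y_te = train_test_split(
        feats, y, test_size=0.3, random_state=0, stratify=y)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{months_before} months before entry: AUC = {auc:.2f}")
```

With this synthetic signal the printed AUCs rise as the lead time shrinks, mirroring the qualitative trend described in the abstract.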
Related papers
- From Experts to the Public: Governing Multimodal Language Models in Politically Sensitive Video Analysis [48.14390493099495]
This paper examines the governance of multimodal large language models (MM-LLMs) through individual and collective deliberation.
We conducted a two-step study: first, interviews with 10 journalists established a baseline understanding of expert video interpretation; second, 114 individuals from the general public engaged in deliberation using Inclusive.AI.
arXiv Detail & Related papers (2024-09-15T03:17:38Z) - A Lexicon for Studying Radicalization in Incel Communities [0.8919993498343158]
Incels are an extremist online community of men who believe in an ideology rooted in misogyny, racism, the glorification of violence, and dehumanization.
This paper presents a lexicon with terms and definitions for common incel root words, prefixes, and affixes.
arXiv Detail & Related papers (2024-01-15T19:39:29Z) - Dynamic Matrix of Extremisms and Terrorism (DMET): A Continuum Approach
Towards Identifying Different Degrees of Extremisms [0.0]
We propose to extend the current binary understanding of terrorism (versus non-terrorism) with a Dynamic Matrix of Extremisms and Terrorism (DMET).
DMET considers the whole ecosystem of content and actors that can contribute to a continuum of extremism.
It organizes levels of extremisms by varying degrees of ideological engagement and the presence of violence.
arXiv Detail & Related papers (2023-12-01T04:13:48Z) - Decoding the Silent Majority: Inducing Belief Augmented Social Graph
with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics.
Our method surpasses the existing state of the art in experimental evaluations for both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z) - Understanding writing style in social media with a supervised
contrastively pre-trained transformer [57.48690310135374]
Online Social Networks serve as fertile ground for harmful behavior, ranging from hate speech to the dissemination of disinformation.
We introduce the Style Transformer for Authorship Representations (STAR), trained on a large corpus derived from public sources of 4.5 × 10^6 authored texts.
Using a support base of 8 documents of 512 tokens each, we can discern authors from sets of up to 1616 authors with at least 80% accuracy.
arXiv Detail & Related papers (2023-10-17T09:01:17Z) - Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona
Biases in Dialogue Systems [103.416202777731]
We study "persona biases", which we define to be the sensitivity of dialogue models' harmful behaviors contingent upon the personas they adopt.
We categorize persona biases into biases in harmful expression and harmful agreement, and establish a comprehensive evaluation framework to measure persona biases in five aspects: Offensiveness, Toxic Continuation, Regard, Stereotype Agreement, and Toxic Agreement.
arXiv Detail & Related papers (2023-10-08T21:03:18Z) - Down the Rabbit Hole: Detecting Online Extremism, Radicalisation, and
Politicised Hate Speech [1.0323063834827415]
This study provides the first cross-examination of textual, network, and visual approaches to detecting extremist content.
We identify consensus-driven ERH definitions and propose solutions to open challenges, in particular the lack of research in Oceania/Australasia.
We conclude with vital recommendations for ERH mining researchers and propose a roadmap with guidelines for researchers, industry, and governments to enable a safer cyberspace.
arXiv Detail & Related papers (2023-01-27T07:59:31Z) - This Must Be the Place: Predicting Engagement of Online Communities in a
Large-scale Distributed Campaign [70.69387048368849]
We study the behavior of communities with millions of active members.
We develop a hybrid model, combining textual cues, community meta-data, and structural properties.
We demonstrate the applicability of our model through Reddit's r/place, a large-scale online experiment.
arXiv Detail & Related papers (2022-01-14T08:23:16Z) - ExtremeBB: A Database for Large-Scale Research into Online Hate,
Harassment, the Manosphere and Extremism [12.647120939857635]
We introduce ExtremeBB, a textual database of over 53.5M posts made by 38.5k users on 12 extremist bulletin board forums promoting online hate, harassment, the manosphere and other forms of extremism.
It enables large-scale analyses of qualitative and quantitative historical trends going back two decades.
ExtremeBB comes with a robust ethical data-sharing regime that allows us to share data with academics worldwide.
arXiv Detail & Related papers (2021-11-08T13:15:25Z) - Towards Understanding and Mitigating Social Biases in Language Models [107.82654101403264]
Large-scale pretrained language models (LMs) can be dangerous when they manifest undesirable representational biases.
We propose steps towards mitigating social biases during text generation.
Our empirical results and human evaluation demonstrate effectiveness in mitigating bias while retaining crucial contextual information.
arXiv Detail & Related papers (2021-06-24T17:52:43Z) - #ISIS vs #ActionCountersTerrorism: A Computational Analysis of Extremist
and Counter-extremist Twitter Narratives [2.685668802278155]
This study applies computational techniques to analyse the narratives of various pro-extremist and counter-extremist Twitter accounts.
Our findings show that pro-extremist accounts often use different strategies to disseminate content when compared to counter-extremist accounts across different types of organisations.
arXiv Detail & Related papers (2020-08-26T20:46:45Z)