A Lexicon for Studying Radicalization in Incel Communities
- URL: http://arxiv.org/abs/2401.07928v1
- Date: Mon, 15 Jan 2024 19:39:29 GMT
- Title: A Lexicon for Studying Radicalization in Incel Communities
- Authors: Emily Klein and Jennifer Golbeck
- Abstract summary: Incels are an extremist online community of men who believe in an ideology rooted in misogyny, racism, the glorification of violence, and dehumanization.
This paper presents a lexicon with terms and definitions for common incel root words, prefixes, and affixes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Incels are an extremist online community of men who believe in an ideology
rooted in misogyny, racism, the glorification of violence, and dehumanization.
In their online forums, they use an extensive, evolving cryptolect - a set of
ingroup terms that have meaning within the group, reflect the ideology,
demonstrate membership in the community, and are difficult for outsiders to
understand. This paper presents a lexicon with terms and definitions for common
incel root words, prefixes, and affixes. The lexicon is text-based for use in
automated analysis and is derived via a Qualitative Content Analysis of the
most frequent incel words, their structure, and their meaning on five of the
most active incel communities from 2016 to 2023. This lexicon will support
future work examining radicalization and deradicalization/disengagement within
the community.
Related papers
- LISTN: Lexicon induction with socio-temporal nuance [5.384630221560811]
This paper proposes a novel method for inducing in-group lexicons which incorporates its socio-temporal context.
Using dynamic word and user embeddings trained on conversations from online anti-women communities, our approach outperforms prior methods for lexicon induction.
We present novel insights on in-group language which illustrate the utility of this approach.
arXiv Detail & Related papers (2024-09-28T06:20:20Z)
- Identity Construction in a Misogynist Incels Forum [5.260305201345233]
We examine how identity groups are discussed on incels-dot-is, the largest black-pilled incels forum.
We find that this community produces a wide range of novel identity terms and, while terms for women are most common, mentions of other minoritized identities are increasing.
An analysis of the associations made with identity groups suggests an essentialist ideology where physical appearance, as well as gender and racial hierarchies, determine human value.
arXiv Detail & Related papers (2023-06-27T18:56:28Z)
- "I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation [69.25368160338043]
Transgender and non-binary (TGNB) individuals disproportionately experience discrimination and exclusion from daily life.
We assess how the social reality surrounding experienced marginalization of TGNB persons contributes to and persists within Open Language Generation.
We introduce TANGO, a dataset of template-based real-world text curated from a TGNB-oriented community.
arXiv Detail & Related papers (2023-05-17T04:21:45Z)
- Non-Polar Opposites: Analyzing the Relationship Between Echo Chambers and Hostile Intergroup Interactions on Reddit [66.09950457847242]
We study the activity of 5.97M Reddit users and 421M comments posted over 13 years.
We create a typology of relationships between political communities based on whether their users are toxic to each other.
arXiv Detail & Related papers (2022-11-25T22:17:07Z)
- Beyond Plain Toxic: Detection of Inappropriate Statements on Flammable Topics for the Russian Language [76.58220021791955]
We present two text collections labelled according to a binary notion of inappropriateness and a multinomial notion of sensitive topics.
To objectivise the notion of inappropriateness, we define it in a data-driven way through crowdsourcing.
arXiv Detail & Related papers (2022-03-04T15:59:06Z)
- Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection [75.54119209776894]
We investigate the effect of annotator identities (who) and beliefs (why) on toxic language annotations.
We consider posts with three characteristics: anti-Black language, African American English dialect, and vulgarity.
Our results show strong associations between annotator identity and beliefs and their ratings of toxicity.
arXiv Detail & Related papers (2021-11-15T18:58:20Z)
- The incel lexicon: Deciphering the emergent cryptolect of a global misogynistic community [0.0]
Incel is an online community of men who bear antipathy towards themselves, women, and society-at-large for their perceived inability to find and maintain sexual relationships.
By exploring incel language use on Reddit, we contextualize the incel community's online expressions of misogyny and real-world acts of violence perpetrated against women.
arXiv Detail & Related papers (2021-05-25T15:20:13Z)
- The structure of online social networks modulates the rate of lexical change [7.4037154707453965]
We conduct a large-scale analysis of over 80k neologisms in 4420 online communities across a decade.
Using Poisson regression and survival analysis, our study demonstrates that the community's network structure plays a significant role in lexical change.
arXiv Detail & Related papers (2021-04-11T13:06:28Z)
- Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media during the COVID-19 Crisis [51.39895377836919]
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z)
- A Framework for the Computational Linguistic Analysis of Dehumanization [52.735780962665814]
We analyze discussions of LGBTQ people in the New York Times from 1986 to 2015.
We find increasingly humanizing descriptions of LGBTQ people over time.
The ability to analyze dehumanizing language at a large scale has implications for automatically detecting and understanding media bias as well as abusive language online.
arXiv Detail & Related papers (2020-03-06T03:02:12Z)
- "How over is it?" Understanding the Incel Community on YouTube [13.152169704668568]
Involuntary Celibates (Incels) are a community that has often been linked to sharing and publishing hateful and misogynistic content.
We collect videos shared on Incel communities within Reddit and perform a data-driven characterization of the content posted on YouTube.
We find that the Incel community on YouTube is getting traction and that, during the last decade, the number of Incel-related videos and comments rose substantially.
arXiv Detail & Related papers (2020-01-22T21:47:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.