Jointly modelling the evolution of community structure and language in online extremist groups
- URL: http://arxiv.org/abs/2409.19243v1
- Date: Sat, 28 Sep 2024 05:19:51 GMT
- Title: Jointly modelling the evolution of community structure and language in online extremist groups
- Authors: Christine de Kock
- Abstract summary: Group interactions take place within a particular socio-temporal context, which should be taken into account when modelling communities.
We propose a method for jointly modelling community structure and language over time, and apply it in the context of extremist anti-women online groups.
- Score: 5.384630221560811
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Group interactions take place within a particular socio-temporal context, which should be taken into account when modelling communities. We propose a method for jointly modelling community structure and language over time, and apply it in the context of extremist anti-women online groups (collectively known as the manosphere). Our model derives temporally grounded embeddings for words and users, which evolve over the training window. We show that this approach outperforms prior models which lacked one of these components (i.e. not incorporating social structure, or using static word embeddings). Using these embeddings, we investigate the evolution of users and words within these communities in three ways: (i) we model a user as a sequence of embeddings and forecast their affinity groups beyond the training window, (ii) we illustrate how word evolution is useful in the context of temporal events, and (iii) we characterise the propensity for violent language within subgroups of the manosphere.
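For intuition, here is a minimal sketch of what temporally grounded user and word embeddings can look like: per-time-slice embedding tables trained with a skip-gram-style objective that ties a user to the words they use in that slice, plus a smoothness penalty linking consecutive slices. This is an illustration under our own assumptions, not the paper's architecture; all names and hyperparameters are invented.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicEmbeddings(nn.Module):
    def __init__(self, n_users, n_words, n_slices, dim=64):
        super().__init__()
        # One embedding table per time slice, for users and for words.
        self.users = nn.Parameter(0.01 * torch.randn(n_slices, n_users, dim))
        self.words = nn.Parameter(0.01 * torch.randn(n_slices, n_words, dim))

    def forward(self, t, user_ids, word_ids, neg_word_ids):
        # Skip-gram-with-negative-sampling style loss tying a user in
        # slice t to the words they used in that slice.
        u = self.users[t, user_ids]                                    # (batch, dim)
        pos = F.logsigmoid((u * self.words[t, word_ids]).sum(-1))      # (batch,)
        neg = F.logsigmoid(
            -(self.words[t, neg_word_ids] @ u.unsqueeze(-1)).squeeze(-1)
        ).sum(-1)                                                      # (batch,)
        return -(pos + neg).mean()

    def smoothness(self, weight=0.1):
        # Penalise large jumps between consecutive slices so that
        # embeddings drift over time rather than jump.
        du = (self.users[1:] - self.users[:-1]).pow(2).mean()
        dw = (self.words[1:] - self.words[:-1]).pow(2).mean()
        return weight * (du + dw)
```

Modelling a user as their sequence of per-slice vectors is then what enables forecasting affinity groups beyond the training window.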
Related papers
- LISTN: Lexicon induction with socio-temporal nuance [5.384630221560811]
This paper proposes a novel method for inducing in-group lexicons that incorporates their socio-temporal context.
Using dynamic word and user embeddings trained on conversations from online anti-women communities, our approach outperforms prior methods for lexicon induction.
We present novel insights into in-group language that illustrate the utility of this approach.
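A minimal sketch of one way dynamic embeddings support lexicon induction (our simplification, not LISTN's actual scoring): rank candidate words by their mean cosine similarity to a seed in-group lexicon, averaged over time slices. The embedding shape matches the sketch above.

```python
import numpy as np

def induce_lexicon(word_vecs, seed_ids, k=50):
    # word_vecs: (n_slices, n_words, dim); seed_ids: indices of seed terms.
    normed = word_vecs / np.linalg.norm(word_vecs, axis=-1, keepdims=True)
    seed_centroid = normed[:, seed_ids].mean(axis=1, keepdims=True)  # (T, 1, dim)
    scores = (normed * seed_centroid).sum(-1).mean(axis=0)           # (n_words,)
    scores[seed_ids] = -np.inf        # exclude the seeds themselves
    return np.argsort(-scores)[:k]   # top-k candidate in-group terms
```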
arXiv Detail & Related papers (2024-09-28T06:20:20Z)
- Language Evolution with Deep Learning [49.879239655532324]
Computational modeling plays an essential role in the study of language emergence.
It aims to simulate the conditions and learning processes that could trigger the emergence of a structured language.
This chapter explores another class of computational models that have recently revolutionized the field of machine learning: deep learning models.
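As a toy illustration of the emergent-communication setting the chapter surveys, consider a tabular Lewis signaling game in which a sender and receiver are reinforced on successful communication; the deep-learning models the chapter covers replace these tables with neural networks. This example is ours, not the chapter's.

```python
import numpy as np

rng = np.random.default_rng(0)
n_objects, n_symbols = 5, 5
sender = np.ones((n_objects, n_symbols))    # symbol preferences per object
receiver = np.ones((n_symbols, n_objects))  # object guesses per symbol

for _ in range(20000):
    obj = rng.integers(n_objects)
    sym = rng.choice(n_symbols, p=sender[obj] / sender[obj].sum())
    guess = rng.choice(n_objects, p=receiver[sym] / receiver[sym].sum())
    if guess == obj:                        # reinforce on success
        sender[obj, sym] += 1
        receiver[sym, obj] += 1

print(sender.argmax(axis=1))  # an emergent, often one-to-one, code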
arXiv Detail & Related papers (2024-03-18T16:52:54Z)
- Foundational Models Defining a New Era in Vision: A Survey and Outlook [151.49434496615427]
Vision systems that see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
Models that learn to bridge vision with other modalities such as language, coupled with large-scale training data, facilitate contextual reasoning, generalization, and prompt-based capabilities at test time.
The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, holding an interactive dialogue by asking questions about an image or video scene, or steering a robot's behavior through language instructions.
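As one concrete example of prompt-driven behaviour without retraining, here is box-prompted segmentation with Segment Anything (SAM). SAM is our pick of model, not something the survey is limited to, and the checkpoint and image paths below are placeholders.

```python
import numpy as np
from PIL import Image
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # placeholder path
predictor = SamPredictor(sam)

image = np.array(Image.open("scene.jpg").convert("RGB"))  # placeholder image
predictor.set_image(image)

# A bounding-box prompt selects which object to segment; no retraining needed.
masks, scores, _ = predictor.predict(
    box=np.array([100, 150, 400, 500]),  # x0, y0, x1, y1 in pixels
    multimask_output=False,
)
```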
arXiv Detail & Related papers (2023-07-25T17:59:18Z)
- Variational Cross-Graph Reasoning and Adaptive Structured Semantics Learning for Compositional Temporal Grounding [143.5927158318524]
Temporal grounding is the task of locating a specific segment from an untrimmed video according to a query sentence.
We introduce a new Compositional Temporal Grounding task and construct two new dataset splits.
We argue that the structured semantics inherent in videos and language is the crucial factor in achieving compositional generalization.
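Systems for this task are conventionally scored by Recall@1 at a temporal IoU threshold; a minimal sketch of that metric follows (the paper's new dataset splits are orthogonal to it).

```python
def temporal_iou(pred, gold):
    # pred, gold: (start, end) of a segment in seconds.
    inter = max(0.0, min(pred[1], gold[1]) - max(pred[0], gold[0]))
    union = max(pred[1], gold[1]) - min(pred[0], gold[0])
    return inter / union if union > 0 else 0.0

def recall_at_iou(preds, golds, threshold=0.5):
    hits = sum(temporal_iou(p, g) >= threshold for p, g in zip(preds, golds))
    return hits / len(golds)

print(recall_at_iou([(2.0, 7.5)], [(3.0, 8.0)]))  # IoU = 4.5/6.0 = 0.75 -> 1.0
```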
arXiv Detail & Related papers (2023-01-22T08:02:23Z)
- Compositional Temporal Grounding with Structured Variational Cross-Graph Correspondence Learning [92.07643510310766]
Temporal grounding in videos aims to localize one target video segment that semantically corresponds to a given query sentence.
We introduce a new Compositional Temporal Grounding task and construct two new dataset splits.
We empirically find that existing methods fail to generalize to queries with novel combinations of seen words.
We propose a variational cross-graph reasoning framework that explicitly decomposes video and language into multiple structured hierarchies.
arXiv Detail & Related papers (2022-03-24T12:55:23Z)
- Group-Node Attention for Community Evolution Prediction [9.777369108179501]
We present a novel graph neural network for predicting community evolution events from structural and temporal information.
In a comparative evaluation, our model outperforms standard baseline methods.
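A minimal sketch of the group-node attention idea as we read it (layer sizes and the event set are our assumptions): a learned group query attends over member-node embeddings to form a group representation, which a classifier maps to evolution events such as growth, split, or dissolution.

```python
import torch
import torch.nn as nn

class GroupAttentionPredictor(nn.Module):
    def __init__(self, dim=64, n_events=4):
        super().__init__()
        self.query = nn.Parameter(torch.randn(dim))   # learned group query
        self.classify = nn.Linear(dim, n_events)

    def forward(self, member_embs):                   # (n_members, dim)
        attn = torch.softmax(member_embs @ self.query, dim=0)  # (n_members,)
        group_repr = attn @ member_embs               # (dim,) pooled group vector
        return self.classify(group_repr)              # evolution-event logits

logits = GroupAttentionPredictor()(torch.randn(10, 64))
```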
arXiv Detail & Related papers (2021-07-09T16:16:10Z)
- Creolizing the Web [2.393911349115195]
We present a method for detecting evolutionary patterns in a sociological model of language evolution.
We develop a minimalistic model that provides a rigorous base for any generalized evolutionary model for language based on communication between individuals.
We present empirical results and their interpretation on a real-world dataset from Reddit, identifying communities and echo chambers of opinion.
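A toy model in the spirit of that minimalistic base (our simplification, not the paper's exact dynamics): agents on a random graph repeatedly copy a neighbour's language variant, and local consensus regions, i.e. proto-communities, emerge.

```python
import random
import networkx as nx

random.seed(0)
g = nx.erdos_renyi_graph(100, 0.05, seed=0)
variant = {v: random.randrange(5) for v in g}  # 5 initial language variants

for _ in range(20000):
    speaker = random.choice(list(g))
    neighbours = list(g[speaker])
    if neighbours:                              # a listener copies the speaker
        variant[random.choice(neighbours)] = variant[speaker]

print(sorted({v for v in variant.values()}))    # surviving variants
```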
arXiv Detail & Related papers (2021-02-24T16:08:45Z)
- Characterizing English Variation across Social Media Communities with BERT [9.98785450861229]
We analyze two months of English comments in 474 Reddit communities.
The specificity of different sense clusters to a community, combined with the specificity of a community's unique word types, is used to identify cases where a social group's language deviates from the norm.
We find that communities with highly distinctive language are medium-sized, and their loyal and highly engaged users interact in dense networks.
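A sketch of the specificity computation as we understand it: given counts of how often each BERT-derived sense cluster occurs in each community, score cluster-community pairs by pointwise mutual information. The clustering step itself (e.g. k-means over contextual token embeddings) is omitted, and the PMI formulation is our assumption.

```python
import numpy as np

def cluster_specificity(counts):
    # counts[c, k]: occurrences of sense cluster k in community c.
    p_joint = counts / counts.sum()
    p_comm = p_joint.sum(axis=1, keepdims=True)
    p_clus = p_joint.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_joint / (p_comm * p_clus))
    return np.where(counts > 0, pmi, -np.inf)  # high = distinctive usage

counts = np.array([[40, 5, 5], [5, 30, 15], [5, 15, 30]], dtype=float)
print(cluster_specificity(counts).round(2))
```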
arXiv Detail & Related papers (2021-02-12T23:50:57Z)
- Grounded Compositional Outputs for Adaptive Language Modeling [59.02706635250856]
A language model's vocabulary, typically selected before training and permanently fixed thereafter, affects its size.
We propose a fully compositional output embedding layer for language models.
To our knowledge, the result is the first word-level language model with a size that does not depend on the training vocabulary.
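A minimal sketch of a compositional output layer (the paper's composition function differs; hashed character trigrams here are illustrative): a word's output embedding is assembled from its characters, so the softmax layer's parameters do not grow with the vocabulary and unseen words can still be scored.

```python
import torch
import torch.nn as nn

class CompositionalOutput(nn.Module):
    def __init__(self, n_trigram_buckets=10000, dim=64):
        super().__init__()
        self.trigram_emb = nn.Embedding(n_trigram_buckets, dim)
        self.buckets = n_trigram_buckets

    def word_vector(self, word):
        # Compose an output embedding from hashed character trigrams.
        padded = f"#{word}#"
        ids = [hash(padded[i:i + 3]) % self.buckets for i in range(len(padded) - 2)]
        return self.trigram_emb(torch.tensor(ids)).mean(dim=0)  # (dim,)

    def logits(self, hidden, candidate_words):  # hidden: (dim,)
        vecs = torch.stack([self.word_vector(w) for w in candidate_words])
        return vecs @ hidden                    # scores any word, even unseen

out = CompositionalOutput()
print(out.logits(torch.randn(64), ["group", "community", "zeitgeisty"]))
```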
arXiv Detail & Related papers (2020-09-24T07:21:14Z)
- GRADE: Graph Dynamic Embedding [76.85156209917932]
GRADE is a probabilistic model that learns to generate evolving node and community representations by imposing a random-walk prior on their trajectories.
Our model also learns node community membership which is updated between time steps via a transition matrix.
Experiments demonstrate GRADE outperforms baselines in dynamic link prediction, shows favourable performance on dynamic community detection, and identifies coherent and interpretable evolving communities.
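A sketch of the membership-update idea (GRADE's full generative model, with its random-walk prior and learned embeddings, is not reproduced here): each node holds a distribution over communities, advanced one time step by a row-stochastic transition matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_communities = 6, 3
membership = rng.dirichlet(np.ones(n_communities), size=n_nodes)        # (N, K)
transition = rng.dirichlet(np.ones(n_communities), size=n_communities)  # (K, K), rows sum to 1

membership_next = membership @ transition  # evolve memberships one step
assert np.allclose(membership_next.sum(axis=1), 1.0)
print(membership_next.round(3))
```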
arXiv Detail & Related papers (2020-07-16T01:17:24Z)