Modeling Aggression Propagation on Social Media
- URL: http://arxiv.org/abs/2002.10131v3
- Date: Fri, 25 Jun 2021 09:13:40 GMT
- Title: Modeling Aggression Propagation on Social Media
- Authors: Chrysoula Terizi, Despoina Chatzakou, Evaggelia Pitoura, Panayiotis Tsaparas and Nicolas Kourtellis
- Abstract summary: Cyberaggression has been studied in various contexts and online social platforms.
We study propagation of aggression on social media using opinion dynamics.
We propose ways to model how aggression may propagate from one user to another, depending on how each user is connected to other aggressive or regular users.
- Score: 4.99023186931786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cyberaggression has been studied in various contexts and online social
platforms, and modeled on different data using state-of-the-art machine and
deep learning algorithms to enable automatic detection and blocking of this
behavior. Users can be influenced to act aggressively or even bully others
because of elevated toxicity and aggression in their own (online) social
circle. In effect, this behavior can propagate from one user and neighborhood
to another, and therefore, spread in the network. Interestingly, to our
knowledge, no work has modeled the network dynamics of aggressive behavior. In
this paper, we take a first step towards this direction by studying propagation
of aggression on social media using opinion dynamics. We propose ways to model
how aggression may propagate from one user to another, depending on how each
user is connected to other aggressive or regular users. Through extensive
simulations on Twitter data, we study how aggressive behavior could propagate
in the network. We validate our models with crawled and annotated ground truth
data, reaching up to 80% AUC, and discuss the results and implications of our
work.
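The abstract frames aggression propagation in terms of opinion dynamics, where a user's behavior is influenced by their neighbors. The sketch below is not the paper's actual model; it assumes a simple DeGroot-style update, in which each user's aggression score drifts toward the average score of the users they follow, purely to illustrate the general mechanism. All names and values are hypothetical.

```python
def simulate_aggression(adj, scores, alpha=0.5, steps=20):
    """DeGroot-style update (illustrative, not the paper's model):
    each user's aggression score moves toward the mean score of the
    users they follow. `adj` maps user -> list of followed users,
    `scores` maps user -> aggression level in [0, 1]."""
    scores = dict(scores)
    for _ in range(steps):
        new = {}
        for u, nbrs in adj.items():
            if nbrs:
                nbr_mean = sum(scores[v] for v in nbrs) / len(nbrs)
                # Convex combination keeps scores inside [0, 1].
                new[u] = (1 - alpha) * scores[u] + alpha * nbr_mean
            else:
                new[u] = scores[u]
        scores = new
    return scores

# Hypothetical follower graph: regular user C follows two aggressive
# users A and B; D follows only C.
adj = {"A": ["B"], "B": ["A"], "C": ["A", "B"], "D": ["C"]}
init = {"A": 0.9, "B": 0.8, "C": 0.1, "D": 0.0}
final = simulate_aggression(adj, init)
```

Under this toy dynamic, C's aggression rises toward the level of the aggressive users C follows, and the effect then reaches D second-hand, mirroring the neighborhood-to-neighborhood spread the abstract describes.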
Related papers
- Behavioral Homophily in Social Media via Inverse Reinforcement Learning: A Reddit Case Study [3.4034704508343028]
This work introduces a novel approach for quantifying user homophily.
We first use an Inverse Reinforcement Learning framework to infer users' policies, then use these policies as a measure of behavioral homophily.
We apply our method to Reddit, conducting a case study across 5.9 million interactions over six years.
arXiv Detail & Related papers (2025-02-05T07:16:45Z)
- Non-Progressive Influence Maximization in Dynamic Social Networks [3.7618284656539878]
The influence maximization (IM) problem involves identifying a set of key individuals in a social network who can maximize the spread of influence through their network connections.
In this paper, we focus on the dynamic non-progressive IM problem, which considers the dynamic nature of real-world social networks.
We propose a novel algorithm that effectively leverages graph embedding to capture the temporal changes of dynamic networks and seamlessly integrates with deep reinforcement learning.
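As context for the IM problem this entry builds on, a minimal static baseline may help fix ideas before the dynamic, RL-based variant. The sketch below assumes the classic independent cascade model with greedy, Monte Carlo-estimated seed selection; the graph, probability, and function names are illustrative and not taken from the paper.

```python
import random

def simulate_ic(adj, seeds, p=0.2, rng=random):
    """One Monte Carlo run of the independent cascade model: each
    newly activated node gets a single chance to activate each of
    its neighbors with probability p. Returns the activated set."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def greedy_im(adj, k, trials=200, p=0.2, seed=0):
    """Greedy seed selection: repeatedly add the node with the
    largest estimated marginal gain in expected spread."""
    rng = random.Random(seed)
    seeds = set()
    for _ in range(k):
        best, best_spread = None, -1.0
        for u in adj:
            if u in seeds:
                continue
            spread = sum(len(simulate_ic(adj, seeds | {u}, p, rng))
                         for _ in range(trials)) / trials
            if spread > best_spread:
                best, best_spread = u, spread
        seeds.add(best)
    return seeds

# Hypothetical star graph: the hub reaches four followers, while the
# leaves reach no one, so greedy selection with k=1 picks the hub.
adj = {"hub": ["a", "b", "c", "d"], "a": [], "b": [], "c": [], "d": []}
seeds = greedy_im(adj, k=1)
```

The dynamic non-progressive setting studied in the paper relaxes the assumptions made here (a fixed graph and permanently active nodes), which is what motivates its graph-embedding and deep-reinforcement-learning approach.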
arXiv Detail & Related papers (2024-12-10T10:52:32Z)
- A Survey on Online User Aggression: Content Detection and Behavioral Analysis on Social Media [1.568356637037272]
The rise of social media platforms has led to an increase in cyber-aggressive behavior, including cyberbullying, online harassment, and the dissemination of offensive and hate speech.
These behaviors have been associated with significant societal consequences, ranging from online anonymity to real-world outcomes such as depression, suicidal tendencies, and, in some instances, offline violence.
This paper delves into the field of Aggression Content Detection and Behavioral Analysis of Aggressive Users, aiming to bridge the gap between disparate studies.
arXiv Detail & Related papers (2023-11-15T20:59:13Z)
- Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics.
Our method surpasses the existing state of the art in experimental evaluations for both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Learning to Influence Human Behavior with Offline Reinforcement Learning [70.7884839812069]
We focus on influence in settings where there is a need to capture human suboptimality.
Experimenting online with humans is potentially unsafe, and creating a high-fidelity simulator of the environment is often impractical.
We show that offline reinforcement learning can learn to effectively influence suboptimal humans by extending and combining elements of observed human-human behavior.
arXiv Detail & Related papers (2023-03-03T23:41:55Z)
- When Cyber Aggression Prediction Meets BERT on Social Media [1.0323063834827415]
We put forward a prediction model for cyber aggression based on cutting-edge deep learning algorithms.
We characterize cyber aggression along three dimensions: social exclusion, malicious humour, and guilt induction.
This study offers a solid theoretical model for cyber aggression prediction.
arXiv Detail & Related papers (2023-01-05T02:26:45Z)
- The world seems different in a social context: a neural network analysis of human experimental data [57.729312306803955]
We show that it is possible to replicate human behavioral data in both individual and social task settings by modifying the precision of prior and sensory signals.
An analysis of the neural activation traces of the trained networks provides evidence that information is coded in fundamentally different ways in the network in the individual and in the social conditions.
arXiv Detail & Related papers (2022-03-03T17:19:12Z)
- Learning to Learn Transferable Attack [77.67399621530052]
Transfer adversarial attack is a non-trivial black-box adversarial attack that aims to craft adversarial perturbations on the surrogate model and then apply such perturbations to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on the widely-used dataset demonstrate the effectiveness of our attack method with a 12.85% higher success rate of transfer attack compared with the state-of-the-art methods.
arXiv Detail & Related papers (2021-12-10T07:24:21Z)
- Hater-O-Genius Aggression Classification using Capsule Networks [6.318682674371969]
We propose an end-to-end ensemble-based architecture to automatically identify and classify aggressive tweets.
Tweets are classified into three categories - Covertly Aggressive, Overtly Aggressive, and Non-Aggressive.
Our best model, an ensemble of Capsule Networks, achieves a 65.2% F1 score on the Facebook test set, a gain of 0.95% over the TRAC-2018 winners.
arXiv Detail & Related papers (2021-05-24T11:53:58Z)
- I Know Where You Are Coming From: On the Impact of Social Media Sources on AI Model Performance [79.05613148641018]
We study the performance of different machine learning models when trained on multi-modal data from different social networks.
Our initial experimental results reveal that the choice of social network impacts performance.
arXiv Detail & Related papers (2020-02-05T11:10:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.