AI Enabled User-Specific Cyberbullying Severity Detection with Explainability
- URL: http://arxiv.org/abs/2503.10650v2
- Date: Mon, 14 Apr 2025 19:01:43 GMT
- Title: AI Enabled User-Specific Cyberbullying Severity Detection with Explainability
- Authors: Tabia Tanzin Prama, Jannatul Ferdaws Amrin, Md. Mushfique Anwar, Iqbal H. Sarker,
- Abstract summary: We propose an AI model integrating user-specific attributes, including psychological factors (self-esteem, anxiety, depression), online behavior (internet usage, disciplinary history), and demographic attributes (race, gender, ethnicity), along with social media comments. Our LSTM model is trained on 146 features, incorporating emotional, topical, and word2vec representations of social media comments as well as user-level attributes. Our findings reveal that, beyond hate comments, victims belonging to specific racial and gender groups are more frequently targeted and exhibit higher incidences of depression, disciplinary issues, and low self-esteem.
- Score: 1.949927790632678
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rise of social media has significantly increased the prevalence of cyberbullying (CB), posing serious risks to both mental and physical well-being. Effective detection systems are essential for mitigating its impact. While several machine learning (ML) models have been developed, few incorporate victims' psychological, demographic, and behavioral factors alongside bullying comments to assess severity. In this study, we propose an AI model integrating user-specific attributes, including psychological factors (self-esteem, anxiety, depression), online behavior (internet usage, disciplinary history), and demographic attributes (race, gender, ethnicity), along with social media comments. Additionally, we introduce a re-labeling technique that categorizes social media comments into three severity levels: Not Bullying, Mild Bullying, and Severe Bullying, considering user-specific factors. Our LSTM model is trained on 146 features, incorporating emotional, topical, and word2vec representations of social media comments as well as user-level attributes, and it outperforms existing baseline models, achieving the highest accuracy of 98% and an F1-score of 0.97. To identify key factors influencing the severity of cyberbullying, we employ explainable AI techniques (SHAP and LIME) to interpret the model's decision-making process. Our findings reveal that, beyond hate comments, victims belonging to specific racial and gender groups are more frequently targeted and exhibit higher incidences of depression, disciplinary issues, and low self-esteem. Additionally, individuals with a prior history of bullying are at a greater risk of becoming victims of cyberbullying.
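The paper does not include code in this listing; as a rough, assumption-laden sketch of the pipeline the abstract describes (a feature vector combining comment representations with user-level attributes, an LSTM encoder, and a three-class severity head), the split of the 146 features into a 128-dimensional comment vector plus 18 user attributes is purely illustrative, and all function and class names below are hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def build_features(comment_vec, user_attrs):
    """Concatenate a comment representation (e.g. word2vec, emotional, and
    topical features) with user-level attributes into one feature vector."""
    return np.concatenate([comment_vec, user_attrs])

class TinyLSTM:
    """Minimal single-layer LSTM encoder (forward pass only)."""

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Stacked weight matrix for the input, forget, cell, and output gates.
        self.W = rng.normal(0.0, 0.1, (4 * hidden_dim, input_dim + hidden_dim))
        self.b = np.zeros(4 * hidden_dim)
        self.hidden_dim = hidden_dim

    def forward(self, xs):
        h = np.zeros(self.hidden_dim)
        c = np.zeros(self.hidden_dim)
        for x in xs:  # one step per element of the input sequence
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, g, o = np.split(z, 4)
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        return h

def severity_probs(h, W_out, b_out):
    """Softmax head over the three severity classes:
    Not Bullying, Mild Bullying, Severe Bullying."""
    z = W_out @ h + b_out
    e = np.exp(z - z.max())
    return e / e.sum()
```

In practice the authors would train such a model end-to-end (and apply SHAP/LIME to the trained network); this sketch only illustrates how user attributes can be fused with comment features before the recurrent encoder.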
Related papers
- CausalDiff: Causality-Inspired Disentanglement via Diffusion Model for Adversarial Defense [61.78357530675446]
Humans are difficult to deceive with subtle manipulations, since we make judgments based only on essential factors.
Inspired by this observation, we attempt to model label generation with essential label-causative factors and incorporate label-non-causative factors to assist data generation.
For an adversarial example, we aim to discriminate perturbations as non-causative factors and make predictions only based on the label-causative factors.
arXiv Detail & Related papers (2024-10-30T15:06:44Z) - Detecting LGBTQ+ Instances of Cyberbullying [3.5723815685584013]
Cyberbullying poses a significant threat to adolescents globally.
The LGBTQ+ community is particularly at risk, as researchers have uncovered a strong correlation between identifying as LGBTQ+ and suffering from greater online harassment.
It is critical to develop machine learning models that can accurately discern cyberbullying incidents as they happen to LGBTQ+ members.
arXiv Detail & Related papers (2024-09-18T18:49:55Z) - Explain Thyself Bully: Sentiment Aided Cyberbullying Detection with Explanation [52.3781496277104]
Cyberbullying has become a big issue with the popularity of different social media networks and online communication apps.
Recent laws like "right to explanations" of General Data Protection Regulation have spurred research in developing interpretable models.
We develop the first interpretable multi-task model, called mExCB, for automatic cyberbullying detection in code-mixed languages.
arXiv Detail & Related papers (2024-01-17T07:36:22Z) - Deep Learning Based Cyberbullying Detection in Bangla Language [0.0]
This study demonstrates a deep learning strategy for identifying cyberbullying in Bengali.
A two-layer bidirectional long short-term memory (Bi-LSTM) model has been built to identify cyberbullying.
arXiv Detail & Related papers (2024-01-07T04:58:59Z) - Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z) - Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems [103.416202777731]
We study "persona biases", which we define to be the sensitivity of dialogue models' harmful behaviors contingent upon the personas they adopt.
We categorize persona biases into biases in harmful expression and harmful agreement, and establish a comprehensive evaluation framework to measure persona biases in five aspects: Offensiveness, Toxic Continuation, Regard, Stereotype Agreement, and Toxic Agreement.
arXiv Detail & Related papers (2023-10-08T21:03:18Z) - Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z) - Analysing Cyberbullying using Natural Language Processing by Understanding Jargon in Social Media [4.932130498861987]
In our work, we explore binary classification by using a combination of datasets from various social media platforms.
We experiment with multiple models, such as Bi-LSTM and GloVe, as well as state-of-the-art models like BERT, and apply a unique preprocessing technique by introducing a slang-abusive corpus.
arXiv Detail & Related papers (2021-04-23T04:20:19Z) - Enhancing the Identification of Cyberbullying through Participant Roles [1.399948157377307]
This paper proposes a novel approach to enhancing cyberbullying detection through role modeling.
We utilise a dataset from ASKfm to perform multi-class classification to detect participant roles.
arXiv Detail & Related papers (2020-10-13T19:13:07Z) - Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z) - Aggressive, Repetitive, Intentional, Visible, and Imbalanced: Refining Representations for Cyberbullying Classification [4.945634077636197]
We study the nuanced problem of cyberbullying using five explicit factors to represent its social and linguistic aspects.
These results demonstrate the importance of representing and modeling cyberbullying as a social phenomenon.
arXiv Detail & Related papers (2020-04-04T00:35:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.