Empowering NGOs in Countering Online Hate Messages
- URL: http://arxiv.org/abs/2107.02472v1
- Date: Tue, 6 Jul 2021 08:36:24 GMT
- Title: Empowering NGOs in Countering Online Hate Messages
- Authors: Yi-Ling Chung, Serra Sinem Tekiroglu, Sara Tonelli, Marco Guerini
- Abstract summary: We introduce a novel ICT platform that NGO operators can use to monitor and analyze social media data, along with a counter-narrative suggestion tool.
We test the platform with more than one hundred NGO operators in three countries through qualitative and quantitative evaluation.
Results show that NGOs favor the platform solution with the suggestion tool, and that the time required to produce counter-narratives significantly decreases.
- Score: 14.767716319266997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Studies on online hate speech have mostly focused on the automated detection
of harmful messages. Little attention has been devoted so far to the
development of effective strategies to fight hate speech, in particular through
the creation of counter-messages. While existing manual scrutiny and
intervention strategies are time-consuming and not scalable, advances in
natural language processing have the potential to provide a systematic approach
to hatred management. In this paper, we introduce a novel ICT platform that NGO
operators can use to monitor and analyze social media data, along with a
counter-narrative suggestion tool. Our platform aims at increasing the
efficiency and effectiveness of operators' activities against Islamophobia. We
test the platform with more than one hundred NGO operators in three countries
through qualitative and quantitative evaluation. Results show that NGOs favor
the platform solution with the suggestion tool, and that the time required to
produce counter-narratives significantly decreases.
Related papers
- IOHunter: Graph Foundation Model to Uncover Online Information Operations [8.532129691916348]
We introduce a methodology designed to identify users orchestrating information operations, a.k.a. IO drivers, across various influence campaigns.
Our framework, named IOHunter, leverages the combined strengths of Language Models and Graph Neural Networks to improve generalization in supervised, scarcely-supervised, and cross-IO contexts.
This research marks a step toward developing Graph Foundation Models specifically tailored for the task of IO detection on social media platforms.
arXiv Detail & Related papers (2024-12-19T09:14:24Z)
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- AggregHate: An Efficient Aggregative Approach for the Detection of Hatemongers on Social Platforms [4.649475179575046]
We consider a multimodal aggregative approach for the detection of hate-mongers, taking into account the potentially hateful texts, user activity, and the user network.
Our method can be used to improve the classification of coded messages, dog-whistling, and racial gaslighting, as well as to inform intervention measures.
arXiv Detail & Related papers (2024-09-22T14:29:49Z)
- Compromising Embodied Agents with Contextual Backdoor Attacks [69.71630408822767]
Large language models (LLMs) have transformed the development of embodied intelligence.
This paper uncovers a significant backdoor security threat within this process.
By poisoning just a few contextual demonstrations, attackers can covertly compromise the contextual environment of a black-box LLM.
arXiv Detail & Related papers (2024-08-06T01:20:12Z)
- Demarked: A Strategy for Enhanced Abusive Speech Moderation through Counterspeech, Detoxification, and Message Management [71.99446449877038]
We propose a more comprehensive approach called Demarcation, which scores abusive speech along four aspects: (i) severity scale; (ii) presence of a target; (iii) context scale; (iv) legal scale.
Our work aims to inform future strategies for effectively addressing abusive speech online.
arXiv Detail & Related papers (2024-06-27T21:45:33Z)
- Understanding Counterspeech for Online Harm Mitigation [12.104301755723542]
Counterspeech offers direct rebuttals to hateful speech by challenging perpetrators of hate and showing support to targets of abuse.
It provides a promising alternative to more contentious measures, such as content moderation and deplatforming.
This paper systematically reviews counterspeech research in the social sciences and compares methodologies and findings with computer science efforts in automatic counterspeech generation.
arXiv Detail & Related papers (2023-07-01T20:54:01Z)
- Tackling Hate Speech in Low-resource Languages with Context Experts [7.5217405965075095]
This paper presents findings from our remote study on the automatic detection of hate speech online in Myanmar.
We argue that effectively addressing this problem will require community-based approaches that combine the knowledge of context experts with machine learning tools.
arXiv Detail & Related papers (2023-03-29T16:24:22Z)
- Deep Learning for Hate Speech Detection: A Comparative Study [54.42226495344908]
We present here a large-scale empirical comparison of deep and shallow hate-speech detection methods.
Our goal is to illuminate progress in the area, and identify strengths and weaknesses in the current state-of-the-art.
In doing so we aim to provide guidance as to the use of hate-speech detection in practice, quantify the state-of-the-art, and identify future research directions.
arXiv Detail & Related papers (2022-02-19T03:48:20Z)
- Addressing the Challenges of Cross-Lingual Hate Speech Detection [115.1352779982269]
In this paper we focus on cross-lingual transfer learning to support hate speech detection in low-resource languages.
We leverage cross-lingual word embeddings to train our neural network systems on the source language and apply them to the target language.
We investigate the issue of label imbalance of hate speech datasets, since the high ratio of non-hate examples compared to hate examples often leads to low model performance.
arXiv Detail & Related papers (2022-01-15T20:48:14Z)
- Towards A Multi-agent System for Online Hate Speech Detection [11.843799418046666]
This paper envisions a multi-agent system for detecting the presence of hate speech in online social media platforms such as Twitter and Facebook.
We introduce a novel framework employing deep learning techniques to coordinate the channels of textual and image processing.
arXiv Detail & Related papers (2021-05-03T19:06:42Z)
- Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create a NeuralNews dataset composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z)
- A Survey on Computational Propaganda Detection [31.42480765785039]
Propaganda campaigns aim at influencing people's mindset with the purpose of advancing a specific agenda.
They exploit the anonymity of the Internet, the micro-profiling ability of social networks, and the ease of automatically creating and managing coordinated networks of accounts.
arXiv Detail & Related papers (2020-07-15T22:25:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.