Qualitative Analysis of a Graph Transformer Approach to Addressing Hate
Speech: Adapting to Dynamically Changing Content
- URL: http://arxiv.org/abs/2301.10871v3
- Date: Mon, 1 May 2023 02:53:18 GMT
- Title: Qualitative Analysis of a Graph Transformer Approach to Addressing Hate
Speech: Adapting to Dynamically Changing Content
- Authors: Liam Hebert, Hong Yi Chen, Robin Cohen, Lukasz Golab
- Abstract summary: We offer a detailed qualitative analysis of this solution for hate speech detection in social networks.
A key insight is that the focus on reasoning about the concept of context positions us well to support multi-modal analysis of online posts.
We conclude with a reflection on how the problem we are addressing relates especially well to the theme of dynamic change.
- Score: 8.393770595114763
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Our work advances an approach for predicting hate speech in social media,
drawing out the critical need to consider the discussions that follow a post to
successfully detect when hateful discourse may arise. Using graph transformer
networks, coupled with modelling attention and BERT-level natural language
processing, our approach can capture context and anticipate upcoming
anti-social behaviour. In this paper, we offer a detailed qualitative analysis
of this solution for hate speech detection in social networks, leading to
insights into where the method has the most impressive outcomes in comparison
with competitors and identifying scenarios where there are challenges to
achieving ideal performance. Included is an exploration of the kinds of posts
that permeate social media today, including the use of hateful images. This
suggests avenues for extending our model to be more comprehensive. A key
insight is that the focus on reasoning about the concept of context positions
us well to support multi-modal analysis of online posts. We conclude
with a reflection on how the problem we are addressing relates especially well
to the theme of dynamic change, a critical concern for all AI solutions for
social impact. We also comment briefly on how mental health well-being can be
advanced with our work, through curated content attuned to the extent of hate
in posts.
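To make the described approach concrete, below is a minimal sketch, assuming per-comment BERT embeddings and standard PyTorch attention. It is an illustration only, not the authors' released system: a vanilla TransformerEncoder stands in for their graph transformer over the reply structure, and all class and variable names are hypothetical.

```python
# Minimal sketch (not the authors' code): embed each comment in a discussion with
# BERT, then let an attention layer reason over the whole thread before
# classifying the discussion as likely to turn hateful or not.
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed_comments(comments):
    """Return one BERT [CLS] vector per comment in the discussion."""
    batch = tokenizer(comments, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**batch)
    return out.last_hidden_state[:, 0, :]                 # (num_comments, 768)

class DiscussionClassifier(nn.Module):
    """Attention over all comments in a thread, then a binary 'hateful' head.
    The published system uses a graph transformer over the reply structure;
    a plain TransformerEncoder is used here as a simplified stand-in."""
    def __init__(self, dim=768, heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, 2)                      # hateful vs. not hateful

    def forward(self, node_embeddings):                    # (num_comments, dim)
        ctx = self.encoder(node_embeddings.unsqueeze(0))   # attend across the thread
        return self.head(ctx.mean(dim=1))                  # pool the discussion, classify

thread = ["Original post text", "First reply", "A later, possibly hateful reply"]
logits = DiscussionClassifier()(embed_comments(thread))
print(logits.shape)                                        # torch.Size([1, 2])
```

In the published system, attention additionally respects the reply graph and community context rather than treating the thread as a flat sequence of comments.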
Related papers
- A Survey of Stance Detection on Social Media: New Directions and Perspectives [50.27382951812502]
Stance detection has emerged as a crucial subfield within affective computing.
Recent years have seen a surge of research interest in developing effective stance detection methods.
This paper provides a comprehensive survey of stance detection techniques on social media.
arXiv Detail & Related papers (2024-09-24T03:06:25Z)
- Modes of Analyzing Disinformation Narratives With AI/ML/Text Mining to Assist in Mitigating the Weaponization of Social Media [0.8287206589886879]
This paper highlights the developing need for quantitative modes for capturing and monitoring malicious communication in social media.
There has been a deliberate "weaponization" of messaging through the use of social networks, including by politically oriented entities, both state-sponsored and privately run.
Despite attempts to introduce moderation on major platforms like Facebook and X/Twitter, there are now established alternative social networks that offer completely unmoderated spaces.
arXiv Detail & Related papers (2024-05-25T00:02:14Z)
- SoMeLVLM: A Large Vision Language Model for Social Media Processing [78.47310657638567]
We introduce a Large Vision Language Model for Social Media Processing (SoMeLVLM).
SoMeLVLM is a cognitive framework equipped with five key capabilities: knowledge & comprehension, application, analysis, evaluation, and creation.
Our experiments demonstrate that SoMeLVLM achieves state-of-the-art performance in multiple social media tasks.
arXiv Detail & Related papers (2024-02-20T14:02:45Z)
- Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics.
Our method surpasses existing state-of-the-art in experimental evaluations for both zero-shot and supervised settings.
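A minimal sketch of graph-based propagation in general, assuming a toy social graph and hand-assigned belief scores; this is not the SocialSense implementation, and the damping scheme is an illustrative choice.

```python
# Minimal sketch: propagate belief scores over a small social graph by repeatedly
# mixing each user's score with the average score of their neighbours.
def propagate_beliefs(graph, beliefs, steps=3, damping=0.5):
    """graph: {user: [neighbours]}, beliefs: {user: score in [-1, 1]}."""
    for _ in range(steps):
        updated = {}
        for user, neighbours in graph.items():
            if neighbours:
                neighbour_mean = sum(beliefs[n] for n in neighbours) / len(neighbours)
            else:
                neighbour_mean = beliefs[user]
            # Keep part of the user's own belief, absorb part of the neighbourhood's.
            updated[user] = damping * beliefs[user] + (1 - damping) * neighbour_mean
        beliefs = updated
    return beliefs

graph = {"a": ["b", "c"], "b": ["a"], "c": ["a", "d"], "d": ["c"]}
beliefs = {"a": 1.0, "b": 0.0, "c": -0.5, "d": 0.0}
print(propagate_beliefs(graph, beliefs))
```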
arXiv Detail & Related papers (2023-10-20T06:17:02Z)
- CoSyn: Detecting Implicit Hate Speech in Online Conversations Using a Context Synergized Hyperbolic Network [52.85130555886915]
CoSyn is a context-synergized neural network that explicitly incorporates user- and conversational context for detecting implicit hate speech in online conversations.
We show that CoSyn outperforms all our baselines in detecting implicit hate speech with absolute improvements in the range of 1.24% - 57.8%.
arXiv Detail & Related papers (2023-03-02T17:30:43Z)
- Predicting Hateful Discussions on Reddit using Graph Transformer Networks and Communal Context [9.4337569682766]
We propose a system to predict harmful discussions on social media platforms.
Our solution uses contextual deep language models and integrates state-of-the-art Graph Transformer Networks.
We evaluate our approach on 333,487 Reddit discussions from various communities.
arXiv Detail & Related papers (2023-01-10T23:47:13Z)
- Countering Malicious Content Moderation Evasion in Online Social Networks: Simulation and Detection of Word Camouflage [64.78260098263489]
Twisting and camouflaging keywords are among the most used techniques to evade platform content moderation systems.
This article contributes significantly to countering malicious information by developing multilingual tools to simulate and detect new methods of content moderation evasion.
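As a toy illustration of the kind of evasion being simulated and detected, the following sketch applies naive character substitutions ("leetspeak") to a flagged keyword and normalizes text before keyword matching; the substitution table and helper names are assumptions, not the article's multilingual tooling.

```python
# Minimal sketch: simulate simple character-substitution camouflage of a flagged
# keyword, and normalize text back so a keyword filter can still match it.
# Real-world evasion techniques are far more varied than this.
SUBSTITUTIONS = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "$"}
REVERSE = {v: k for k, v in SUBSTITUTIONS.items()}

def camouflage(word: str) -> str:
    """Apply naive leetspeak substitutions to a word."""
    return "".join(SUBSTITUTIONS.get(c, c) for c in word.lower())

def normalize(text: str) -> str:
    """Undo the same substitutions before running a keyword-based filter."""
    return "".join(REVERSE.get(c, c) for c in text.lower())

banned = {"hate"}
post = f"I really {camouflage('hate')} this group"      # "I really h4t3 this group"
detected = any(term in normalize(post) for term in banned)
print(post, "->", detected)                               # True
```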
arXiv Detail & Related papers (2022-12-27T16:08:49Z)
- Assessing the impact of contextual information in hate speech detection [0.48369513656026514]
We provide a novel corpus for contextualized hate speech detection based on user responses to news posts from media outlets on Twitter.
This corpus was collected in the Rioplatense dialectal variety of Spanish and focuses on hate speech associated with the COVID-19 pandemic.
arXiv Detail & Related papers (2022-10-02T09:04:47Z)
- Aggression and "hate speech" in communication of media users: analysis of control capabilities [50.591267188664666]
The authors studied the possibilities for mutual influence among users of new media.
They found a high level of aggression and hate speech when users discussed an urgent social problem: measures to fight COVID-19.
The results can be useful for developing media content in a modern digital environment.
arXiv Detail & Related papers (2022-08-25T15:53:32Z)
- Anti-Asian Hate Speech Detection via Data Augmented Semantic Relation Inference [4.885207279350052]
We propose a novel approach to leverage sentiment hashtags to enhance hate speech detection in a natural language inference framework.
We design a novel framework SRIC that simultaneously performs two tasks: (1) semantic relation inference between online posts and sentiment hashtags, and (2) sentiment classification on these posts.
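A minimal multi-task sketch of that two-head setup, assuming a shared stand-in encoder and arbitrary head sizes; it is not the SRIC architecture or code, only an illustration of jointly inferring a post-hashtag relation and a post sentiment from shared representations.

```python
# Minimal sketch (not the SRIC release): one shared text encoder feeding two heads,
# (1) a relation head over a post/hashtag pair and (2) a sentiment head over the
# post alone. Encoder choice and head sizes are illustrative assumptions.
import torch
from torch import nn

class SRICSketch(nn.Module):
    def __init__(self, vocab_size=30522, dim=256):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)    # stand-in text encoder
        self.relation_head = nn.Linear(2 * dim, 3)       # post-hashtag relation
        self.sentiment_head = nn.Linear(dim, 2)          # post sentiment

    def forward(self, post_ids, hashtag_ids):
        post = self.embed(post_ids)
        tag = self.embed(hashtag_ids)
        relation = self.relation_head(torch.cat([post, tag], dim=-1))
        sentiment = self.sentiment_head(post)
        return relation, sentiment

model = SRICSketch()
post_ids = torch.randint(0, 30522, (4, 32))      # 4 posts, 32 token ids each
hashtag_ids = torch.randint(0, 30522, (4, 4))    # 4 hashtags, 4 token ids each
relation, sentiment = model(post_ids, hashtag_ids)
# Training would sum a loss over both heads so the tasks regularize each other.
print(relation.shape, sentiment.shape)           # torch.Size([4, 3]) torch.Size([4, 2])
```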
arXiv Detail & Related papers (2022-04-14T15:03:35Z)
- Interpretable Multi-Modal Hate Speech Detection [32.36781061930129]
We propose a deep neural multi-modal model that can effectively capture the semantics of the text along with socio-cultural context in which a particular hate expression is made.
Our model is able to outperform the existing state-of-the-art hate speech classification approaches.
arXiv Detail & Related papers (2021-03-02T10:12:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.