Hater-O-Genius Aggression Classification using Capsule Networks
- URL: http://arxiv.org/abs/2105.11219v1
- Date: Mon, 24 May 2021 11:53:58 GMT
- Title: Hater-O-Genius Aggression Classification using Capsule Networks
- Authors: Parth Patwa, Srinivas PYKL, Amitava Das, Prerana Mukherjee, Viswanath Pulabaigari
- Abstract summary: We propose an end-to-end ensemble-based architecture to automatically identify and classify aggressive tweets.
Tweets are classified into three categories - Covertly Aggressive, Overtly Aggressive, and Non-Aggressive.
Our best model, an ensemble of Capsule Networks, achieves a 65.2% F1 score on the Facebook test set, a gain of 0.95% over the TRAC-2018 winners.
- Score: 6.318682674371969
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Countering hate speech on social media is one of the most challenging
social problems of our time. Among the various types of anti-social behavior on
social media, aggressive behavior is foremost: it harms the social lives and
mental health of social media users. In this paper, we propose an end-to-end
ensemble-based architecture to
automatically identify and classify aggressive tweets. Tweets are classified
into three categories - Covertly Aggressive, Overtly Aggressive, and
Non-Aggressive. The proposed architecture is an ensemble of smaller subnetworks
that are able to characterize the feature embeddings effectively. We
demonstrate qualitatively that each of the smaller subnetworks is able to learn
unique features. Our best model, an ensemble of Capsule Networks, achieves a
65.2% F1 score on the Facebook test set, a gain of 0.95% over the TRAC-2018
winners. The code and the model weights are
publicly available at
https://github.com/parthpatwa/Hater-O-Genius-Aggression-Classification-using-Capsule-Networks.
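To make the architecture described above concrete, the following is a minimal sketch of an ensemble of capsule-network text classifiers over the three aggression classes. It is not the authors' released implementation (see the GitHub link above); the layer sizes, the dynamic-routing details, and the score-averaging ensemble are illustrative assumptions, written here in PyTorch.

# Minimal, illustrative sketch (not the paper's released code): an ensemble of
# small capsule-network text classifiers that average their scores over the
# three aggression classes (Covertly Aggressive, Overtly Aggressive,
# Non-Aggressive). All hyperparameters below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 3  # CAG, OAG, NAG


def squash(s, dim=-1, eps=1e-8):
    # Capsule non-linearity: short vectors shrink toward 0, long ones toward unit length.
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)


class CapsuleTextClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, num_prim=32, prim_dim=8,
                 out_dim=16, routing_iters=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Primary capsules come from a 1-D convolution over token embeddings.
        self.conv = nn.Conv1d(emb_dim, num_prim * prim_dim, kernel_size=3, padding=1)
        self.prim_dim = prim_dim
        self.routing_iters = routing_iters
        # Transformation matrices mapping each primary capsule to each class capsule.
        self.W = nn.Parameter(0.01 * torch.randn(1, 1, NUM_CLASSES, out_dim, prim_dim))

    def forward(self, tokens):                       # tokens: (B, T) integer ids
        x = self.emb(tokens).transpose(1, 2)         # (B, emb_dim, T)
        u = self.conv(x).transpose(1, 2)             # (B, T, num_prim * prim_dim)
        u = squash(u.reshape(u.size(0), -1, self.prim_dim))   # (B, N, prim_dim)
        # Prediction vectors u_hat for every (primary capsule, class capsule) pair.
        u_hat = torch.matmul(self.W, u.unsqueeze(2).unsqueeze(-1)).squeeze(-1)
        # u_hat: (B, N, NUM_CLASSES, out_dim)
        b = torch.zeros(u_hat.size(0), u_hat.size(1), NUM_CLASSES, device=tokens.device)
        for _ in range(self.routing_iters):          # dynamic routing by agreement
            c = F.softmax(b, dim=-1).unsqueeze(-1)   # coupling coefficients
            v = squash((c * u_hat).sum(dim=1))       # (B, NUM_CLASSES, out_dim)
            b = b + (u_hat * v.unsqueeze(1)).sum(-1)
        return v.norm(dim=-1)                        # capsule lengths act as class scores


class CapsuleEnsemble(nn.Module):
    # Averages class scores from several independently initialized capsule nets.
    def __init__(self, vocab_size, n_members=3):
        super().__init__()
        self.members = nn.ModuleList(
            [CapsuleTextClassifier(vocab_size) for _ in range(n_members)])

    def forward(self, tokens):
        return torch.stack([m(tokens) for m in self.members]).mean(dim=0)


# Usage: class scores for a batch of two already-tokenized tweets of length 20.
model = CapsuleEnsemble(vocab_size=20000)
dummy_batch = torch.randint(1, 20000, (2, 20))
print(model(dummy_batch).shape)  # torch.Size([2, 3])

A real training setup would add a margin or cross-entropy loss over these scores; the exact configuration used in the paper is in the repository linked above.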
Related papers
- Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics.
Our method surpasses the existing state of the art in experimental evaluations for both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z)
- SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents [107.4138224020773]
We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and humans.
In our environment, agents role-play and interact under a wide variety of scenarios; they coordinate, collaborate, exchange, and compete with each other to achieve complex social goals.
We find that GPT-4 achieves a significantly lower goal completion rate than humans and struggles to exhibit social commonsense reasoning and strategic communication skills.
arXiv Detail & Related papers (2023-10-18T02:27:01Z)
- Understanding writing style in social media with a supervised contrastively pre-trained transformer [57.48690310135374]
Online Social Networks serve as fertile ground for harmful behavior, ranging from hate speech to the dissemination of disinformation.
We introduce the Style Transformer for Authorship Representations (STAR), trained on a large corpus of 4.5 x 10^6 authored texts derived from public sources.
Using a support base of 8 documents of 512 tokens, we can discern authors from sets of up to 1616 authors with at least 80% accuracy.
arXiv Detail & Related papers (2023-10-17T09:01:17Z)
- SocialVec: Social Entity Embeddings [1.4010916616909745]
This paper introduces SocialVec, a framework for eliciting social world knowledge from social networks.
We learn social embeddings for roughly 200,000 popular accounts from a sample of the Twitter network.
We exploit SocialVec embeddings for gauging the political bias of news sources in Twitter.
arXiv Detail & Related papers (2021-11-05T14:13:01Z)
- Adversarial Socialbot Learning via Multi-Agent Deep Hierarchical Reinforcement Learning [31.33996447671789]
We show that it is possible for adversaries to exploit computational learning mechanisms such as reinforcement learning (RL) to maximize the influence of socialbots while avoiding detection.
Our proposed policy networks train with a vast amount of synthetic graphs and generalize better than baselines on unseen real-life graphs.
This makes our approach a practical adversarial attack when deployed in a real-life setting.
arXiv Detail & Related papers (2021-10-20T16:49:26Z)
- Detecting Online Hate Speech: Approaches Using Weak Supervision and Network Embedding Models [2.3322477552758234]
We (i) propose a weak supervision deep learning model that quantitatively uncovers hateful users and (ii) present a novel qualitative analysis to uncover indirect hateful conversations.
We evaluate our model on 19.2M posts and show that our weak supervision model outperforms the baseline models in identifying indirect hateful interactions.
We also analyze a multilayer network, constructed from two types of user interactions in Gab (quote and reply) with interaction scores from the weak supervision model as edge weights, to predict hateful users.
arXiv Detail & Related papers (2020-07-24T18:13:52Z)
- Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media during the COVID-19 Crisis [51.39895377836919]
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis of 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms that implement news feed algorithms, such as Facebook, may elicit the emergence of echo chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
- Modeling Aggression Propagation on Social Media [4.99023186931786]
Cyberaggression has been studied in various contexts and online social platforms.
We study the propagation of aggression on social media using opinion dynamics.
We propose ways to model how aggression may propagate from one user to another, depending on how each user is connected to other aggressive or regular users.
arXiv Detail & Related papers (2020-02-24T09:50:49Z)
- TIES: Temporal Interaction Embeddings For Enhancing Social Media Integrity At Facebook [9.023847175654602]
We present a novel Temporal Interaction EmbeddingS (TIES) model designed to capture rogue social interactions and flag them for suitable follow-up actions.
TIES is a supervised, production-ready deep learning model that operates at Facebook-scale networks.
To show the real-world impact of TIES, we present a few applications, notably preventing the spread of misinformation, detecting fake accounts, and reducing ads payment risks.
arXiv Detail & Related papers (2020-02-18T22:56:40Z)
- Social Science Guided Feature Engineering: A Novel Approach to Signed Link Analysis [58.892336054718825]
Most existing work on link analysis focuses on unsigned social networks.
The existence of negative links motivates research into whether the properties and principles of signed networks differ from those of unsigned networks.
Recent findings suggest that properties of signed networks substantially differ from those of unsigned networks.
arXiv Detail & Related papers (2020-01-04T00:26:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.