Analysis of Online Toxicity Detection Using Machine Learning Approaches
- URL: http://arxiv.org/abs/2108.01062v1
- Date: Fri, 23 Apr 2021 04:29:13 GMT
- Title: Analysis of Online Toxicity Detection Using Machine Learning Approaches
- Authors: Anjum, Rahul Katarya
- Abstract summary: Social media and the internet have become an integral part of how people spread and consume information.
Almost half of the population is using social media to express their views and opinions.
Online hate speech is one of the drawbacks of social media nowadays, which needs to be controlled.
- Score: 6.548580592686076
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social media and the internet have become an integral part of how people
spread and consume information. Over a period of time, social media evolved
dramatically, and almost half of the population is using social media to
express their views and opinions. Online hate speech is one of the drawbacks of
social media nowadays, which needs to be controlled. In this paper, we examine
how hate speech originates, what its consequences are, and the trends in
machine-learning algorithms used to address the online hate speech problem.
This study contributes a systematic approach that helps researchers identify
new research directions, elucidates the shortcomings of existing studies and
models, and provides future directions to advance the field.
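As a concrete illustration of the kind of classical machine-learning pipeline such a survey covers, the sketch below trains a simple toxicity classifier using TF-IDF features and logistic regression. This is a minimal sketch under stated assumptions, not the method of any paper listed here; the file name toxic_comments.csv and its text/label columns are hypothetical.
```python
# Minimal sketch of a classical toxicity-detection pipeline (illustrative only).
# Assumption: a labeled CSV with "text" and "label" columns (1 = toxic, 0 = not toxic).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

df = pd.read_csv("toxic_comments.csv")  # hypothetical dataset
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42, stratify=df["label"]
)

# TF-IDF features feeding a linear classifier: a common baseline in this literature.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2, sublinear_tf=True)),
    ("clf", LogisticRegression(max_iter=1000, class_weight="balanced")),
])
pipeline.fit(X_train, y_train)
print(classification_report(y_test, pipeline.predict(X_test)))
```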
Related papers
- hateUS -- Analysis, impact of Social media use and Hate speech over University Student platforms: Case study, Problems, and Solutions [0.0]
The case study concerns social media use and hate speech in public debates among university students.
The use of NO phone times and NO phone zones is now popular in workplaces and family cultures.
Future challenges, including the health issues associated with social media use and hate speech, have a serious impact on the livelihood, freedom, and diverse communities of university students.
arXiv Detail & Related papers (2024-10-26T04:25:49Z)
- Community Shaping in the Digital Age: A Temporal Fusion Framework for Analyzing Discourse Fragmentation in Online Social Networks [45.58331196717468]
This research presents a framework for analyzing the dynamics of online communities in social media platforms.
By combining text classification and dynamic social network analysis, we uncover mechanisms driving community formation and evolution.
arXiv Detail & Related papers (2024-09-18T03:03:02Z)
- Topological Data Mapping of Online Hate Speech, Misinformation, and General Mental Health: A Large Language Model Based Study [6.803493330690884]
Recent advances in machine learning and large language models have made such an analysis possible.
In this study, we collected thousands of posts from carefully selected communities on the social media site Reddit.
We performed various machine-learning classifications based on embeddings in order to understand the role of hate speech/misinformation in various communities.
arXiv Detail & Related papers (2023-09-22T15:10:36Z)
- CoSyn: Detecting Implicit Hate Speech in Online Conversations Using a Context Synergized Hyperbolic Network [52.85130555886915]
CoSyn is a context-synergized neural network that explicitly incorporates user- and conversational context for detecting implicit hate speech in online conversations.
We show that CoSyn outperforms all our baselines in detecting implicit hate speech with absolute improvements in the range of 1.24% - 57.8%.
arXiv Detail & Related papers (2023-03-02T17:30:43Z)
- Aggression and "hate speech" in communication of media users: analysis of control capabilities [50.591267188664666]
The authors studied the possibilities of mutual influence among users in new media.
They found a high level of aggression and hate speech in discussions of an urgent social problem: measures to fight COVID-19.
Results can be useful for developing media content in a modern digital environment.
arXiv Detail & Related papers (2022-08-25T15:53:32Z)
- Adherence to Misinformation on Social Media Through Socio-Cognitive and Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z)
- Addressing Hate Speech with Data Science: An Overview from Computer Science Perspective [2.2940141855172027]
From a computer science perspective, addressing on-line hate speech is a challenging task that is attracting the attention of both industry (mainly social media platform owners) and academia.
We provide an overview of state-of-the-art data-science approaches - how they define hate speech, which tasks they solve to mitigate the phenomenon, and how they address these tasks.
We summarize the challenges and the open problems in the current data-science research and the future directions in this field.
arXiv Detail & Related papers (2021-03-18T19:19:44Z)
- DeepHate: Hate Speech Detection via Multi-Faceted Text Representations [8.192671048046687]
DeepHate is a novel deep learning model that combines multi-faceted text representations such as word embeddings, sentiments, and topical information (a simplified sketch of this feature-combination idea appears after this list).
We conduct extensive experiments and evaluate DeepHate on three large publicly available real-world datasets.
arXiv Detail & Related papers (2021-03-14T16:11:30Z)
- Analysing Social Media Network Data with R: Semi-Automated Screening of Users, Comments and Communication Patterns [0.0]
Communication on social media platforms is increasingly widespread across societies.
Fake news, hate speech and radicalizing elements are part of this modern form of communication.
A basic understanding of these mechanisms and communication patterns could help to counteract negative forms of communication.
arXiv Detail & Related papers (2020-11-26T14:52:01Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms like Facebook may elicit the emergence of echo-chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
- Mining Disinformation and Fake News: Concepts, Methods, and Recent Advancements [55.33496599723126]
Disinformation, including fake news, has become a global phenomenon due to its explosive growth.
Despite recent progress in detecting disinformation and fake news, the task remains non-trivial due to its complexity, diversity, multi-modality, and the costs of fact-checking or annotation.
arXiv Detail & Related papers (2020-01-02T21:01:02Z)
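Complementing the DeepHate entry above, the following is a hedged, simplified sketch of the general idea of combining multiple text facets (here, TF-IDF lexical features plus a crude lexicon-based sentiment score) before classification. It is not the DeepHate architecture; the toy negative-word lexicon and the example texts are assumptions for illustration.
```python
# Simplified sketch of combining lexical and sentiment "facets" of text
# before classification (NOT the DeepHate architecture; toy data/lexicon assumed).
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline

class SentimentScore(BaseEstimator, TransformerMixin):
    """Crude lexicon-based sentiment feature: fraction of negative words per text."""
    NEGATIVE = {"hate", "stupid", "idiot", "disgusting", "awful"}  # toy lexicon (assumption)

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        scores = []
        for text in X:
            tokens = text.lower().split()
            neg = sum(tok in self.NEGATIVE for tok in tokens)
            scores.append(neg / max(len(tokens), 1))
        return np.array(scores).reshape(-1, 1)

# Concatenate the TF-IDF facet with the sentiment facet, then classify.
model = Pipeline([
    ("features", FeatureUnion([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
        ("sentiment", SentimentScore()),
    ])),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Toy usage with hypothetical examples.
texts = ["you are awful and stupid", "have a nice day", "I hate this group", "great discussion"]
labels = [1, 0, 1, 0]
model.fit(texts, labels)
print(model.predict(["what a stupid idea", "thanks for sharing"]))
```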
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.