Modes of Analyzing Disinformation Narratives With AI/ML/Text Mining to Assist in Mitigating the Weaponization of Social Media
- URL: http://arxiv.org/abs/2405.15987v1
- Date: Sat, 25 May 2024 00:02:14 GMT
- Title: Modes of Analyzing Disinformation Narratives With AI/ML/Text Mining to Assist in Mitigating the Weaponization of Social Media
- Authors: Andy Skumanich, Han Kyul Kim
- Abstract summary: This paper highlights the developing need for quantitative modes for capturing and monitoring malicious communication in social media.
There has been a deliberate "weaponization" of messaging through the use of social networks including by politically oriented entities both state sponsored and privately run.
Despite attempts to introduce moderation on major platforms like Facebook and X/Twitter, there are now established alternative social networks that offer completely unmoderated spaces.
- Score: 0.8287206589886879
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper highlights the growing need for quantitative modes of capturing and monitoring malicious communication in social media. There has been a deliberate "weaponization" of messaging through social networks, including by politically oriented entities, both state-sponsored and privately run. The article identifies a use of AI/ML to characterize generalized "mal-info," a broad term covering deliberate malicious narratives, akin to hate speech, that adversely impact society. A key point of the discussion is that this mal-info will dramatically increase in volume, and sharable quantitative tools will become essential to support human expert intervention. Despite attempts to introduce moderation on major platforms like Facebook and X/Twitter, there are now established alternative social networks that offer completely unmoderated spaces. The paper presents an introduction to these platforms and the initial results of a qualitative and semi-quantitative analysis of characteristic mal-info posts. The authors perform a rudimentary text mining function as a preliminary characterization in order to evaluate modes for better-automated monitoring. The analysis examines several inflammatory terms using text analysis and, importantly, discusses the use of generative algorithms by one political agent in particular, providing some examples of the potential risks to society. The latter is of grave concern, and monitoring tools must be established. This paper presents a preliminary step toward selecting relevant sources and setting a foundation for characterizing the mal-info that must be monitored. The AI/ML methods provide a means for semi-quantitative signature capture. The impending use of "mal-GenAI" is presented.
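The rudimentary text mining described in the abstract can be illustrated with a simple term-frequency signature over a watch-list of inflammatory terms. This is a hedged sketch only: the watch-list words and the posts below are invented placeholders, not terms or data from the paper, and matching is exact (no stemming).

```python
from collections import Counter
import re

# Hypothetical watch-list of inflammatory terms (placeholders, not from the paper)
WATCH_TERMS = {"traitor", "invasion", "purge"}

def term_signature(posts):
    """Count occurrences of watch-list terms across a corpus of posts,
    yielding a crude per-term frequency signature."""
    counts = Counter()
    for post in posts:
        tokens = re.findall(r"[a-z']+", post.lower())
        counts.update(t for t in tokens if t in WATCH_TERMS)
    return counts

posts = [
    "the real invasion is already here",
    "time to purge them all",
]
print(term_signature(posts))  # Counter({'invasion': 1, 'purge': 1})
```

Tracked over time, such a signature gives the kind of semi-quantitative trend that the paper argues should feed human expert review rather than replace it.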
Related papers
- Modeling Political Orientation of Social Media Posts: An Extended Analysis [0.0]
Developing machine learning models to characterize political polarization on online social media presents significant challenges.
These challenges mainly stem from various factors such as the lack of annotated data, presence of noise in social media datasets, and the sheer volume of data.
We introduce two methods that leverage on news media bias and post content to label social media posts.
We demonstrate that current machine learning models can exhibit improved performance in predicting political orientation of social media posts.
arXiv Detail & Related papers (2023-11-21T03:34:20Z)
- Countering Misinformation via Emotional Response Generation [15.383062216223971]
The proliferation of misinformation on social media platforms (SMPs) poses a significant danger to public health, social cohesion, and democracy.
Previous research has shown how social correction can be an effective way to curb misinformation.
We present VerMouth, the first large-scale dataset comprising roughly 12 thousand claim-response pairs.
arXiv Detail & Related papers (2023-11-17T15:37:18Z)
- Unsupervised Sentiment Analysis of Plastic Surgery Social Media Posts [91.3755431537592]
The massive collection of user posts across social media platforms is primarily untapped for artificial intelligence (AI) use cases.
Natural language processing (NLP) is a subfield of AI that leverages bodies of documents, known as corpora, to train computers in human-like language understanding.
This study demonstrates that the applied results of unsupervised analysis allow a computer to predict either negative, positive, or neutral user sentiment towards plastic surgery.
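The three-way (negative/positive/neutral) classification mentioned above can be contrasted with the simplest possible baseline: a hand-built lexicon count. This is an illustrative sketch only; the word lists are invented placeholders, and the cited study uses unsupervised NLP, not this approach.

```python
# Minimal lexicon-based sentiment baseline (illustrative word lists only;
# the cited study uses unsupervised learning, not a fixed lexicon).
POS = {"love", "great", "happy", "amazing"}
NEG = {"bad", "pain", "regret", "awful"}

def sentiment(text: str) -> str:
    """Classify a post as positive, negative, or neutral by lexicon counts."""
    tokens = text.lower().split()
    score = sum(t in POS for t in tokens) - sum(t in NEG for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love the great results"))  # positive
```

Unsupervised methods aim to recover such polarity groupings from the corpus itself, without the manual lexicon this baseline depends on.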
arXiv Detail & Related papers (2023-07-05T20:16:20Z)
- ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we have proposed a data collection schema and curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z)
- Qualitative Analysis of a Graph Transformer Approach to Addressing Hate Speech: Adapting to Dynamically Changing Content [8.393770595114763]
We offer a detailed qualitative analysis of this solution for hate speech detection in social networks.
A key insight is that the focus on reasoning about the concept of context positions us well to be able to support multi-modal analysis of online posts.
We conclude with a reflection on how the problem we are addressing relates especially well to the theme of dynamic change.
arXiv Detail & Related papers (2023-01-25T23:32:32Z)
- A Keyword Based Approach to Understanding the Overpenalization of Marginalized Groups by English Marginal Abuse Models on Twitter [2.9604738405097333]
Harmful content detection models tend to have higher false positive rates for content from marginalized groups.
We propose a principled approach to detecting and measuring the severity of potential harms associated with a text-based model.
We apply our methodology to audit Twitter's English marginal abuse model, which is used for removing amplification eligibility of marginally abusive content.
arXiv Detail & Related papers (2022-10-07T20:28:00Z)
- Aggression and "hate speech" in communication of media users: analysis of control capabilities [50.591267188664666]
The authors studied the possibilities of mutual influence among users in new media.
They found a high level of aggression and hate speech in discussions of an urgent social problem: measures to fight COVID-19.
The results can be useful for developing media content in a modern digital environment.
arXiv Detail & Related papers (2022-08-25T15:53:32Z)
- Author Clustering and Topic Estimation for Short Texts [69.54017251622211]
We propose a novel model that expands on the Latent Dirichlet Allocation by modeling strong dependence among the words in the same document.
We also simultaneously cluster users, removing the need for post-hoc cluster estimation.
Our method performs as well as, or better than, traditional approaches to problems arising in short text.
arXiv Detail & Related papers (2021-06-15T20:55:55Z)
- Towards Socially Intelligent Agents with Mental State Transition and Human Utility [97.01430011496576]
We propose to incorporate a mental state and utility model into dialogue agents.
The hybrid mental state extracts information from both the dialogue and event observations.
The utility model is a ranking model that learns human preferences from a crowd-sourced social commonsense dataset.
arXiv Detail & Related papers (2021-03-12T00:06:51Z)
- Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create a NeuralNews dataset composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.