MEWS: Real-time Social Media Manipulation Detection and Analysis
- URL: http://arxiv.org/abs/2205.05783v2
- Date: Fri, 13 May 2022 00:37:18 GMT
- Title: MEWS: Real-time Social Media Manipulation Detection and Analysis
- Authors: Trenton W. Ford, William Theisen, Michael Yankoski, Tom Henry, Farah
Khashman, Katherine R. Dearstyne and Tim Weninger
- Abstract summary: MEWS identifies manipulated media items as they arise and identifies when these particular items begin trending on individual social media platforms or even across multiple platforms.
The emergence of a novel manipulation followed by rapid diffusion of the manipulated content suggests a disinformation campaign.
- Score: 5.1568081122003395
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article presents a beta-version of MEWS (Misinformation Early Warning
System). It describes the various aspects of the ingestion, manipulation
detection, and graphing algorithms employed to determine--in near
real-time--the relationships between social media images as they emerge and
spread on social media platforms. By combining these various technologies into
a single processing pipeline, MEWS can identify manipulated media items as they
arise and identify when these particular items begin trending on individual
social media platforms or even across multiple platforms. The emergence of a
novel manipulation followed by rapid diffusion of the manipulated content
suggests a disinformation campaign.
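The abstract outlines a pipeline (ingest media, detect manipulation, link near-duplicate images into a graph, and flag manipulations that spread quickly or across platforms) without implementation detail. The Python below is a minimal hypothetical sketch of that idea, not the actual MEWS implementation: MediaItem, SimilarityGraph, and flag_trending_manipulations are invented names, the manipulation detector is a placeholder, and a real system would use a learned forensic model and a proper perceptual hash (e.g., pHash) over streaming data.

```python
"""Hypothetical sketch of an ingest -> detect -> graph -> trend-flag pipeline.
All names are illustrative; this is not the MEWS codebase."""
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class MediaItem:
    item_id: str
    platform: str      # e.g. "twitter", "reddit"
    timestamp: float   # unix seconds
    phash: int         # stand-in for a perceptual hash of the image


def hamming(a: int, b: int) -> int:
    """Bit distance between two perceptual hashes."""
    return bin(a ^ b).count("1")


def is_manipulated(item: MediaItem) -> bool:
    # Placeholder: a real pipeline would run a forensic manipulation detector
    # on the decoded image; here the label is faked from the item id.
    return item.item_id.startswith("manip")


class SimilarityGraph:
    """Links items whose hashes are within a Hamming-distance threshold."""

    def __init__(self, threshold: int = 8):
        self.threshold = threshold
        self.items: list[MediaItem] = []
        self.edges: dict[str, set[str]] = defaultdict(set)

    def add(self, item: MediaItem) -> None:
        for other in self.items:
            if hamming(item.phash, other.phash) <= self.threshold:
                self.edges[item.item_id].add(other.item_id)
                self.edges[other.item_id].add(item.item_id)
        self.items.append(item)

    def cluster_of(self, item_id: str) -> set[str]:
        # Breadth-first search over near-duplicate edges.
        seen, queue = {item_id}, [item_id]
        while queue:
            for nbr in self.edges[queue.pop()]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        return seen


def flag_trending_manipulations(graph, window_s=3600.0, min_items=3, min_platforms=2):
    """Flag clusters containing a manipulated item that spread fast or cross-platform."""
    by_id = {i.item_id: i for i in graph.items}
    flagged, seen_clusters = [], set()
    for item in graph.items:
        if not is_manipulated(item):
            continue
        cluster = [by_id[i] for i in graph.cluster_of(item.item_id)]
        recent = [c for c in cluster if abs(c.timestamp - item.timestamp) <= window_s]
        key = frozenset(c.item_id for c in recent)
        if (len(recent) >= min_items
                and len({c.platform for c in recent}) >= min_platforms
                and key not in seen_clusters):
            seen_clusters.add(key)
            flagged.append(sorted(key))
    return flagged


if __name__ == "__main__":
    g = SimilarityGraph()
    for it in [
        MediaItem("orig-1", "twitter", 0.0, 0xF0F0F0F0),
        MediaItem("manip-1", "twitter", 100.0, 0xF0F0F0F1),        # near-duplicate
        MediaItem("manip-1-share", "reddit", 200.0, 0xF0F0F0F1),   # cross-platform share
        MediaItem("unrelated", "twitter", 50.0, 0x0F0F0F0F),
    ]:
        g.add(it)
    print(flag_trending_manipulations(g))  # [['manip-1', 'manip-1-share', 'orig-1']]
```

In a real near real-time deployment the quadratic pairwise comparison shown here would be replaced with an index over hashes (for example a BK-tree or locality-sensitive hashing) so new items can be linked as they are ingested.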
Related papers
- AMMeBa: A Large-Scale Survey and Dataset of Media-Based Misinformation In-The-Wild [1.4193873432298625]
We show the results of a two-year study using human raters to annotate online media-based misinformation.
We show the rise of generative AI-based content in misinformation claims.
We also show that "simple" methods, particularly context manipulations, have dominated historically.
arXiv Detail & Related papers (2024-05-19T23:05:53Z)
- Detecting and Grounding Multi-Modal Media Manipulation and Beyond [93.08116982163804]
We highlight a new research problem for multi-modal fake media, namely Detecting and Grounding Multi-Modal Media Manipulation (DGM4).
DGM4 aims to not only detect the authenticity of multi-modal media, but also ground the manipulated content.
We propose a novel HierArchical Multi-modal Manipulation rEasoning tRansformer (HAMMER) to fully capture the fine-grained interaction between different modalities.
arXiv Detail & Related papers (2023-09-25T15:05:46Z)
- ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we have proposed a data collection schema and curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z)
- Fighting Malicious Media Data: A Survey on Tampering Detection and Deepfake Detection [115.83992775004043]
Recent advances in deep learning, particularly deep generative models, open the doors for producing perceptually convincing images and videos at a low cost.
This paper provides a comprehensive review of the current media tampering detection approaches, and discusses the challenges and trends in this field for future research.
arXiv Detail & Related papers (2022-12-12T02:54:08Z)
- Misinformation Detection in Social Media Video Posts [0.4724825031148411]
Short-form video on social media platforms has become a critical challenge for social media providers.
We develop methods to detect misinformation in social media posts, exploiting modalities such as video and text.
We collect 160,000 video posts from Twitter, and leverage self-supervised learning to learn expressive representations of joint visual and textual data.
arXiv Detail & Related papers (2022-02-15T20:14:54Z)
- MONITOR: A Multimodal Fusion Framework to Assess Message Veracity in Social Networks [0.0]
Users of social networks tend to post and share content with little restraint.
Rumors and fake news can quickly spread on a huge scale.
This may pose a threat to the credibility of social media and can cause serious consequences in real life.
arXiv Detail & Related papers (2021-09-06T07:41:21Z)
- Technological Approaches to Detecting Online Disinformation and Manipulation [0.0]
Propaganda and disinformation have moved to the online environment because, over the last decade, digital information channels have radically increased in popularity as a news source.
This chapter presents an overview of computer-supported approaches to detecting disinformation and manipulative techniques, organized by several criteria.
arXiv Detail & Related papers (2021-08-26T09:28:50Z)
- Streaming Social Event Detection and Evolution Discovery in Heterogeneous Information Networks [90.3475746663728]
Events happen in the real world and in real time; they can be planned and organized for occasions such as social gatherings, festival celebrations, influential meetings, or sports activities.
Social media platforms generate a lot of real-time text information regarding public events with different topics.
However, mining social events is challenging because events typically exhibit heterogeneous textual content and their metadata are often ambiguous.
arXiv Detail & Related papers (2021-04-02T02:13:10Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms like Facebook may elicit the emergence of echo-chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
- Leveraging Multi-Source Weak Social Supervision for Early Detection of Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which can cause confusion and chaos unless it is detected early and mitigated.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
arXiv Detail & Related papers (2020-04-03T18:26:33Z)
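The entry above describes combining a small clean dataset with weak signals from social engagements and using meta-learning to estimate the quality of each weak instance. The snippet below is a generic example-reweighting sketch in that spirit, following the one-step-lookahead idea from learning-to-reweight methods rather than the paper's actual framework; the linear model and random tensors are placeholders for real post encoders and weak-label sources.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-ins: a real system would encode post text and social-context features.
model = nn.Linear(16, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x_weak = torch.randn(32, 16)          # weakly labelled posts
y_weak = torch.randint(0, 2, (32,))   # noisy labels derived from weak social signals
x_clean = torch.randn(8, 16)          # small, manually verified clean set
y_clean = torch.randint(0, 2, (8,))

for step in range(100):
    # 1) Give each weak example a trainable weight, initialised to zero.
    eps = torch.zeros(x_weak.size(0), requires_grad=True)
    weak_loss = (eps * F.cross_entropy(model(x_weak), y_weak, reduction="none")).sum()

    # 2) One-step lookahead: parameters we *would* get by training on this weighting.
    grads = torch.autograd.grad(weak_loss, list(model.parameters()), create_graph=True)
    w_look, b_look = [p - 0.1 * g for p, g in zip(model.parameters(), grads)]
    clean_loss = F.cross_entropy(F.linear(x_clean, w_look, b_look), y_clean)

    # 3) Instance quality = how much up-weighting the example would reduce clean loss.
    weights = torch.clamp(-torch.autograd.grad(clean_loss, eps)[0], min=0.0)
    weights = weights / (weights.sum() + 1e-8)

    # 4) Ordinary update of the model using the estimated instance weights.
    opt.zero_grad()
    loss = (weights.detach() * F.cross_entropy(model(x_weak), y_weak, reduction="none")).sum()
    loss.backward()
    opt.step()

print("final weighted training loss:", float(loss))
```

The key step is that the gradient of the clean-batch loss with respect to the per-example weights indicates which weak instances help or hurt generalization, so noisy social-supervision labels are down-weighted automatically.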
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.