MONITOR: A Multimodal Fusion Framework to Assess Message Veracity in
Social Networks
- URL: http://arxiv.org/abs/2109.02271v1
- Date: Mon, 6 Sep 2021 07:41:21 GMT
- Title: MONITOR: A Multimodal Fusion Framework to Assess Message Veracity in
Social Networks
- Authors: Abderrazek Azri (ERIC), Cécile Favre (ERIC), Nouria Harbi (ERIC),
Jérôme Darmont (ERIC), Camille Noûs
- Abstract summary: Users of social networks tend to post and share content with little restraint.
Rumors and fake news can quickly spread on a huge scale.
This may pose a threat to the credibility of social media and can cause serious consequences in real life.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Users of social networks tend to post and share content with little
restraint. Hence, rumors and fake news can quickly spread on a huge scale. This
may pose a threat to the credibility of social media and can cause serious
consequences in real life. Therefore, the task of rumor detection and
verification has become extremely important. Assessing the veracity of a social
media message (e.g., by fact checkers) involves analyzing the text of the
message, its context and any multimedia attachment. This is a very
time-consuming task that can be much helped by machine learning. In the
literature, most message veracity verification methods only exploit textual
contents and metadata. Very few take both textual and visual contents, and more
particularly images, into account. In this paper, we support the hypothesis
that exploiting all of the components of a social media post enhances the
accuracy of veracity detection. To further the state of the art, we first
propose a set of advanced image features inspired by the field of image
quality assessment, which contribute effectively to rumor detection. These
metrics are good indicators for detecting fake images, even those generated by
advanced techniques such as generative adversarial networks (GANs).
Then, we introduce the Multimodal fusiON framework to assess message veracIty
in social neTwORks (MONITOR), which exploits all message features (i.e., text,
social context, and image features) by supervised machine learning. Such
algorithms provide interpretability and explainability in the decisions taken,
which we believe is particularly important in the context of rumor
verification. Experimental results show that MONITOR can detect rumors with an
accuracy of 96% and 89% on the MediaEval benchmark and the FakeNewsNet dataset,
respectively. These results are significantly better than those of
state-of-the-art machine learning baselines.
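The fusion approach the abstract describes (concatenating text, social-context, and image-quality features, then classifying with an interpretable supervised learner) can be sketched minimally as follows. The specific features and the post fields used here are illustrative assumptions, not the paper's exact feature set:

```python
# Sketch of MONITOR-style early fusion: extract per-modality features,
# concatenate them into a single vector, then hand that vector to any
# interpretable supervised classifier (e.g., a decision tree or random forest).
import statistics

def text_features(text):
    """Simple textual cues: word count, exclamation and question densities."""
    n = max(len(text), 1)
    return [len(text.split()), text.count("!") / n, text.count("?") / n]

def social_context_features(post):
    """Social-context cues: retweet count, follower count, verified flag."""
    return [post["retweets"], post["followers"], float(post["verified"])]

def image_quality_features(pixels):
    """Image-quality-inspired cues over grayscale pixel values:
    mean intensity and contrast (population std dev) as rough IQA proxies."""
    return [statistics.mean(pixels), statistics.pstdev(pixels)]

def fuse(post):
    """Early fusion: concatenate all modality feature vectors."""
    return (text_features(post["text"])
            + social_context_features(post)
            + image_quality_features(post["pixels"]))

# Hypothetical post for illustration.
post = {"text": "BREAKING!!! Shark swims down flooded street!",
        "retweets": 5400, "followers": 120, "verified": False,
        "pixels": [12, 200, 34, 180, 90, 220, 15, 60]}
vector = fuse(post)
print(len(vector))  # 3 text + 3 social + 2 image = 8 features
```

A tree-based classifier over such a fused vector exposes feature importances directly, which is one way the framework's emphasis on interpretability can be realized.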
Related papers
- Multi-modal Stance Detection: New Datasets and Model [56.97470987479277]
We study multi-modal stance detection for tweets consisting of text and images.
We propose a simple yet effective Targeted Multi-modal Prompt Tuning framework (TMPT)
TMPT achieves state-of-the-art performance in multi-modal stance detection.
arXiv Detail & Related papers (2024-02-22T05:24:19Z)
- ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we propose a data collection schema and curate a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z)
- Harnessing the Power of Text-image Contrastive Models for Automatic Detection of Online Misinformation [50.46219766161111]
We develop a self-learning model to explore contrastive learning in the domain of misinformation identification.
Our model shows superior performance in detecting non-matched image-text pairs when training data is insufficient.
arXiv Detail & Related papers (2023-04-19T02:53:59Z)
- Multimodal Fake News Detection with Adaptive Unimodal Representation Aggregation [28.564442206829625]
AURA is a multimodal fake news detection network with adaptive unimodal representation aggregation.
We perform coarse-level fake news detection and cross-modal consistency learning according to the unimodal and multimodal representations.
Experiments on Weibo and Gossipcop prove that AURA can successfully beat several state-of-the-art FND schemes.
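Cross-modal consistency learning, as in AURA, rests on scoring how well the representations of different modalities agree. A minimal sketch, assuming a model has already produced embedding vectors for each modality (the vectors here are illustrative placeholders):

```python
# Cosine similarity as a cross-modal consistency score: embeddings of the
# text and the image of a post that tell the same story should point in
# similar directions; a low score hints at a mismatched (possibly fake) pair.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical unimodal embeddings for one post.
text_emb = [0.9, 0.1, 0.3]
image_emb = [0.8, 0.2, 0.4]

score = cosine(text_emb, image_emb)
print(score > 0.5)  # high score -> the two modalities agree
```

In a full system this score would feed into the detection loss alongside the classification objective, penalizing posts whose modalities disagree.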
arXiv Detail & Related papers (2022-06-12T14:06:55Z)
- Rumor Detection with Self-supervised Learning on Texts and Social Graph [101.94546286960642]
We propose contrastive self-supervised learning on heterogeneous information sources, so as to reveal their relations and characterize rumors better.
We term this framework Self-supervised Rumor Detection (SRD).
Extensive experiments on three real-world datasets validate the effectiveness of SRD for automatic rumor detection on social media.
arXiv Detail & Related papers (2022-04-19T12:10:03Z)
- Open-Domain, Content-based, Multi-modal Fact-checking of Out-of-Context Images via Online Resources [70.68526820807402]
A real image is re-purposed to support other narratives by misrepresenting its context and/or elements.
Our goal is an inspectable method that automates this time-consuming and reasoning-intensive process by fact-checking the image-context pairing.
Our work offers the first step and benchmark for open-domain, content-based, multi-modal fact-checking.
arXiv Detail & Related papers (2021-11-30T19:36:20Z)
- Calling to CNN-LSTM for Rumor Detection: A Deep Multi-channel Model for Message Veracity Classification in Microblogs [0.0]
Rumors can notably cause severe damage to individuals and society.
Most rumor detection approaches focus on rumor feature analysis and social features.
DeepMONITOR is based on deep neural networks and allows quite accurate automated rumor verification.
arXiv Detail & Related papers (2021-10-11T07:42:41Z)
- FR-Detect: A Multi-Modal Framework for Early Fake News Detection on Social Media Using Publishers Features [0.0]
Despite the advantages of social media in the news field, the lack of any control and verification mechanism has led to the spread of fake news.
We propose a highly accurate multi-modal framework, namely FR-Detect, using user-related and content-related features with early detection capability.
Experiments have shown that the publishers' features can improve the performance of content-based models by up to 13% in accuracy and 29% in F1-score.
arXiv Detail & Related papers (2021-09-10T12:39:00Z)
- Multimodal Fusion with BERT and Attention Mechanism for Fake News Detection [0.0]
We present a novel method for detecting fake news by fusing multimodal features derived from textual and visual data.
Experimental results showed that our approach outperforms the current state-of-the-art method on a public Twitter dataset by 3.1% in accuracy.
arXiv Detail & Related papers (2021-04-23T08:47:54Z)
- Cross-Media Keyphrase Prediction: A Unified Framework with Multi-Modality Multi-Head Attention and Image Wordings [63.79979145520512]
We explore the joint effects of texts and images in predicting the keyphrases for a multimedia post.
We propose a novel Multi-Modality Multi-Head Attention (M3H-Att) to capture the intricate cross-media interactions.
Our model significantly outperforms the previous state of the art based on traditional attention networks.
arXiv Detail & Related papers (2020-11-03T08:44:18Z)
- Including Images into Message Veracity Assessment in Social Media [0.0]
Social media has laid a fertile ground for the spread of rumors, which can significantly affect its credibility.
We propose a framework that explores two novel ways to assess the veracity of messages published on social networks by analyzing the credibility of both their textual and visual contents.
arXiv Detail & Related papers (2020-07-20T08:42:17Z)