On the Role of Images for Analyzing Claims in Social Media
- URL: http://arxiv.org/abs/2103.09602v1
- Date: Wed, 17 Mar 2021 12:40:27 GMT
- Title: On the Role of Images for Analyzing Claims in Social Media
- Authors: Gullal S. Cheema and Sherzod Hakimov and Eric Müller-Budack and Ralph Ewerth
- Abstract summary: We present an empirical study on visual, textual, and multimodal models for the tasks of claim, claim check-worthiness, and conspiracy detection.
Recent work suggests that images are more influential than text and often appear alongside fake text.
- Score: 3.8142537449670963
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fake news is a severe problem in social media. In this paper, we present an
empirical study on visual, textual, and multimodal models for the tasks of
claim, claim check-worthiness, and conspiracy detection, all of which are
related to fake news detection. Recent work suggests that images are more
influential than text and often appear alongside fake text. To this end,
several multimodal models have been proposed in recent years that use images
along with text to detect fake news on social media sites like Twitter.
However, the role of images is not well understood for claim detection,
specifically using transformer-based textual and multimodal models. We
investigate state-of-the-art models for images, text (Transformer-based), and
multimodal information for four different datasets across two languages to
understand the role of images in the task of claim and conspiracy detection.
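
To make the multimodal setup concrete, below is a minimal sketch of a late-fusion claim classifier over tweet (text, image) pairs. This is not the exact architecture evaluated in the paper: the choice of a BERT text encoder, a ResNet-50 image encoder, concatenation fusion, and the MLP head are illustrative assumptions.

```python
# Hedged sketch: late-fusion multimodal claim classifier.
# Text branch (Transformer) + image branch (CNN) -> concatenation -> MLP.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast
from torchvision.models import resnet50


class MultimodalClaimClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Text branch: pretrained BERT, pooled [CLS] representation (768-d).
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        # Image branch: pretrained ResNet-50 with its classification head removed (2048-d).
        cnn = resnet50(weights="IMAGENET1K_V1")
        self.image_encoder = nn.Sequential(*list(cnn.children())[:-1])
        # Fusion: concatenate both feature vectors and classify with a small MLP.
        self.classifier = nn.Sequential(
            nn.Linear(768 + 2048, 512),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(512, num_classes),
        )

    def forward(self, input_ids, attention_mask, images):
        text_feat = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).pooler_output                                    # (batch, 768)
        img_feat = self.image_encoder(images).flatten(1)   # (batch, 2048)
        return self.classifier(torch.cat([text_feat, img_feat], dim=1))


# Usage example with dummy inputs.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = MultimodalClaimClassifier()
batch = tokenizer(["This tweet makes a factual claim."],
                  return_tensors="pt", padding=True, truncation=True)
images = torch.randn(1, 3, 224, 224)  # normalized RGB tweet image
logits = model(batch["input_ids"], batch["attention_mask"], images)
```

Unimodal baselines of the kind compared in the paper correspond to using only one of the two branches (text-only or image-only) with the same classification head.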
Related papers
- Multi-modal Stance Detection: New Datasets and Model [56.97470987479277]
We study multi-modal stance detection for tweets consisting of texts and images.
We propose a simple yet effective Targeted Multi-modal Prompt Tuning framework (TMPT).
TMPT achieves state-of-the-art performance in multi-modal stance detection.
arXiv Detail & Related papers (2024-02-22T05:24:19Z) - Benchmarking Robustness of Multimodal Image-Text Models under Distribution Shift [50.64474103506595]
We investigate the robustness of 12 popular open-sourced image-text models under common perturbations on five tasks.
Character-level perturbations constitute the most severe distribution shift for text, and zoom blur is the most severe shift for image data.
arXiv Detail & Related papers (2022-12-15T18:52:03Z) - Multiverse: Multilingual Evidence for Fake News Detection [71.51905606492376]
Multiverse is a new feature based on multilingual evidence that can be used for fake news detection.
The hypothesis that cross-lingual evidence can serve as a feature for fake news detection is confirmed.
arXiv Detail & Related papers (2022-11-25T18:24:17Z) - Multimodal Fake News Detection with Adaptive Unimodal Representation Aggregation [28.564442206829625]
AURA is a multimodal fake news detection network with adaptive unimodal representation aggregation.
We perform coarse-level fake news detection and cross-modal consistency learning based on the unimodal and multimodal representations.
Experiments on Weibo and Gossipcop show that AURA outperforms several state-of-the-art FND schemes.
arXiv Detail & Related papers (2022-06-12T14:06:55Z) - MM-Claims: A Dataset for Multimodal Claim Detection in Social Media [7.388174516838141]
We introduce a novel dataset, MM-Claims, which consists of tweets and corresponding images over three topics: COVID-19, Climate Change and broadly Technology.
We describe the dataset in detail, evaluate strong unimodal and multimodal baselines, and analyze the potential and drawbacks of current models.
arXiv Detail & Related papers (2022-05-04T10:43:58Z) - Multimodal Fake News Detection [1.929039244357139]
We perform a fine-grained classification of fake news on the Fakeddit dataset using both unimodal and multimodal approaches.
Some fake news categories such as Manipulated content, Satire or False connection strongly benefit from the use of images.
Using images also improves the results of the other categories, but with less impact.
arXiv Detail & Related papers (2021-12-09T10:57:18Z) - FNR: A Similarity and Transformer-Based Approach to Detect Multi-Modal Fake News in Social Media [4.607964446694258]
This work aims to analyze multi-modal features from texts and images in social media for detecting fake news.
We propose a Fake News Revealer (FNR) method that utilizes transfer learning to extract contextual and semantic features.
The results show that the proposed method achieves higher accuracy in detecting fake news than previous works.
arXiv Detail & Related papers (2021-12-02T11:12:09Z) - Multimodal Fusion with BERT and Attention Mechanism for Fake News Detection [0.0]
We present a novel method for detecting fake news by fusing multimodal features derived from textual and visual data.
Experimental results show that our approach outperforms the current state-of-the-art method on a public Twitter dataset by 3.1% in accuracy.
arXiv Detail & Related papers (2021-04-23T08:47:54Z) - NewsCLIPpings: Automatic Generation of Out-of-Context Multimodal Media [93.51739200834837]
We propose a dataset where both image and text are unmanipulated but mismatched.
We introduce several strategies for automatic retrieval of suitable images for the given captions.
Our large-scale automatically generated NewsCLIPpings dataset requires models to jointly analyze both modalities.
arXiv Detail & Related papers (2021-04-13T01:53:26Z) - News Image Steganography: A Novel Architecture Facilitates the Fake News Identification [52.83247667841588]
A large portion of fake news quotes untampered images from other sources with ulterior motives.
This paper proposes an architecture named News Image Steganography to reveal such inconsistency through GAN-based image steganography.
arXiv Detail & Related papers (2021-01-03T11:12:23Z) - Text as Neural Operator: Image Manipulation by Text Instruction [68.53181621741632]
In this paper, we study a setting that allows users to edit an image with multiple objects using complex text instructions to add, remove, or change the objects.
The inputs of the task are multimodal including (1) a reference image and (2) an instruction in natural language that describes desired modifications to the image.
We show that the proposed model performs favorably against recent strong baselines on three public datasets.
arXiv Detail & Related papers (2020-08-11T07:07:10Z)