Multimodal Automated Fact-Checking: A Survey
- URL: http://arxiv.org/abs/2305.13507v3
- Date: Wed, 25 Oct 2023 18:43:02 GMT
- Title: Multimodal Automated Fact-Checking: A Survey
- Authors: Mubashara Akhtar, Michael Schlichtkrull, Zhijiang Guo, Oana Cocarascu,
Elena Simperl, Andreas Vlachos
- Abstract summary: Multimodal misinformation is perceived as more credible by humans, and spreads faster than its text-only counterparts.
We conceptualise a framework for automated fact-checking (AFC) including subtasks unique to multimodal misinformation.
We focus on four modalities prevalent in real-world fact-checking: text, image, audio, and video.
- Score: 20.533353006768497
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Misinformation is often conveyed in multiple modalities, e.g. a miscaptioned
image. Multimodal misinformation is perceived as more credible by humans, and
spreads faster than its text-only counterparts. While an increasing body of
research investigates automated fact-checking (AFC), previous surveys mostly
focus on text. In this survey, we conceptualise a framework for AFC including
subtasks unique to multimodal misinformation. Furthermore, we discuss related
terms used in different communities and map them to our framework. We focus on
four modalities prevalent in real-world fact-checking: text, image, audio, and
video. We survey benchmarks and models, and discuss limitations and promising
directions for future research.
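As a concrete illustration of the framework the abstract describes, below is a minimal sketch of an AFC pipeline decomposed into claim detection, evidence retrieval, and verdict prediction, with multimodal claims carrying media alongside text. All type and function names here are hypothetical placeholders for illustration, not the survey's own API or implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative types for the pipeline the survey describes: claim
# detection/extraction -> evidence retrieval -> verdict prediction ->
# justification production. All names are hypothetical.

@dataclass
class Claim:
    text: str
    image_path: Optional[str] = None   # multimodal claims pair text with media
    audio_path: Optional[str] = None
    video_path: Optional[str] = None

@dataclass
class Verdict:
    label: str          # e.g. "supported", "refuted", "not enough info"
    justification: str  # human-readable explanation of the verdict

def detect_claims(post_text: str) -> list[Claim]:
    """Claim detection: pick out check-worthy claims from raw content.
    Placeholder: treats the whole post as one claim."""
    return [Claim(text=post_text)]

def retrieve_evidence(claim: Claim) -> list[str]:
    """Evidence retrieval: gather textual/visual evidence for the claim.
    Placeholder: no retrieval backend wired in."""
    return []

def predict_verdict(claim: Claim, evidence: list[str]) -> Verdict:
    """Verdict prediction: compare the claim against retrieved evidence.
    Placeholder: abstains when no evidence was retrieved."""
    if not evidence:
        return Verdict("not enough info", "No evidence retrieved.")
    return Verdict("supported", f"Backed by {len(evidence)} source(s).")

def fact_check(post_text: str) -> list[Verdict]:
    """End-to-end pipeline over every claim detected in a post."""
    return [predict_verdict(c, retrieve_evidence(c)) for c in detect_claims(post_text)]
```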
Related papers
- Multimodal Large Language Models to Support Real-World Fact-Checking [80.41047725487645]
Multimodal large language models (MLLMs) carry the potential to support humans in processing vast amounts of information.
While MLLMs are already being used as a fact-checking tool, their abilities and limitations in this regard are understudied.
We propose a framework for systematically assessing the capacity of current multimodal models to facilitate real-world fact-checking.
arXiv Detail & Related papers (2024-03-06T11:32:41Z)
- Multi-modal Stance Detection: New Datasets and Model [56.97470987479277]
We study multi-modal stance detection for tweets consisting of texts and images.
We propose a simple yet effective Targeted Multi-modal Prompt Tuning framework (TMPT).
TMPT achieves state-of-the-art performance in multi-modal stance detection.
arXiv Detail & Related papers (2024-02-22T05:24:19Z)
- Asking Multimodal Clarifying Questions in Mixed-Initiative Conversational Search [89.1772985740272]
In mixed-initiative conversational search systems, clarifying questions are used to help users who struggle to express their intentions in a single query.
We hypothesize that in scenarios where multimodal information is pertinent, the clarification process can be improved by using non-textual information.
We collect a dataset named Melon that contains over 4k multimodal clarifying questions, enriched with over 14k images.
Several analyses are conducted to understand the importance of multimodal contents during the query clarification phase.
arXiv Detail & Related papers (2024-02-12T16:04:01Z)
- Detecting and Grounding Multi-Modal Media Manipulation and Beyond [93.08116982163804]
We highlight a new research problem for multi-modal fake media, namely Detecting and Grounding Multi-Modal Media Manipulation (DGM4).
DGM4 aims to not only detect the authenticity of multi-modal media, but also ground the manipulated content.
We propose a novel HierArchical Multi-modal Manipulation rEasoning tRansformer (HAMMER) to fully capture the fine-grained interaction between different modalities.
arXiv Detail & Related papers (2023-09-25T15:05:46Z)
- MM-Claims: A Dataset for Multimodal Claim Detection in Social Media [7.388174516838141]
We introduce a novel dataset, MM-Claims, which consists of tweets and corresponding images covering three topics: COVID-19, Climate Change, and Technology (broadly construed).
We describe the dataset in detail, evaluate strong unimodal and multimodal baselines, and analyze the potential and drawbacks of current models (a minimal late-fusion baseline of the kind used in such evaluations is sketched after this list).
arXiv Detail & Related papers (2022-05-04T10:43:58Z)
- Logically at the Factify 2022: Multimodal Fact Verification [2.8914815569249823]
This paper describes our participant system for the multi-modal fact verification (Factify) challenge at AAAI 2022.
Two baseline approaches are proposed and explored including an ensemble model and a multi-modal attention network.
Our best model ranked first on the leaderboard, obtaining a weighted average F-measure of 0.77 on both the validation and test sets.
arXiv Detail & Related papers (2021-12-16T23:34:07Z)
- Open-Domain, Content-based, Multi-modal Fact-checking of Out-of-Context Images via Online Resources [70.68526820807402]
A real image is re-purposed to support other narratives by misrepresenting its context and/or elements.
Our goal is an inspectable method that automates this time-consuming and reasoning-intensive process by fact-checking the image-context pairing.
Our work offers the first step and benchmark for open-domain, content-based, multi-modal fact-checking.
arXiv Detail & Related papers (2021-11-30T19:36:20Z)
- WebQA: Multihop and Multimodal QA [49.683300706718136]
We propose to bridge the gap between the natural language and computer vision communities with WebQA.
Our challenge is to create a unified multimodal reasoning model that seamlessly transitions and reasons regardless of the source modality.
arXiv Detail & Related papers (2021-09-01T19:43:59Z)
- On the Role of Images for Analyzing Claims in Social Media [3.8142537449670963]
We present an empirical study on visual, textual, and multimodal models for the tasks of claim detection, claim check-worthiness detection, and conspiracy detection.
Recent work suggests that images are more influential than text and often appear alongside fake text.
arXiv Detail & Related papers (2021-03-17T12:40:27Z)
- A Survey on Multimodal Disinformation Detection [33.89798158570927]
We explore the state-of-the-art on multimodal disinformation detection covering various combinations of modalities.
We argue for the need to tackle disinformation detection by taking into account multiple modalities as well as both factuality and harmfulness.
arXiv Detail & Related papers (2021-03-13T18:04:17Z)
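Several of the papers above evaluate unimodal and multimodal baselines on image-text pairs (e.g. MM-Claims and the empirical study on the role of images). A minimal late-fusion baseline for such a setup might look like the sketch below: precomputed text and image embeddings are concatenated and classified by a small MLP. The embedding dimensions, the two-class setup, and the encoder choices are assumptions made for illustration, not any paper's reported architecture.

```python
import torch
import torch.nn as nn

# A minimal late-fusion claim classifier: concatenate precomputed text and
# image embeddings and classify with a small MLP. All dimensions and the
# binary (claim / not a claim) setup are illustrative assumptions.

class LateFusionClassifier(nn.Module):
    def __init__(self, text_dim: int = 768, image_dim: int = 512,
                 hidden: int = 256, num_classes: int = 2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, text_emb: torch.Tensor, image_emb: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([text_emb, image_emb], dim=-1)  # simple late fusion
        return self.head(fused)

if __name__ == "__main__":
    model = LateFusionClassifier()
    # Random tensors stand in for sentence- and image-encoder outputs
    # (e.g. a BERT [CLS] vector and a CLIP image embedding).
    text_emb = torch.randn(4, 768)
    image_emb = torch.randn(4, 512)
    logits = model(text_emb, image_emb)
    print(logits.shape)  # torch.Size([4, 2])
```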