"Image, Tell me your story!" Predicting the original meta-context of visual misinformation
- URL: http://arxiv.org/abs/2408.09939v2
- Date: Tue, 20 Aug 2024 08:59:22 GMT
- Title: "Image, Tell me your story!" Predicting the original meta-context of visual misinformation
- Authors: Jonathan Tonglet, Marie-Francine Moens, Iryna Gurevych
- Abstract summary: We introduce an automated system that grounds images in their original meta-context using the content of the image and textual evidence retrieved from the open web.
Our experiments show promising results while highlighting several open challenges in retrieval and reasoning.
- Score: 70.52796410062876
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: To assist human fact-checkers, researchers have developed automated approaches for visual misinformation detection. These methods assign veracity scores by identifying inconsistencies between the image and its caption, or by detecting forgeries in the image. However, they neglect a crucial point of the human fact-checking process: identifying the original meta-context of the image. By explaining what is actually true about the image, fact-checkers can better detect misinformation, focus their efforts on check-worthy visual content, engage in counter-messaging before misinformation spreads widely, and make their explanation more convincing. Here, we fill this gap by introducing the task of automated image contextualization. We create 5Pils, a dataset of 1,676 fact-checked images with question-answer pairs about their original meta-context. Annotations are based on the 5 Pillars fact-checking framework. We implement a first baseline that grounds the image in its original meta-context using the content of the image and textual evidence retrieved from the open web. Our experiments show promising results while highlighting several open challenges in retrieval and reasoning. We make our code and data publicly available.
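To make the baseline concrete: below is a minimal, illustrative sketch of such a pipeline, assuming the 5 Pillars are provenance, source, date, location, and motivation. Every function name and body is a hypothetical stand-in for reverse image search, evidence scraping, and an answer model, not the authors' implementation.
```python
# Minimal sketch of automated image contextualization: retrieve textual
# evidence for an image, then answer one meta-context question per pillar.
from dataclasses import dataclass

PILLARS = ["provenance", "source", "date", "location", "motivation"]

@dataclass
class Evidence:
    url: str
    text: str

def retrieve_evidence(image_path: str) -> list[Evidence]:
    """Hypothetical stand-in for reverse image search plus page scraping."""
    return [Evidence(url="https://example.org/report",
                     text="The photo was first published in 2019 ...")]

def answer_pillar(pillar: str, evidence: list[Evidence]) -> str:
    """Hypothetical stand-in for the reader model (e.g., an LLM prompted
    with the retrieved evidence); trivial placeholder logic here."""
    if not evidence:
        return "unanswerable"
    return f"{pillar}: " + " ".join(e.text for e in evidence)

def contextualize(image_path: str) -> dict[str, str]:
    evidence = retrieve_evidence(image_path)
    return {pillar: answer_pillar(pillar, evidence) for pillar in PILLARS}

if __name__ == "__main__":
    print(contextualize("checked_image.jpg"))
```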
Related papers
- Composed Image Retrieval for Remote Sensing [24.107610091033997]
This work introduces composed image retrieval to remote sensing.
It enables querying a large image archive with image examples complemented by a textual description.
A novel method that fuses image-to-image and text-to-image similarity is introduced; a minimal sketch of this fusion follows the entry.
arXiv Detail & Related papers (2024-05-24T14:18:31Z)
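Below is a minimal sketch of that similarity-fusion idea, assuming precomputed embeddings in a shared vision-language space (e.g., a CLIP-style encoder). The fusion weight `alpha`, the embedding dimension, and the random placeholder vectors are illustrative assumptions, not the paper's method.
```python
# Score each archive image by a convex combination of image-to-image and
# text-to-image similarity in a shared embedding space. Sketch only.
import torch
import torch.nn.functional as F

def fused_retrieval(query_img_emb, query_txt_emb, archive_embs, alpha=0.5, k=5):
    """Rank archive images by alpha * sim(image) + (1 - alpha) * sim(text)."""
    q_img = F.normalize(query_img_emb, dim=-1)
    q_txt = F.normalize(query_txt_emb, dim=-1)
    archive = F.normalize(archive_embs, dim=-1)
    scores = alpha * archive @ q_img + (1 - alpha) * archive @ q_txt
    return torch.topk(scores, k).indices

# Random placeholders; in practice these come from the image/text encoders.
archive = torch.randn(1000, 512)
top_k = fused_retrieval(torch.randn(512), torch.randn(512), archive)
print(top_k)
```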
- Knowledge Mining with Scene Text for Fine-Grained Recognition [53.74297368412834]
We propose an end-to-end trainable network that mines the implicit contextual knowledge behind scene text images.
We employ KnowBert to retrieve relevant knowledge for semantic representation and combine it with image features for fine-grained classification.
Our method outperforms the state of the art by 3.72% and 5.39% mAP on the two evaluation benchmarks, respectively; a sketch of the feature-fusion step follows this entry.
arXiv Detail & Related papers (2022-03-27T05:54:00Z)
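As a minimal illustration of combining retrieved knowledge with image features for classification, here is a sketch in which a random vector stands in for the KnowBert-retrieved knowledge embedding; the dimensions and the linear head are assumptions, not the paper's architecture.
```python
# Fuse an image feature with a retrieved knowledge embedding by concatenation,
# then classify. The knowledge retriever is replaced by a placeholder tensor.
import torch
import torch.nn as nn

class KnowledgeFusedClassifier(nn.Module):
    def __init__(self, img_dim=2048, know_dim=768, n_classes=200):
        super().__init__()
        self.head = nn.Linear(img_dim + know_dim, n_classes)

    def forward(self, img_feat, know_emb):
        # Concatenate visual and knowledge representations before the head.
        return self.head(torch.cat([img_feat, know_emb], dim=-1))

clf = KnowledgeFusedClassifier()
logits = clf(torch.randn(4, 2048), torch.randn(4, 768))  # batch of 4
print(logits.shape)  # torch.Size([4, 200])
```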
- Open-Domain, Content-based, Multi-modal Fact-checking of Out-of-Context Images via Online Resources [70.68526820807402]
A real image is re-purposed to support other narratives by misrepresenting its context and/or elements.
Our goal is an inspectable method that automates this time-consuming and reasoning-intensive process by fact-checking the image-context pairing.
Our work offers the first step and benchmark for open-domain, content-based, multi-modal fact-checking.
arXiv Detail & Related papers (2021-11-30T19:36:20Z)
- NewsCLIPpings: Automatic Generation of Out-of-Context Multimodal Media [93.51739200834837]
We propose a dataset where both image and text are unmanipulated but mismatched.
We introduce several strategies for automatically retrieving images that are plausible for a given caption yet out of context; one such strategy is sketched after this entry.
Our large-scale automatically generated NewsCLIPpings dataset requires models to jointly analyze both modalities.
arXiv Detail & Related papers (2021-04-13T01:53:26Z)
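One plausible realization of such a retrieval strategy: pair each caption with the most similar image from a different news item, so the match is semantically convincing but contextually wrong. The encoder and the random placeholder embeddings are assumptions; this is a sketch, not the paper's exact procedure.
```python
# For each caption, retrieve the nearest image embedding while excluding the
# caption's own (true) image, yielding a plausible out-of-context pair.
import torch
import torch.nn.functional as F

def hardest_mismatch(caption_emb, image_embs, true_idx):
    sims = F.normalize(image_embs, dim=-1) @ F.normalize(caption_emb, dim=-1)
    sims[true_idx] = float("-inf")  # never return the genuine pairing
    return int(sims.argmax())

captions = torch.randn(100, 512)  # placeholder caption embeddings
images = torch.randn(100, 512)    # placeholder image embeddings (aligned rows)
mismatched_pairs = [(i, hardest_mismatch(captions[i], images, i))
                    for i in range(len(captions))]
```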
- Content-Based Detection of Temporal Metadata Manipulation [91.34308819261905]
We propose an end-to-end approach to verify whether the purported time of capture of an image is consistent with its content and geographic location.
The central idea is supervised consistency verification: we predict the probability that the image content, capture time, and geographical location are consistent (a toy verifier is sketched after this entry).
Our approach improves upon previous work on a large benchmark dataset, increasing the classification accuracy from 59.03% to 81.07%.
arXiv Detail & Related papers (2021-03-08T13:16:19Z)
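A toy version of such a consistency verifier: a small classifier over concatenated image, time, and location features that outputs the probability the triple is consistent. Feature dimensions and the MLP architecture are illustrative assumptions.
```python
# Predict P(image content, capture time, location are mutually consistent).
import torch
import torch.nn as nn

class ConsistencyVerifier(nn.Module):
    def __init__(self, img_dim=512, time_dim=8, loc_dim=2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(img_dim + time_dim + loc_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, img_feat, time_feat, loc_feat):
        x = torch.cat([img_feat, time_feat, loc_feat], dim=-1)
        return torch.sigmoid(self.mlp(x))  # consistency probability

model = ConsistencyVerifier()
p = model(torch.randn(4, 512), torch.randn(4, 8), torch.randn(4, 2))
print(p.squeeze(-1))  # one probability per example
```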
- Catching Out-of-Context Misinformation with Self-supervised Learning [2.435006380732194]
We propose a new method that automatically detects out-of-context image and text pairs.
Our core idea is a self-supervised training strategy where we only need images with matching captions from different sources.
Our method achieves 82% out-of-context detection accuracy; a toy version of the training signal is sketched after this entry.
arXiv Detail & Related papers (2021-01-15T19:00:42Z)
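One way to render that self-supervised signal: two genuine captions for the same image, drawn from different sources, should both align with the image better than an unrelated caption does. The margin ranking loss and placeholder embeddings below are illustrative assumptions, not the paper's exact objective.
```python
# Margin ranking loss: genuine captions (from two sources) beat a random one.
import torch
import torch.nn.functional as F

def alignment(img_emb, cap_emb):
    return F.cosine_similarity(img_emb, cap_emb, dim=-1)

def ooc_training_loss(img, cap_a, cap_b, cap_random, margin=0.2):
    # Both genuine captions are positives; take the weaker of the two.
    pos = torch.minimum(alignment(img, cap_a), alignment(img, cap_b))
    neg = alignment(img, cap_random)
    return F.relu(margin + neg - pos).mean()

batch = [torch.randn(8, 512) for _ in range(4)]  # placeholder embeddings
loss = ooc_training_loss(*batch)
print(loss.item())
```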
- On the General Value of Evidence, and Bilingual Scene-Text Visual Question Answering [120.64104995052189]
We present a dataset that takes a step towards addressing this problem: its questions are expressed in two languages.
Measuring reasoning directly encourages generalization by penalizing answers that are coincidentally correct.
The dataset reflects the scene-text version of the VQA problem, and the reasoning evaluation can be seen as a text-based version of a referring expression challenge.
arXiv Detail & Related papers (2020-02-24T13:02:31Z)
- Learning Transformation-Aware Embeddings for Image Forensics [15.484408315588569]
Image Provenance Analysis aims at discovering relationships among different manipulated image versions that share content.
One of the main sub-problems for provenance analysis that has not yet been addressed directly is the edit ordering of images that share full content or are near-duplicates.
This paper introduces a novel deep learning-based approach that provides a plausible ordering for images generated from a single source image through successive transformations; a toy ordering heuristic is sketched after this entry.
arXiv Detail & Related papers (2020-01-13T22:01:24Z)
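As a toy heuristic in that spirit, one might order derived images by their distance from the original in a transformation-aware embedding space, under the illustrative assumption that accumulated edits increase that distance; the paper's learned approach is more involved.
```python
# Order derived images by embedding distance to a reference (original) image,
# as a rough proxy for how many transformations each has accumulated.
import torch

def order_by_edit_distance(ref_emb, derived_embs):
    dists = torch.linalg.norm(derived_embs - ref_emb, dim=-1)
    return torch.argsort(dists)  # indices from "closest to original" outward

ordering = order_by_edit_distance(torch.randn(512), torch.randn(6, 512))
print(ordering)
```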
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.