Detecting and Grounding Multi-Modal Media Manipulation and Beyond
- URL: http://arxiv.org/abs/2309.14203v1
- Date: Mon, 25 Sep 2023 15:05:46 GMT
- Title: Detecting and Grounding Multi-Modal Media Manipulation and Beyond
- Authors: Rui Shao, Tianxing Wu, Jianlong Wu, Liqiang Nie, Ziwei Liu
- Abstract summary: We highlight a new research problem for multi-modal fake media, namely Detecting and Grounding Multi-Modal Media Manipulation (DGM^4).
DGM4 aims to not only detect the authenticity of multi-modal media, but also ground the manipulated content.
We propose a novel HierArchical Multi-modal Manipulation rEasoning tRansformer (HAMMER) to fully capture the fine-grained interaction between different modalities.
- Score: 93.08116982163804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Misinformation has become a pressing issue. Fake media, in both visual and
textual forms, is widespread on the web. While various deepfake detection and
textual fake news detection methods have been proposed, they are designed only
for single-modality forgery as a binary classification task and cannot analyze
or reason about subtle forgery traces across different modalities. In this paper, we
highlight a new research problem for multi-modal fake media, namely Detecting
and Grounding Multi-Modal Media Manipulation (DGM^4). DGM^4 aims to not only
detect the authenticity of multi-modal media, but also ground the manipulated
content, which requires deeper reasoning of multi-modal media manipulation. To
support a large-scale investigation, we construct the first DGM^4 dataset,
where image-text pairs are manipulated by various approaches, with rich
annotation of diverse manipulations. Moreover, we propose a novel HierArchical
Multi-modal Manipulation rEasoning tRansformer (HAMMER) to fully capture the
fine-grained interaction between different modalities. HAMMER performs 1)
manipulation-aware contrastive learning between two uni-modal encoders as
shallow manipulation reasoning, and 2) modality-aware cross-attention by a
multi-modal aggregator as deep manipulation reasoning. Dedicated manipulation
detection and grounding heads are integrated from shallow to deep levels and
operate on the fused multi-modal information. To exploit more fine-grained
contrastive learning for cross-modal semantic alignment, we further integrate
a Manipulation-Aware Contrastive Loss with Local View and construct a more
advanced model, HAMMER++. Finally, we build an extensive benchmark and set up
rigorous evaluation metrics for this new research problem. Comprehensive
experiments demonstrate the superiority of HAMMER and HAMMER++.
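To make the two-level design concrete, below is a minimal PyTorch sketch of the two reasoning stages the abstract describes: an InfoNCE-style manipulation-aware contrastive loss over the uni-modal encoder outputs (shallow reasoning) and a cross-attention aggregator with a detection head (deep reasoning). Everything here is an illustrative assumption inferred from the abstract; module names, dimensions, and the down-weighting of manipulated pairs are not taken from the authors' released code.

```python
# Illustrative sketch only: one plausible reading of HAMMER's two reasoning
# levels, not the authors' implementation. Requires PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


def manipulation_aware_contrastive_loss(img_emb, txt_emb, fake_mask, temperature=0.07):
    """Shallow reasoning (assumed): InfoNCE over pooled image/text embeddings,
    keeping only pristine pairs as positives so manipulated pairs are pushed
    apart rather than pulled together."""
    img = F.normalize(img_emb, dim=-1)            # (B, D)
    txt = F.normalize(txt_emb, dim=-1)            # (B, D)
    logits = img @ txt.t() / temperature          # (B, B) pairwise similarities
    targets = torch.arange(len(img), device=img.device)
    loss_i2t = F.cross_entropy(logits, targets, reduction="none")
    loss_t2i = F.cross_entropy(logits.t(), targets, reduction="none")
    weight = (~fake_mask).float()                 # 0 for manipulated pairs
    return ((loss_i2t + loss_t2i) * weight).sum() / weight.sum().clamp(min=1)


class MultiModalAggregator(nn.Module):
    """Deep reasoning (assumed): one modality-aware cross-attention block in
    which text tokens query image patch tokens, followed by a binary
    real/fake detection head on the pooled token."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.detect_head = nn.Linear(dim, 2)

    def forward(self, txt_tokens, img_tokens):
        fused, _ = self.cross_attn(txt_tokens, img_tokens, img_tokens)
        fused = self.norm(fused + txt_tokens)     # residual + LayerNorm
        logits = self.detect_head(fused[:, 0])    # [CLS]-style pooled token
        return logits, fused                      # fused feeds grounding heads


# Smoke test with random tensors (batch 4, 16 text tokens, 49 patches, dim 256)
B, T, P, D = 4, 16, 49, 256
txt, img = torch.randn(B, T, D), torch.randn(B, P, D)
fake = torch.tensor([False, True, False, False])
loss = manipulation_aware_contrastive_loss(img.mean(1), txt.mean(1), fake)
logits, fused = MultiModalAggregator()(txt, img)
```

On this reading, the grounding heads would consume `fused` to localize manipulated image regions and text tokens, and HAMMER++'s Manipulation-Aware Contrastive Loss with Local View would apply the same contrastive objective at patch/token granularity instead of only on pooled embeddings.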
Related papers
- Detecting Misinformation in Multimedia Content through Cross-Modal Entity Consistency: A Dual Learning Approach [10.376378437321437]
We propose a Multimedia Misinformation Detection framework for detecting misinformation from video content by leveraging cross-modal entity consistency.
Our results demonstrate that MultiMD outperforms state-of-the-art baseline models.
arXiv Detail & Related papers (2024-08-16T16:14:36Z)
- Harmfully Manipulated Images Matter in Multimodal Misinformation Detection [22.236455110413264]
Multimodal Misinformation Detection (MMD) has attracted growing attention from the academic and industrial communities.
We propose a novel method, Harmfully Manipulated Images Matter in MMD (HAMI-M3D).
Extensive experiments across three benchmark datasets demonstrate that HAMI-M3D consistently improves the performance of MMD baselines.
arXiv Detail & Related papers (2024-07-27T07:16:07Z)
- Multi-modal Stance Detection: New Datasets and Model [56.97470987479277]
We study multi-modal stance detection for tweets consisting of texts and images.
We propose a simple yet effective Targeted Multi-modal Prompt Tuning framework (TMPT).
TMPT achieves state-of-the-art performance in multi-modal stance detection.
arXiv Detail & Related papers (2024-02-22T05:24:19Z)
- Exploiting Modality-Specific Features For Multi-Modal Manipulation Detection And Grounding [54.49214267905562]
We construct a transformer-based framework for multi-modal manipulation detection and grounding tasks.
Our framework simultaneously explores modality-specific features while preserving the capability for multi-modal alignment.
We propose an implicit manipulation query (IMQ) that adaptively aggregates global contextual cues within each modality.
arXiv Detail & Related papers (2023-09-22T06:55:41Z)
- Exploring Multi-Modal Contextual Knowledge for Open-Vocabulary Object Detection [72.36017150922504]
We propose a multi-modal contextual knowledge distillation framework, MMC-Det, to transfer the learned contextual knowledge from a teacher fusion transformer to a student detector.
The diverse multi-modal masked language modeling is realized by an object divergence constraint upon traditional multi-modal masked language modeling (MLM).
arXiv Detail & Related papers (2023-08-30T08:33:13Z)
- Inconsistent Matters: A Knowledge-guided Dual-consistency Network for Multi-modal Rumor Detection [53.48346699224921]
A novel Knowledge-guided Dual-consistency Network is proposed to detect rumors with multimedia content.
It uses two consistency detection networks to capture inconsistency at the cross-modal level and the content-knowledge level simultaneously.
It also enables robust multi-modal representation learning under different missing visual modality conditions.
arXiv Detail & Related papers (2023-06-03T15:32:20Z)
- Detecting and Grounding Multi-Modal Media Manipulation [32.34908534582532]
We highlight a new research problem for multi-modal fake media, namely Detecting and Grounding Multi-Modal Media Manipulation (DGM^4).
DGM4 aims to not only detect the authenticity of multi-modal media, but also ground the manipulated content.
We propose a novel HierArchical Multi-modal Manipulation rEasoning tRansformer (HAMMER) to fully capture the fine-grained interaction between different modalities.
arXiv Detail & Related papers (2023-04-05T16:20:40Z)
- Multi-modal Fake News Detection on Social Media via Multi-grained Information Fusion [21.042970740577648]
We present a Multi-grained Multi-modal Fusion Network (MMFN) for fake news detection.
Inspired by the multi-grained process of human assessment of news authenticity, we respectively employ two Transformer-based pre-trained models to encode token-level features from text and images.
The multi-modal module fuses fine-grained features, taking into account coarse-grained features encoded by the CLIP encoder.
arXiv Detail & Related papers (2023-04-03T09:13:59Z)
- Cross-modal Contrastive Learning for Multimodal Fake News Detection [10.760000041969139]
COOLANT is a cross-modal contrastive learning framework for multimodal fake news detection.
A cross-modal fusion module is developed to learn the cross-modality correlations.
An attention guidance module is implemented to help effectively and interpretably aggregate the aligned unimodal representations.
arXiv Detail & Related papers (2023-02-25T10:12:34Z)
- M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection [74.19291916812921]
Forged images generated by Deepfake techniques pose a serious threat to the trustworthiness of digital information.
In this paper, we aim to capture the subtle manipulation artifacts at different scales for Deepfake detection.
We introduce a high-quality Deepfake dataset, SR-DF, which consists of 4,000 DeepFake videos generated by state-of-the-art face swapping and facial reenactment methods.
arXiv Detail & Related papers (2021-04-20T05:43:44Z)