Similarity-Aware Multimodal Prompt Learning for Fake News Detection
- URL: http://arxiv.org/abs/2304.04187v3
- Date: Fri, 16 Jun 2023 12:05:57 GMT
- Title: Similarity-Aware Multimodal Prompt Learning for Fake News Detection
- Authors: Ye Jiang, Xiaomin Yu, Yimin Wang, Xiaoman Xu, Xingyi Song and Diana
Maynard
- Abstract summary: Multimodal fake news detection has outperformed text-only methods.
This paper proposes a Similarity-Aware Multimodal Prompt Learning (SAMPLE) framework.
In evaluation, SAMPLE surpasses previous works in F1 score and accuracy on two benchmark multimodal datasets.
- Score: 0.12396474483677114
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The standard paradigm for fake news detection mainly utilizes text
information to model the truthfulness of news. However, the discourse of online
fake news is typically subtle and it requires expert knowledge to use textual
information to debunk fake news. Recently, studies focusing on multimodal fake
news detection have outperformed text-only methods. Approaches that utilize
pre-trained models to extract unimodal features, or fine-tune pre-trained
models directly, have become a new paradigm for detecting fake news.
However, this paradigm either requires a large number of training instances or
updates the entire set of pre-trained model parameters, making real-world fake
news detection impractical. Furthermore, traditional multimodal methods fuse
cross-modal features directly, without considering that uncorrelated semantic
representations might inject noise into the multimodal features. This
paper proposes a Similarity-Aware Multimodal Prompt Learning (SAMPLE)
framework. First, we incorporate prompt learning into multimodal fake news
detection. Prompt learning, which tunes only the prompts while keeping the
language model frozen, can significantly reduce memory usage and achieve
performance comparable to fine-tuning. We analyse three prompt templates with
a soft verbalizer to detect fake news. In addition, we introduce the
similarity-aware fusing method to adaptively fuse the intensity of multimodal
representation and mitigate the noise injection via uncorrelated cross-modal
features. In evaluation, SAMPLE surpasses previous works in F1 score and
accuracy on two benchmark multimodal datasets, demonstrating the
effectiveness of the proposed method in detecting fake news. In addition,
SAMPLE is superior to other approaches in both few-shot and data-rich
settings.
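As a rough illustration of the similarity-aware fusing idea described in the abstract, the sketch below gates the visual contribution by the text-image cosine similarity, so that an uncorrelated image-text pair contributes less to the fused representation. This is a minimal NumPy sketch under stated assumptions: the function names, the mapping of similarity from [-1, 1] to a [0, 1] weight, and the concatenation-based fusion are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def cosine_similarity(t: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(t, v) / (np.linalg.norm(t) * np.linalg.norm(v)))

def similarity_aware_fuse(text_emb: np.ndarray, image_emb: np.ndarray):
    """Fuse text and image embeddings, scaling the image contribution
    by how semantically correlated the two modalities are.

    Uncorrelated (low-similarity) image features are down-weighted,
    mitigating the noise they would otherwise inject."""
    sim = cosine_similarity(text_emb, image_emb)
    weight = (sim + 1.0) / 2.0  # map similarity [-1, 1] -> gate [0, 1]
    fused = np.concatenate([text_emb, weight * image_emb])
    return fused, weight
```

For a perfectly aligned pair the gate is 1 and the image features pass through untouched; for an anti-correlated pair the gate is 0 and the visual half of the fused vector vanishes, which is the noise-suppression behaviour the abstract motivates.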
Related papers
- Cross-Modal Augmentation for Few-Shot Multimodal Fake News Detection [0.21990652930491858]
Few-shot learning is critical for detecting fake news in its early stages.
This paper presents a multimodal fake news detection model which augments multimodal features using unimodal features.
The proposed CMA achieves SOTA results over three benchmark datasets.
arXiv Detail & Related papers (2024-07-16T09:32:11Z) - Fake News Detection and Manipulation Reasoning via Large Vision-Language Models [38.457805116130004]
This paper introduces a benchmark for fake news detection and manipulation reasoning, referred to as Human-centric and Fact-related Fake News (HFFN)
The benchmark highlights human-centricity and high factual relevance, supported by detailed manual annotations.
A Multi-modal news Detection and Reasoning langUage Model (M-DRUM) is presented, which not only judges the authenticity of multi-modal news but also provides analytical reasoning about potential manipulations.
arXiv Detail & Related papers (2024-07-02T08:16:43Z) - FineFake: A Knowledge-Enriched Dataset for Fine-Grained Multi-Domain Fake News Detection [54.37159298632628]
FineFake is a multi-domain knowledge-enhanced benchmark for fake news detection.
FineFake encompasses 16,909 data samples spanning six semantic topics and eight platforms.
The entire FineFake project is publicly accessible as an open-source repository.
arXiv Detail & Related papers (2024-03-30T14:39:09Z) - MSynFD: Multi-hop Syntax aware Fake News Detection [27.046529059563863]
Social media platforms have fueled the rapid dissemination of fake news, posing threats to our real-life society.
Existing methods use multimodal data or contextual information to enhance the detection of fake news.
We propose a novel multi-hop syntax-aware fake news detection (MSynFD) method, which incorporates complementary syntax information to handle subtle twists in fake news.
arXiv Detail & Related papers (2024-02-18T05:40:33Z) - Prompt-and-Align: Prompt-Based Social Alignment for Few-Shot Fake News
Detection [50.07850264495737]
"Prompt-and-Align" (P&A) is a novel prompt-based paradigm for few-shot fake news detection.
We show that P&A sets a new state of the art for few-shot fake news detection, outperforming prior methods by significant margins.
arXiv Detail & Related papers (2023-09-28T13:19:43Z) - Detecting and Grounding Multi-Modal Media Manipulation and Beyond [93.08116982163804]
We highlight a new research problem for multi-modal fake media, namely Detecting and Grounding Multi-Modal Media Manipulation (DGM4)
DGM4 aims not only to detect the authenticity of multi-modal media, but also to ground the manipulated content.
We propose a novel HierArchical Multi-modal Manipulation rEasoning tRansformer (HAMMER) to fully capture the fine-grained interaction between different modalities.
arXiv Detail & Related papers (2023-09-25T15:05:46Z) - Multiverse: Multilingual Evidence for Fake News Detection [71.51905606492376]
Multiverse is a new feature based on multilingual evidence that can be used for fake news detection.
The hypothesis that cross-lingual evidence can serve as a feature for fake news detection is confirmed.
arXiv Detail & Related papers (2022-11-25T18:24:17Z) - Towards Fast Adaptation of Pretrained Contrastive Models for
Multi-channel Video-Language Retrieval [70.30052749168013]
Multi-channel video-language retrieval requires models to understand information from different channels.
Contrastive multimodal models are shown to be highly effective at aligning entities in images/videos and text.
There is no clear way to quickly adapt these two lines of work to multi-channel video-language retrieval with limited data and resources.
arXiv Detail & Related papers (2022-06-05T01:43:52Z) - Multi-Modal Few-Shot Object Detection with Meta-Learning-Based
Cross-Modal Prompting [77.69172089359606]
We study multi-modal few-shot object detection (FSOD) in this paper, using both few-shot visual examples and class semantic information for detection.
Our approach is motivated by the high-level conceptual similarity of (metric-based) meta-learning and prompt-based learning.
We comprehensively evaluate the proposed multi-modal FSOD models on multiple few-shot object detection benchmarks, achieving promising results.
arXiv Detail & Related papers (2022-04-16T16:45:06Z) - Multimodal Fusion with BERT and Attention Mechanism for Fake News
Detection [0.0]
We present a novel method for detecting fake news by fusing multimodal features derived from textual and visual data.
Experimental results showed that our approach outperforms the current state-of-the-art method on a public Twitter dataset by 3.1% in accuracy.
arXiv Detail & Related papers (2021-04-23T08:47:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.