Multimodal Emergent Fake News Detection via Meta Neural Process Networks
- URL: http://arxiv.org/abs/2106.13711v1
- Date: Tue, 22 Jun 2021 21:21:29 GMT
- Title: Multimodal Emergent Fake News Detection via Meta Neural Process Networks
- Authors: Yaqing Wang, Fenglong Ma, Haoyu Wang, Kishlay Jha and Jing Gao
- Abstract summary: We propose an end-to-end fake news detection framework named MetaFEND.
Specifically, the proposed model integrates meta-learning and neural process methods.
Extensive experiments are conducted on multimedia datasets collected from Twitter and Weibo.
- Score: 36.52739834391597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fake news travels at unprecedented speeds, reaches global audiences and puts
users and communities at great risk via social media platforms. Deep learning
based models show good performance when trained on large amounts of labeled
data on events of interest, whereas the performance of models tends to degrade
on other events due to domain shift. Therefore, significant challenges are
posed for existing detection approaches to detect fake news on emergent events,
where large-scale labeled datasets are difficult to obtain. Moreover, incorporating
knowledge from newly emergent events requires building a new model from scratch or
continually fine-tuning the existing one, which can be challenging, expensive, and
unrealistic in real-world settings. To address these
challenges, we propose an end-to-end fake news detection framework named
MetaFEND, which is able to learn quickly to detect fake news on emergent events
with only a few verified posts. Specifically, the proposed model integrates
meta-learning and neural process methods to enjoy the benefits of both approaches.
In particular, a label embedding module and a hard attention mechanism are proposed
to enhance detection effectiveness by handling categorical label information and
trimming irrelevant posts. Extensive experiments are conducted
on multimedia datasets collected from Twitter and Weibo. The experimental
results show that the proposed MetaFEND model can effectively detect fake news on
never-before-seen events and outperforms state-of-the-art methods.
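To make the event-adaptation idea concrete, the sketch below shows how a detector might classify a query post by hard-attending over a handful of labeled support posts from the same emergent event and injecting their label embeddings, in the spirit of the description above. This is a minimal PyTorch-style illustration; the module names, feature dimensions, and the dot-product similarity used for hard attention are assumptions for exposition, not the authors' exact MetaFEND implementation.

```python
# Minimal sketch (assumed design, not the authors' code): hard attention keeps
# only the support post most relevant to each query post, and a label embedding
# injects the categorical (real/fake) information into the context.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EventAdaptiveDetector(nn.Module):
    def __init__(self, feat_dim=128, label_dim=16, num_classes=2):
        super().__init__()
        # Embeds the categorical label of each support post (hypothetical module).
        self.label_embed = nn.Embedding(num_classes, label_dim)
        # Classifier over [query feature ; selected support feature ; its label embedding].
        self.classifier = nn.Linear(feat_dim * 2 + label_dim, num_classes)

    def forward(self, query_feat, support_feats, support_labels):
        # query_feat:     (B, D) multimodal features of query posts
        # support_feats:  (K, D) features of the few verified posts for the event
        # support_labels: (K,)   0 = real, 1 = fake
        scores = query_feat @ support_feats.t()               # (B, K) similarity scores
        idx = scores.argmax(dim=1)                            # hard attention: keep one support post
        chosen_feat = support_feats[idx]                      # (B, D)
        chosen_label = self.label_embed(support_labels[idx])  # (B, label_dim)
        context = torch.cat([query_feat, chosen_feat, chosen_label], dim=1)
        return self.classifier(context)                       # (B, num_classes) logits


# Toy usage: 5 verified support posts for an emergent event, 3 query posts.
model = EventAdaptiveDetector()
support = torch.randn(5, 128)
labels = torch.tensor([0, 1, 1, 0, 1])
queries = torch.randn(3, 128)
logits = model(queries, support, labels)
loss = F.cross_entropy(logits, torch.tensor([1, 0, 1]))
loss.backward()
```

The hard argmax keeps only the single most relevant support post, mirroring the "trimming irrelevant posts" role described in the abstract; a soft-attention variant would replace the argmax with a softmax-weighted sum over the support set.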
Related papers
- Improving Generalization for Multimodal Fake News Detection [8.595270610973586]
State-of-the-art approaches are usually trained on datasets of smaller size or with a limited set of specific topics.
We propose three models that adopt and fine-tune state-of-the-art multimodal transformers for multimodal fake news detection.
arXiv Detail & Related papers (2023-05-29T20:32:22Z)
- Unsupervised Domain-agnostic Fake News Detection using Multi-modal Weak Signals [19.22829945777267]
This work proposes an effective framework for unsupervised fake news detection, which first embeds the knowledge available in four modalities in news records.
Also, we propose a novel technique to construct news datasets minimizing the latent biases in existing news datasets.
We trained the proposed unsupervised framework using LUND-COVID to exploit the potential of large datasets.
arXiv Detail & Related papers (2023-05-18T23:49:31Z)
- No Place to Hide: Dual Deep Interaction Channel Network for Fake News Detection based on Data Augmentation [16.40196904371682]
We propose a novel framework for fake news detection from the perspectives of semantics, emotion, and data augmentation.
A dual deep interaction channel network of semantic and emotion is designed to obtain a more comprehensive and fine-grained news representation.
Experiments show that the proposed approach outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2023-03-31T13:33:53Z)
- Robust Event Classification Using Imperfect Real-world PMU Data [58.26737360525643]
We study robust event classification using imperfect real-world phasor measurement unit (PMU) data.
We develop a novel machine learning framework for training robust event classifiers.
arXiv Detail & Related papers (2021-10-19T17:41:43Z)
- Hidden Biases in Unreliable News Detection Datasets [60.71991809782698]
We show that selection bias during data collection leads to undesired artifacts in the datasets.
We observed a significant drop (>10%) in accuracy for all models tested in a clean split with no train/test source overlap.
We suggest future dataset creation include a simple model as a difficulty/bias probe and future model development use a clean non-overlapping site and date split.
arXiv Detail & Related papers (2021-04-20T17:16:41Z)
- Event-Related Bias Removal for Real-time Disaster Events [67.2965372987723]
Social media has become an important tool to share information about crisis events such as natural disasters and mass attacks.
Detecting actionable posts that contain useful information requires rapid analysis of huge volumes of data in real time.
We train an adversarial neural model to remove latent event-specific biases and improve the performance on tweet importance classification.
arXiv Detail & Related papers (2020-11-02T02:03:07Z)
- Connecting the Dots Between Fact Verification and Fake News Detection [21.564628184287173]
We propose a simple yet effective approach to connect the dots between fact verification and fake news detection.
Our approach makes use of the recent success of fact verification models and enables zero-shot fake news detection.
arXiv Detail & Related papers (2020-10-11T09:28:52Z)
- Extensively Matching for Few-shot Learning Event Detection [66.31312496170139]
Event detection models under supervised learning settings fail to transfer to new event types.
Few-shot learning has not been explored in event detection.
We propose two novel loss factors that match examples in the support set to provide more training signals to the model.
arXiv Detail & Related papers (2020-06-17T18:30:30Z)
- Leveraging Multi-Source Weak Social Supervision for Early Detection of Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which can cause confusion and chaos unless detected early and mitigated.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
arXiv Detail & Related papers (2020-04-03T18:26:33Z)
- Weak Supervision for Fake News Detection via Reinforcement Learning [34.448503443582396]
We propose a weakly supervised fake news detection framework named WeFEND.
The proposed framework consists of three main components: the annotator, the reinforced selector and the fake news detector.
We tested the proposed framework on a large collection of news articles published via WeChat official accounts and associated user reports.
arXiv Detail & Related papers (2019-12-28T21:20:25Z)