MiDAS: Multi-integrated Domain Adaptive Supervision for Fake News
Detection
- URL: http://arxiv.org/abs/2205.09817v1
- Date: Thu, 19 May 2022 19:36:08 GMT
- Title: MiDAS: Multi-integrated Domain Adaptive Supervision for Fake News
Detection
- Authors: Abhijit Suprem and Calton Pu
- Abstract summary: We propose MiDAS, a multi-domain adaptive approach for fake news detection.
MiDAS ranks relevancy of existing models to new samples.
We evaluate MiDAS on generalization to drifted data with 9 fake news datasets.
- Score: 3.210653757360955
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: COVID-19 related misinformation and fake news, coined an 'infodemic', has
dramatically increased over the past few years. This misinformation exhibits
concept drift, where the distribution of fake news changes over time, reducing
the effectiveness of previously trained models for fake news detection. Given a set
of fake news models trained on multiple domains, we propose an adaptive
decision module to select the best-fit model for a new sample. We propose
MiDAS, a multi-domain adaptive approach for fake news detection that ranks
relevancy of existing models to new samples. MiDAS contains 2 components: a
domain-invariant encoder, and an adaptive model selector. MiDAS integrates
multiple pre-trained and fine-tuned models with their training data to create a
domain-invariant representation. Then, MiDAS uses local Lipschitz smoothness of
the invariant embedding space to estimate each model's relevance to a new
sample. Higher ranked models provide predictions, and lower ranked models
abstain. We evaluate MiDAS on generalization to drifted data with 9 fake news
datasets, each obtained from different domains and modalities. MiDAS achieves
new state-of-the-art performance on multi-domain adaptation for
out-of-distribution fake news classification.
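The model-selection step described above can be sketched in code. The sketch below is a minimal illustration, not the paper's implementation: the embedding function, the per-model training embeddings, and the neighbor-based Lipschitz estimate are all hypothetical stand-ins for MiDAS's domain-invariant encoder and relevance ranking.

```python
import numpy as np

def local_smoothness(z_new, train_embs, train_preds, k=5):
    """Estimate a local Lipschitz ratio of a model's predictions near z_new:
    the max prediction change per unit embedding distance among the k
    nearest training points (lower = smoother = more relevant)."""
    d = np.linalg.norm(train_embs - z_new, axis=1)
    idx = np.argsort(d)[:k]
    ratios = []
    for i, a in enumerate(idx):
        for b in idx[i + 1:]:
            dz = np.linalg.norm(train_embs[a] - train_embs[b])
            if dz > 0:
                ratios.append(abs(train_preds[a] - train_preds[b]) / dz)
    return max(ratios) if ratios else 0.0

def rank_models(z_new, models, top_k=2):
    """Rank pre-trained models by local smoothness of the shared embedding
    space around z_new; the top_k models predict, the rest abstain."""
    scores = [(name, local_smoothness(z_new, embs, preds))
              for name, (embs, preds) in models.items()]
    scores.sort(key=lambda s: s[1])  # smoothest (lowest ratio) first
    return [name for name, _ in scores[:top_k]]
```

A model whose predictions vary sharply around the new sample's embedding is judged out-of-domain for it and abstains; a model with locally smooth predictions there is judged relevant.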
Related papers
- MAPX: An explainable model-agnostic framework for the detection of false information on social media networks [1.5196326555431678]
We introduce a novel model-agnostic framework, called MAPX, which allows evidence based aggregation of predictions.
We perform extensive experiments on benchmarked fake news datasets to demonstrate the effectiveness of MAPX.
Our empirical results show that the proposed framework consistently outperforms all state-of-the-art models evaluated.
arXiv Detail & Related papers (2024-09-13T03:45:10Z) - GM-DF: Generalized Multi-Scenario Deepfake Detection [49.072106087564144]
Existing face forgery detection methods usually follow the paradigm of training models in a single domain.
In this paper, we elaborately investigate the generalization capacity of deepfake detection models when jointly trained on multiple face forgery detection datasets.
arXiv Detail & Related papers (2024-06-28T17:42:08Z) - DFIL: Deepfake Incremental Learning by Exploiting Domain-invariant
Forgery Clues [32.045504965382015]
Current deepfake detection models can generally recognize forgery images by training on a large dataset.
The accuracy of detection models degrades significantly on images generated by new deepfake methods due to the difference in data distribution.
We present a novel incremental learning framework that improves the generalization of deepfake detection models.
arXiv Detail & Related papers (2023-09-18T07:02:26Z) - Self-Evolution Learning for Mixup: Enhance Data Augmentation on Few-Shot
Text Classification Tasks [75.42002070547267]
We propose a self-evolution learning (SE) based mixup approach for data augmentation in text classification.
We introduce a novel instance-specific label smoothing approach, which linearly interpolates the model's output and the one-hot labels of the original samples to generate new soft labels for mixup.
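The interpolation in that summary can be sketched directly. This is a minimal illustration under assumed parameterization: the smoothing weight `alpha` and the fixed mixup coefficient `lam` are placeholders, not the paper's (self-evolved) values.

```python
import numpy as np

def instance_label_smoothing(model_probs, label, num_classes, alpha=0.1):
    """Linearly interpolate the model's predicted distribution with the
    one-hot label to produce a per-instance soft label (alpha is an
    illustrative constant; the paper adapts it per instance)."""
    one_hot = np.eye(num_classes)[label]
    return (1 - alpha) * one_hot + alpha * model_probs

def mixup(x1, y1, x2, y2, lam=0.5):
    """Standard mixup on inputs and (soft) labels."""
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```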
arXiv Detail & Related papers (2023-05-22T23:43:23Z) - Cross-Domain Video Anomaly Detection without Target Domain Adaptation [38.823721272155616]
Video Anomaly Detection (VAD) works assume that at least a few task-relevant target-domain training samples are available for adaptation from the source to the target domain.
This requires laborious model tuning by the end user, who may prefer a system that works "out-of-the-box".
arXiv Detail & Related papers (2022-12-14T03:48:00Z) - Multi-domain Learning for Updating Face Anti-spoofing Models [17.506385040102213]
We present a new model for MD-FAS, which addresses the forgetting issue when learning new domain data.
First, we devise a simple yet effective module, called the spoof region estimator (SRE), to identify spoof traces in the spoof image.
Unlike prior works whose spoof-trace estimates take the form of multiple outputs or a low-resolution binary mask, SRE produces a single, detailed pixel-wise estimate in an unsupervised manner.
arXiv Detail & Related papers (2022-08-23T18:28:34Z) - Back to the Source: Diffusion-Driven Test-Time Adaptation [77.4229736436935]
Test-time adaptation harnesses test inputs to improve the accuracy of a model trained on source data when tested on shifted target data.
We instead update the target data, by projecting all test inputs toward the source domain with a generative diffusion model.
arXiv Detail & Related papers (2022-07-07T17:14:10Z) - Fake It Till You Make It: Near-Distribution Novelty Detection by
Score-Based Generative Models [54.182955830194445]
Existing models either fail or face a dramatic drop under the so-called "near-distribution" setting.
We propose to exploit a score-based generative model to produce synthetic near-distribution anomalous data.
Our method improves the near-distribution novelty detection by 6% and passes the state-of-the-art by 1% to 5% across nine novelty detection benchmarks.
arXiv Detail & Related papers (2022-05-28T02:02:53Z) - On-the-Fly Test-time Adaptation for Medical Image Segmentation [63.476899335138164]
Adapting the source model to target data distribution at test-time is an efficient solution for the data-shift problem.
We propose a new framework called Adaptive UNet where each convolutional block is equipped with an adaptive batch normalization layer.
During test-time, the model takes in just the new test image and generates a domain code to adapt the features of source model according to the test data.
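The per-input normalization that summary describes can be sketched as follows. This is a simplified stand-in, assuming statistics are computed from the single test feature map alone; the paper's Adaptive UNet additionally predicts a domain code to modulate the source model's features.

```python
import numpy as np

def adaptive_instance_norm(feat, gamma, beta, eps=1e-5):
    """Normalize one test-time feature map (H, W, C) with per-channel
    statistics computed from that input alone, then scale and shift --
    a simplified adaptive batch-normalization step."""
    mu = feat.mean(axis=(0, 1), keepdims=True)   # per-channel mean
    var = feat.var(axis=(0, 1), keepdims=True)   # per-channel variance
    return gamma * (feat - mu) / np.sqrt(var + eps) + beta
```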
arXiv Detail & Related papers (2022-03-10T18:51:29Z) - Transformer-based Language Model Fine-tuning Methods for COVID-19 Fake
News Detection [7.29381091750894]
We propose a novel transformer-based language model fine-tuning approach for COVID-19 fake news detection.
First, the token vocabulary of each individual model is expanded to cover the actual semantics of professional phrases.
Last, the predicted features extracted by the universal language model RoBERTa and the domain-specific model CT-BERT are fused by a multilayer perceptron to integrate fine-grained and high-level specific representations.
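The fusion step can be sketched as a small two-layer perceptron over concatenated features. This is an illustrative sketch, not the paper's architecture: the layer sizes and all weights below are hypothetical placeholders standing in for features from a general model (e.g. RoBERTa) and a domain-specific one (e.g. CT-BERT).

```python
import numpy as np

def fuse_features(f_general, f_domain, w1, b1, w2, b2):
    """Concatenate a general-purpose sentence feature with a
    domain-specific one and pass the result through a two-layer
    perceptron to produce class logits (weights are placeholders)."""
    h = np.concatenate([f_general, f_domain])
    h = np.maximum(w1 @ h + b1, 0.0)  # hidden layer with ReLU
    return w2 @ h + b2                # class logits
```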
arXiv Detail & Related papers (2021-01-14T09:05:42Z) - Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
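The confidence-weighted prototype update can be sketched as below. This is a simplified illustration: the confidence here is a fixed softmax over negative distances, whereas in the paper it is meta-learned so that the weights on unlabeled queries are optimal.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def refine_prototypes(protos, queries, temperature=1.0):
    """Refine class prototypes (C, D) with confidence-weighted unlabeled
    queries (Q, D), instead of a hard most-confident cutoff."""
    d = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=-1)
    conf = softmax(-d / temperature, axis=1)  # (Q, C) soft assignments
    weighted_sum = conf.T @ queries           # (C, D) confidence-weighted mass
    counts = conf.sum(axis=0)[:, None]        # (C, 1) effective counts
    return (protos + weighted_sum) / (1.0 + counts)
```

Each prototype is pulled toward the queries softly assigned to its class, with low-confidence (ambiguous) queries contributing little.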
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.