Unified Fake News Detection using Transfer Learning of Bidirectional
Encoder Representation from Transformers model
- URL: http://arxiv.org/abs/2202.01907v1
- Date: Thu, 3 Feb 2022 23:23:26 GMT
- Authors: Vijay Srinivas Tida, Dr. Sonya Hsu and Dr. Xiali Hei
- Abstract summary: This paper attempts to develop a unified model by combining publicly available datasets to detect fake news samples effectively.
Most of the prior models were designed and validated on individual datasets separately.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Automatic detection of fake news is needed for the public as
access to social media platforms has been increasing rapidly. Most of the
prior models were designed and validated on individual datasets separately,
but this lack of generalization can lead to poor performance when such models
are deployed in real-world applications, since individual datasets cover only
a limited range of subjects and sample sequence lengths. This paper attempts
to develop a unified model by combining publicly available datasets to detect
fake news samples effectively.
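The dataset-combination step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper name, the toy samples, and the label conventions are all hypothetical, and in practice the unified corpus would then be tokenized and used to fine-tune a BERT classifier.

```python
# Hypothetical sketch: merging publicly available fake-news datasets
# with differing label conventions into one unified corpus, as a
# precursor to fine-tuning a single BERT-based classifier.

def unify_datasets(datasets):
    """Combine several (samples, label_map) dataset descriptions into
    one list of (text, label) pairs under a common 0/1 labeling
    (1 = fake, 0 = real)."""
    unified = []
    for samples, label_map in datasets:
        for text, raw_label in samples:
            # Map each source's native labels onto the shared scheme.
            unified.append((text, label_map[raw_label]))
    return unified

# Two toy datasets with different label schemes.
ds_a = ([("claim one", "FAKE"), ("claim two", "REAL")],
        {"FAKE": 1, "REAL": 0})
ds_b = ([("claim three", 0), ("claim four", 1)],
        {0: 1, 1: 0})  # this source uses 0 = fake, 1 = real

corpus = unify_datasets([ds_a, ds_b])
```

Normalizing labels before concatenation matters because public fake-news datasets rarely agree on label encodings; a unified model trained on the merged corpus sees a wider range of subjects and sequence lengths than any single source.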
Related papers
- Tackling Data Heterogeneity in Federated Time Series Forecasting [61.021413959988216]
Time series forecasting plays a critical role in various real-world applications, including energy consumption prediction, disease transmission monitoring, and weather forecasting.
Most existing methods rely on a centralized training paradigm, where large amounts of data are transferred from distributed devices to a central cloud server.
We propose a novel framework, Fed-TREND, to address data heterogeneity by generating informative synthetic data as auxiliary knowledge carriers.
arXiv Detail & Related papers (2024-11-24T04:56:45Z)
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods train directly on centralized data.
The paper proposes a novel federated face forgery detection learning with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z) - PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
In response to increasing privacy concerns, we propose a Parameter-Efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z) - Few-shot Online Anomaly Detection and Segmentation [29.693357653538474]
This paper focuses on addressing the challenging yet practical few-shot online anomaly detection and segmentation (FOADS) task.
Under the FOADS framework, models are trained on a few-shot normal dataset, followed by inspection and improvement of their capabilities by leveraging unlabeled streaming data containing both normal and abnormal samples simultaneously.
In order to achieve improved performance with limited training samples, we employ multi-scale feature embedding extracted from a CNN pre-trained on ImageNet to obtain a robust representation.
arXiv Detail & Related papers (2024-03-27T02:24:00Z) - Dirichlet-based Uncertainty Quantification for Personalized Federated
Learning with Improved Posterior Networks [9.54563359677778]
This paper presents a new approach to federated learning that allows selecting a model from global and personalized ones.
It is achieved through a careful modeling of predictive uncertainties that helps to detect local and global in- and out-of-distribution data.
The comprehensive experimental evaluation on the popular real-world image datasets shows the superior performance of the model in the presence of out-of-distribution data.
arXiv Detail & Related papers (2023-12-18T14:30:05Z) - Image change detection with only a few samples [7.5780621370948635]
A major impediment to the image change detection task is the lack of large annotated datasets covering a wide variety of scenes.
We propose using simple image processing methods for generating synthetic but informative datasets.
We then design an early fusion network based on object detection that can outperform the Siamese neural network.
arXiv Detail & Related papers (2023-11-07T07:01:35Z) - Learning Defect Prediction from Unrealistic Data [57.53586547895278]
Pretrained models of code have become popular choices for code understanding and generation tasks.
Such models tend to be large and require commensurate volumes of training data.
It has become popular to train models with far larger but less realistic datasets, such as functions with artificially injected bugs.
Models trained on such data tend to only perform well on similar data, while underperforming on real world programs.
arXiv Detail & Related papers (2023-11-02T01:51:43Z) - Improving Generalization for Multimodal Fake News Detection [8.595270610973586]
State-of-the-art approaches are usually trained on datasets of smaller size or with a limited set of specific topics.
We propose three models that adopt and fine-tune state-of-the-art multimodal transformers for multimodal fake news detection.
arXiv Detail & Related papers (2023-05-29T20:32:22Z) - Debiasing Vision-Language Models via Biased Prompts [79.04467131711775]
We propose a general approach for debiasing vision-language foundation models by projecting out biased directions in the text embedding.
We show that debiasing only the text embedding with a calibrated projection matrix suffices to yield robust classifiers and fair generative models.
arXiv Detail & Related papers (2023-01-31T20:09:33Z) - MiDAS: Multi-integrated Domain Adaptive Supervision for Fake News
Detection [3.210653757360955]
We propose MiDAS, a multi-domain adaptive approach for fake news detection.
MiDAS ranks relevancy of existing models to new samples.
We evaluate MiDAS on generalization to drifted data with 9 fake news datasets.
arXiv Detail & Related papers (2022-05-19T19:36:08Z)
- Hidden Biases in Unreliable News Detection Datasets [60.71991809782698]
We show that selection bias during data collection leads to undesired artifacts in the datasets.
We observed a significant drop (>10%) in accuracy for all models tested in a clean split with no train/test source overlap.
We suggest future dataset creation include a simple model as a difficulty/bias probe and future model development use a clean non-overlapping site and date split.
arXiv Detail & Related papers (2021-04-20T17:16:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.