Integrating Human-in-the-loop into Swarm Learning for Decentralized Fake
News Detection
- URL: http://arxiv.org/abs/2201.02048v1
- Date: Tue, 4 Jan 2022 01:24:20 GMT
- Title: Integrating Human-in-the-loop into Swarm Learning for Decentralized Fake
News Detection
- Authors: Xishuang Dong and Lijun Qian
- Abstract summary: This paper proposes a novel decentralized method, Human-in-the-loop Based Swarm Learning (HBSL), to integrate user feedback into the loop of learning and inference for recognizing fake news without violating user privacy in a decentralized manner.
Experimental results demonstrate that the proposed method outperforms the state-of-the-art decentralized method at detecting fake news on a benchmark dataset.
- Score: 4.974890682815778
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social media has become an effective platform to generate and spread fake
news that can mislead people and even distort public opinion. Centralized
methods for fake news detection, however, cannot effectively protect user
privacy during the process of centralized data collection for training models.
Moreover, they cannot fully involve user feedback in the loop of learning
detection models to further enhance fake news detection. To overcome these
challenges, this paper proposes a novel decentralized method, Human-in-the-loop
Based Swarm Learning (HBSL), to integrate user feedback into the loop of
learning and inference for recognizing fake news without violating user privacy
in a decentralized manner. It consists of distributed nodes that are able to
independently learn and detect fake news on local data. Furthermore, detection
models trained on these nodes can be enhanced through decentralized model
merging. Experimental results demonstrate that the proposed method outperforms
the state-of-the-art decentralized method with regard to detecting fake news on a
benchmark dataset.
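The decentralized model merging described in the abstract can be sketched as follows. The abstract does not specify the merging rule, so this minimal sketch assumes simple parameter averaging with neighboring nodes; the function name and toy weights are hypothetical, not from the paper.

```python
import numpy as np

def merge_with_neighbors(local_params, neighbor_params):
    """Average a node's parameters with those received from its neighbors.

    This is an assumed merging rule for illustration only; HBSL's actual
    merging procedure is not detailed in the abstract.
    """
    stacked = np.stack([local_params] + list(neighbor_params))
    return stacked.mean(axis=0)

# Three nodes train locally, then exchange and merge their (toy) weights.
nodes = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
merged = [
    merge_with_neighbors(p, [q for j, q in enumerate(nodes) if j != i])
    for i, p in enumerate(nodes)
]
print(merged[0])  # in this fully connected toy topology, every node
                  # ends up at the network-wide average
```

Because raw training data never leaves a node, only model parameters are exchanged, which is what allows fake news detection without centralized data collection.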
Related papers
- A Semi-supervised Fake News Detection using Sentiment Encoding and LSTM with Self-Attention [0.0]
We propose a semi-supervised self-learning method in which sentiment features are obtained from state-of-the-art pretrained models.
Our learning model is trained in a semi-supervised fashion and incorporates LSTM with self-attention layers.
We benchmark our model on a dataset of 20,000 news items along with their feedback, which shows better performance in precision, recall, and F-measure compared to competitive methods in fake news detection.
arXiv Detail & Related papers (2024-07-27T20:00:10Z) - Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing data for training.
The paper proposes a novel federated face forgery detection learning framework with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z) - No Place to Hide: Dual Deep Interaction Channel Network for Fake News
Detection based on Data Augmentation [16.40196904371682]
We propose a novel framework for fake news detection from perspectives of semantic, emotion and data enhancement.
A dual deep interaction channel network of semantic and emotion is designed to obtain a more comprehensive and fine-grained news representation.
Experiments show that the proposed approach outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2023-03-31T13:33:53Z) - Addressing Bias in Face Detectors using Decentralised Data collection
with incentives [0.0]
We show how this data-centric approach can be facilitated in a decentralized manner to enable efficient data collection for algorithms.
We propose a face detection and anonymization approach using a hybrid MultiTask Cascaded CNN with FaceNet Embeddings.
arXiv Detail & Related papers (2022-10-28T09:54:40Z) - Faking Fake News for Real Fake News Detection: Propaganda-loaded
Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on PropaNews are better at detecting human-written disinformation by 3.62 - 7.69% F1 score on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z) - Learning Bias-Invariant Representation by Cross-Sample Mutual
Information Minimization [77.8735802150511]
We propose a cross-sample adversarial debiasing (CSAD) method to remove the bias information misused by the target task.
The correlation measurement plays a critical role in adversarial debiasing and is conducted by a cross-sample neural mutual information estimator.
We conduct thorough experiments on publicly available datasets to validate the advantages of the proposed method over state-of-the-art approaches.
arXiv Detail & Related papers (2021-08-11T21:17:02Z) - Multimodal Emergent Fake News Detection via Meta Neural Process Networks [36.52739834391597]
We propose an end-to-end fake news detection framework named MetaFEND.
Specifically, the proposed model integrates meta-learning and neural process methods together.
Extensive experiments are conducted on multimedia datasets collected from Twitter and Weibo.
arXiv Detail & Related papers (2021-06-22T21:21:29Z) - Machine Learning Explanations to Prevent Overtrust in Fake News
Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z) - Leveraging Multi-Source Weak Social Supervision for Early Detection of
Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which can cause confusion and chaos unless it is detected early and mitigated.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
arXiv Detail & Related papers (2020-04-03T18:26:33Z) - Byzantine-resilient Decentralized Stochastic Gradient Descent [85.15773446094576]
We present an in-depth study towards the Byzantine resilience of decentralized learning systems.
We propose UBAR, a novel algorithm to enhance decentralized learning with Byzantine Fault Tolerance.
arXiv Detail & Related papers (2020-02-20T05:11:04Z) - Weak Supervision for Fake News Detection via Reinforcement Learning [34.448503443582396]
We propose a weakly-supervised fake news detection framework, i.e., WeFEND.
The proposed framework consists of three main components: the annotator, the reinforced selector and the fake news detector.
We tested the proposed framework on a large collection of news articles published via WeChat official accounts and associated user reports.
arXiv Detail & Related papers (2019-12-28T21:20:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.