Improving Fake News Detection of Influential Domain via Domain- and
Instance-Level Transfer
- URL: http://arxiv.org/abs/2209.08902v1
- Date: Mon, 19 Sep 2022 10:21:13 GMT
- Title: Improving Fake News Detection of Influential Domain via Domain- and
Instance-Level Transfer
- Authors: Qiong Nan, Danding Wang, Yongchun Zhu, Qiang Sheng, Yuhui Shi, Juan
Cao, Jintao Li
- Abstract summary: We propose a Domain- and Instance-level Transfer Framework for Fake News Detection (DITFEND).
DITFEND can improve performance on specific target domains.
Online experiments show that it brings additional improvements over the base models in a real-world scenario.
- Score: 16.886024206337257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Both real and fake news in various domains, such as politics, health,
and entertainment, are spread via online social media every day, necessitating fake
news detection for multiple domains. Among them, fake news in specific domains
like politics and health has more serious potential negative impacts on the
real world (e.g., the infodemic led by COVID-19 misinformation). Previous
studies focus on multi-domain fake news detection by equally mining and
modeling the correlations among domains. However, these multi-domain methods
suffer from a seesaw problem: performance on some domains is often improved
at the cost of hurting performance on other domains, which can lead to
unsatisfactory performance in specific domains. To address this issue, we propose
a Domain- and Instance-level Transfer Framework for Fake News Detection
(DITFEND), which can improve performance on specific target domains. To
transfer coarse-grained domain-level knowledge, we train a general model with
data of all domains from the meta-learning perspective. To transfer
fine-grained instance-level knowledge and adapt the general model to a target
domain, we train a language model on the target domain to evaluate the
transferability of each data instance in the source domains and re-weight each
instance's contribution. Offline experiments on two datasets demonstrate the
effectiveness of DITFEND. Online experiments show that DITFEND brings
additional improvements over the base models in a real-world scenario.
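Below is a minimal PyTorch-style sketch of the two transfer stages described in the abstract, intended only as a reading aid: a Reptile-style meta-update stands in for "training a general model from the meta-learning perspective", and a placeholder transferability scorer stands in for the language model trained on the target domain. All features, dimensions, hyperparameters, and the scorer itself are assumptions, not the authors' implementation.

```python
# Hedged sketch of DITFEND's two transfer stages, not the released code.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, N_DOMAINS, N_PER_DOMAIN = 64, 3, 32

# Placeholder data: precomputed text features and binary fake/real labels per domain.
domains = [
    (torch.randn(N_PER_DOMAIN, FEAT_DIM), torch.randint(0, 2, (N_PER_DOMAIN,)))
    for _ in range(N_DOMAINS)
]
target_x, target_y = domains[0]   # assume domain 0 is the target (e.g., health)
source = domains[1:]              # the remaining domains act as sources

classifier = nn.Sequential(nn.Linear(FEAT_DIM, 32), nn.ReLU(), nn.Linear(32, 2))

# --- Stage 1: domain-level transfer ------------------------------------------
# Train a general model over all domains; a Reptile-style update is used here as
# a simple stand-in for "training from the meta-learning perspective".
meta_lr, inner_lr, inner_steps = 0.1, 1e-2, 5
for _ in range(20):
    for x, y in domains:
        fast = copy.deepcopy(classifier)
        inner_opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        for _ in range(inner_steps):            # inner adaptation on one domain
            inner_opt.zero_grad()
            F.cross_entropy(fast(x), y).backward()
            inner_opt.step()
        with torch.no_grad():                   # outer update: move toward adapted weights
            for p, q in zip(classifier.parameters(), fast.parameters()):
                p.add_(meta_lr * (q - p))

# --- Stage 2: instance-level transfer ----------------------------------------
# Re-weight each source instance by how "target-like" it is. The paper uses a
# language model trained on the target domain; the scorer below is a placeholder.
def target_transferability(x):
    # Assumption: a higher score should correspond to lower perplexity under a
    # target-domain LM; replace with a real score such as exp(-per-token NLL).
    return torch.sigmoid(x @ target_x.mean(0))

opt = torch.optim.SGD(classifier.parameters(), lr=1e-2)
for _ in range(10):
    # Target instances keep full weight; source instances are re-weighted.
    losses = [F.cross_entropy(classifier(target_x), target_y)]
    for x, y in source:
        w = target_transferability(x)                          # (N_PER_DOMAIN,)
        per_sample = F.cross_entropy(classifier(x), y, reduction="none")
        losses.append((w * per_sample).sum() / w.sum())
    opt.zero_grad()
    torch.stack(losses).mean().backward()
    opt.step()
```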
Related papers
- DPOD: Domain-Specific Prompt Tuning for Multimodal Fake News Detection [15.599951180606947]
Fake news using out-of-context images has become widespread and is a relevant problem in this era of information overload.
We explore whether out-of-domain data can help to improve out-of-context misinformation detection of a desired domain.
We propose a novel framework termed DPOD (Domain-specific Prompt-tuning using Out-of-Domain data).
arXiv Detail & Related papers (2023-11-27T08:49:26Z) - Robust Domain Misinformation Detection via Multi-modal Feature Alignment [49.89164555394584]
We propose a robust domain and cross-modal approach for multi-modal misinformation detection.
It reduces the domain shift by aligning the joint distribution of textual and visual modalities.
We also propose a framework that simultaneously considers application scenarios of domain generalization.
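The summary above attributes the gains to aligning the joint distribution of textual and visual modalities. One generic way to add such an alignment term is an MMD loss on concatenated text and image features; the sketch below is purely illustrative, is not necessarily the objective used in that paper, and the feature dimensions are assumptions.

```python
# Generic MMD-style alignment on joint (text, image) features; illustrative only.
import torch

def rbf_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Maximum Mean Discrepancy between two batches under an RBF kernel."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2)).mean()
    return k(x, x) + k(y, y) - 2 * k(x, y)

# Concatenated text+image features from a source batch and a target batch.
src = torch.cat([torch.randn(16, 64), torch.randn(16, 64)], dim=-1)
tgt = torch.cat([torch.randn(16, 64), torch.randn(16, 64)], dim=-1)
alignment_loss = rbf_mmd(src, tgt)   # added to the detection loss during training
```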
arXiv Detail & Related papers (2023-11-24T07:06:16Z) - A Collaborative Transfer Learning Framework for Cross-domain
Recommendation [12.880177078884927]
In recommendation systems, there are multiple business domains to meet the diverse interests and needs of users.
We propose the Collaborative Cross-Domain Transfer Learning Framework (CCTL) to overcome these challenges.
CCTL evaluates the information gain of the source domain on the target domain using a symmetric companion network.
arXiv Detail & Related papers (2023-06-26T09:43:58Z) - Memory-Guided Multi-View Multi-Domain Fake News Detection [39.035462224569166]
We propose a Memory-guided Multi-view Multi-domain Fake News Detection Framework (M$3$FEND) to address these two challenges.
Specifically, we propose a Domain Memory Bank to enrich domain information, which can discover potential domain labels.
With enriched domain information as input, a Domain Adapter could adaptively aggregate discriminative information from multiple views for news in various domains.
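As a rough illustration of the memory-bank idea summarized above, the sketch below keeps one prototype vector per seen domain and reads off soft "potential domain labels" for a news item by similarity to those prototypes. This is a simplification under assumed shapes, not M$3$FEND's actual formulation.

```python
# Hedged sketch of a domain memory bank producing soft domain labels.
import torch
import torch.nn.functional as F

EMB_DIM, N_DOMAINS = 64, 9
memory_bank = F.normalize(torch.randn(N_DOMAINS, EMB_DIM), dim=-1)  # one slot per domain

def soft_domain_labels(news_emb: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Cosine similarity to every memory slot, softmaxed into a distribution
    that can be read as potential domain labels for each news item."""
    sims = F.normalize(news_emb, dim=-1) @ memory_bank.T             # (batch, N_DOMAINS)
    return F.softmax(sims / temperature, dim=-1)

print(soft_domain_labels(torch.randn(4, EMB_DIM)).shape)  # torch.Size([4, 9])
```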
arXiv Detail & Related papers (2022-06-26T07:09:23Z) - MDFEND: Multi-domain Fake News Detection [15.767582764441627]
We propose an effective Multi-domain Fake News Detection Model (MDFEND) by utilizing a domain gate to aggregate multiple representations extracted by a mixture of experts.
The experiments show that MDFEND can significantly improve the performance of multi-domain fake news detection.
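The MDFEND summary above describes a domain gate that aggregates representations produced by a mixture of experts. The sketch below shows one plausible shape of such a module; the layer sizes, the gate's inputs, and the number of experts are arbitrary choices, not values from the paper.

```python
# Minimal domain-gated mixture-of-experts module; sizes are assumptions.
import torch
import torch.nn as nn

class DomainGatedMoE(nn.Module):
    def __init__(self, feat_dim=64, n_experts=5, n_domains=9):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU()) for _ in range(n_experts)]
        )
        self.domain_emb = nn.Embedding(n_domains, 32)
        self.gate = nn.Sequential(nn.Linear(feat_dim + 32, n_experts), nn.Softmax(dim=-1))
        self.head = nn.Linear(64, 2)  # fake vs. real

    def forward(self, news_feat, domain_id):
        # Each expert produces its own representation of the news item.
        reps = torch.stack([e(news_feat) for e in self.experts], dim=1)  # (B, E, 64)
        # The domain gate weighs the experts, conditioned on the domain label.
        g = self.gate(torch.cat([news_feat, self.domain_emb(domain_id)], dim=-1))
        fused = (g.unsqueeze(-1) * reps).sum(dim=1)                      # (B, 64)
        return self.head(fused)

logits = DomainGatedMoE()(torch.randn(8, 64), torch.randint(0, 9, (8,)))  # (8, 2)
```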
arXiv Detail & Related papers (2022-01-04T05:28:25Z) - Open Domain Generalization with Domain-Augmented Meta-Learning [83.59952915761141]
We study a novel and practical problem of Open Domain Generalization (OpenDG).
We propose a Domain-Augmented Meta-Learning framework to learn open-domain generalizable representations.
Experiment results on various multi-domain datasets demonstrate that the proposed Domain-Augmented Meta-Learning (DAML) outperforms prior methods for unseen domain recognition.
arXiv Detail & Related papers (2021-04-08T09:12:24Z) - Domain2Vec: Domain Embedding for Unsupervised Domain Adaptation [56.94873619509414]
Conventional unsupervised domain adaptation studies the knowledge transfer between a limited number of domains.
We propose a novel Domain2Vec model to provide vectorial representations of visual domains based on joint learning of feature disentanglement and the Gram matrix.
We demonstrate that our embedding is capable of predicting domain similarities that match our intuition about visual relations between different domains.
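One ingredient named in the Domain2Vec summary is the Gram matrix of deep features as part of a domain's vectorial representation. The sketch below reduces this to a bare-bones Gram-matrix domain signature plus a similarity check between two domains; the jointly learned feature disentanglement is omitted, so treat it only as an intuition aid.

```python
# Gram-matrix domain signature; a simplified, assumption-laden illustration.
import torch
import torch.nn.functional as F

def domain_embedding(features: torch.Tensor) -> torch.Tensor:
    """features: (n_images, d) activations of one domain -> flattened Gram matrix."""
    f = F.normalize(features, dim=-1)
    gram = f.T @ f / features.shape[0]        # (d, d) second-order statistics
    return gram.flatten()

dom_a = domain_embedding(torch.randn(100, 32))
dom_b = domain_embedding(torch.randn(100, 32))
similarity = F.cosine_similarity(dom_a, dom_b, dim=0)  # scalar domain similarity
```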
arXiv Detail & Related papers (2020-07-17T22:05:09Z) - Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much fewer annotated data in the target domain compared to the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework, and thus can provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z) - Multi-Domain Spoken Language Understanding Using Domain- and Task-Aware
Parameterization [78.93669377251396]
Spoken language understanding has been addressed as a supervised learning problem, where a set of training data is available for each domain.
One existing approach solves the problem by conducting multi-domain learning, using shared parameters for joint training across domains.
We propose to improve the parameterization of this method by using domain-specific and task-specific model parameters.
arXiv Detail & Related papers (2020-04-30T15:15:40Z) - Dynamic Fusion Network for Multi-Domain End-to-end Task-Oriented Dialog [70.79442700890843]
We propose a novel Dynamic Fusion Network (DF-Net) which automatically exploits the relevance between the target domain and each domain.
With little training data, we show its transferability by outperforming the prior best model by 13.9% on average.
arXiv Detail & Related papers (2020-04-23T08:17:22Z)