MetaAdapt: Domain Adaptive Few-Shot Misinformation Detection via Meta Learning
- URL: http://arxiv.org/abs/2305.12692v1
- Date: Mon, 22 May 2023 04:00:38 GMT
- Title: MetaAdapt: Domain Adaptive Few-Shot Misinformation Detection via Meta Learning
- Authors: Zhenrui Yue, Huimin Zeng, Yang Zhang, Lanyu Shang, Dong Wang
- Abstract summary: We propose MetaAdapt, a meta learning based approach for domain adaptive few-shot misinformation detection.
In particular, we train the initial model with multiple source tasks and compute their similarity scores to the meta task.
As such, MetaAdapt can learn how to adapt the misinformation detection model and exploit the source data for improved performance in the target domain.
- Score: 10.554043875365155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With emerging topics (e.g., COVID-19) on social media as a source for
spreading misinformation, overcoming the distributional shifts between the
original training domain (i.e., source domain) and such target domains remains
a non-trivial task for misinformation detection. This presents an elusive
challenge for early-stage misinformation detection, where sufficient data
and annotations from the target domain are not available for training. To
address the data scarcity issue, we propose MetaAdapt, a meta learning based
approach for domain adaptive few-shot misinformation detection. MetaAdapt
leverages limited target examples to provide feedback and guide the knowledge
transfer from the source to the target domain (i.e., learn to adapt). In
particular, we train the initial model with multiple source tasks and compute
their similarity scores to the meta task. Based on the similarity scores, we
rescale the meta gradients to adaptively learn from the source tasks. As such,
MetaAdapt can learn how to adapt the misinformation detection model and exploit
the source data for improved performance in the target domain. To demonstrate
the efficiency and effectiveness of our method, we perform extensive
experiments to compare MetaAdapt with state-of-the-art baselines and large
language models (LLMs) such as LLaMA, where MetaAdapt achieves better
performance in domain adaptive few-shot misinformation detection with
substantially reduced parameters on real-world datasets.
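The abstract describes the core update rule: adapt an initial model on each source task, score each task's meta gradient against the few-shot target (meta) task, and rescale the contributions by similarity before the meta update. Below is a minimal numpy sketch of that idea under stated assumptions: the linear regression model, MSE loss, cosine similarity, and softmax rescaling are illustrative stand-ins (the paper fine-tunes a neural misinformation classifier), and the names `metaadapt_step` and `grad_mse` are hypothetical, not from the released code.

```python
import numpy as np

def grad_mse(w, X, y):
    # Gradient of mean-squared error for a linear model y ~ X @ w
    # (illustrative loss; the paper uses a classification objective).
    return 2 * X.T @ (X @ w - y) / len(y)

def metaadapt_step(w, source_tasks, meta_task, inner_lr=0.01, meta_lr=0.1, tau=1.0):
    """One meta-update: adapt to each source task, score its meta gradient
    against the few-shot meta (target) task, and rescale by similarity."""
    Xm, ym = meta_task
    meta_grads, sims = [], []
    for Xs, ys in source_tasks:
        # Inner-loop adaptation on the source task.
        w_adapted = w - inner_lr * grad_mse(w, Xs, ys)
        # Meta gradient: adapted parameters evaluated on the few-shot target examples.
        g_meta = grad_mse(w_adapted, Xm, ym)
        # Cosine similarity between the source-task gradient and the meta gradient.
        g_src = grad_mse(w, Xs, ys)
        sim = g_src @ g_meta / (np.linalg.norm(g_src) * np.linalg.norm(g_meta) + 1e-8)
        meta_grads.append(g_meta)
        sims.append(sim)
    # Softmax over similarity scores rescales each source task's contribution,
    # so tasks whose gradients align with the target dominate the update.
    weights = np.exp(np.array(sims) / tau)
    weights /= weights.sum()
    total = sum(wt * g for wt, g in zip(weights, meta_grads))
    return w - meta_lr * total
```

Repeating this step lets source tasks that transfer well (high gradient similarity) drive the update, while dissimilar tasks are down-weighted rather than discarded.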
Related papers
- Learn from the Learnt: Source-Free Active Domain Adaptation via Contrastive Sampling and Visual Persistence [60.37934652213881]
Domain Adaptation (DA) facilitates knowledge transfer from a source domain to a related target domain.
This paper investigates a practical DA paradigm, namely Source data-Free Active Domain Adaptation (SFADA), where source data becomes inaccessible during adaptation.
We present learn from the learnt (LFTL), a novel paradigm for SFADA to leverage the learnt knowledge from the source pretrained model and actively iterated models without extra overhead.
arXiv Detail & Related papers (2024-07-26T17:51:58Z)
- Subject-Based Domain Adaptation for Facial Expression Recognition [51.10374151948157]
Adapting a deep learning model to a specific target individual is a challenging facial expression recognition task.
This paper introduces a new MSDA method for subject-based domain adaptation in FER.
It efficiently leverages information from multiple source subjects to adapt a deep FER model to a single target individual.
arXiv Detail & Related papers (2023-12-09T18:40:37Z)
- Informative Data Mining for One-Shot Cross-Domain Semantic Segmentation [84.82153655786183]
We propose a novel framework called Informative Data Mining (IDM) to enable efficient one-shot domain adaptation for semantic segmentation.
IDM provides an uncertainty-based selection criterion to identify the most informative samples, which facilitates quick adaptation and reduces redundant training.
Our approach outperforms existing methods and achieves a new state-of-the-art one-shot performance of 56.7%/55.4% on the GTA5/SYNTHIA to Cityscapes adaptation tasks.
arXiv Detail & Related papers (2023-09-25T15:56:01Z)
- Post-Deployment Adaptation with Access to Source Data via Federated Learning and Source-Target Remote Gradient Alignment [8.288631856590165]
Post-Deployment Adaptation (PDA) addresses this by tailoring a pre-trained, deployed model to the target data distribution.
PDA assumes no access to source training data as they cannot be deployed with the model due to privacy concerns.
This paper introduces FedPDA, a novel adaptation framework that brings the utility of learning from remote data from Federated Learning into PDA.
arXiv Detail & Related papers (2023-08-31T13:52:28Z)
- Instance Relation Graph Guided Source-Free Domain Adaptive Object Detection [79.89082006155135]
Unsupervised Domain Adaptation (UDA) is an effective approach to tackle the issue of domain shift.
UDA methods try to align the source and target representations to improve the generalization on the target domain.
The Source-Free Domain Adaptation (SFDA) setting aims to alleviate these concerns by adapting a source-trained model for the target domain without requiring access to the source data.
arXiv Detail & Related papers (2022-03-29T17:50:43Z)
- On-target Adaptation [82.77980951331854]
Domain adaptation seeks to mitigate the shift between training on the source domain and testing on the target domain.
Most adaptation methods rely on the source data by joint optimization over source data and target data.
We show significant improvement by on-target adaptation, which learns the representation purely from target data.
arXiv Detail & Related papers (2021-09-02T17:04:18Z)
- Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer [137.36099660616975]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different well-labeled source domain to a new unlabeled target domain.
Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns.
This paper aims to tackle a realistic setting in which only a classification model trained over the source data is available, rather than the source data itself.
arXiv Detail & Related papers (2020-12-14T07:28:50Z)
- Transfer Meta-Learning: Information-Theoretic Bounds and Information Meta-Risk Minimization [47.7605527786164]
Meta-learning automatically infers an inductive bias by observing data from a number of related tasks.
We introduce the problem of transfer meta-learning, in which tasks are drawn from a target task environment during meta-testing.
arXiv Detail & Related papers (2020-11-04T12:55:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.