Unsupervised Cross-Domain Rumor Detection with Contrastive Learning and
Cross-Attention
- URL: http://arxiv.org/abs/2303.11945v1
- Date: Mon, 20 Mar 2023 06:19:49 GMT
- Title: Unsupervised Cross-Domain Rumor Detection with Contrastive Learning and
Cross-Attention
- Authors: Hongyan Ran and Caiyan Jia
- Abstract summary: Massive rumors usually appear along with breaking news or trending topics, seriously obscuring the truth.
Existing rumor detection methods mostly assume training and test data come from the same domain, and thus perform poorly in cross-domain scenarios.
We propose an end-to-end instance-wise and prototype-wise contrastive learning model with a cross-attention mechanism for cross-domain rumor detection.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Massive rumors usually appear along with breaking news or trending topics,
seriously obscuring the truth. Existing rumor detection methods mostly assume
training and test data come from the same domain, and thus perform poorly in
cross-domain scenarios due to domain shift. In this work, we propose an end-to-end
instance-wise and prototype-wise contrastive learning model with a
cross-attention mechanism for cross-domain rumor detection. The model not only
performs cross-domain feature alignment but also enforces target samples to
align with the corresponding prototypes of a given source domain. Since labels
are unavailable in the target domain, we use a clustering-based approach, with
centers carefully initialized from a batch of source-domain samples, to
produce pseudo labels. Moreover, we use a cross-attention mechanism on a pair
of source data and target data with the same labels to learn domain-invariant
representations. Because the samples in such a pair tend to express similar
semantic patterns, especially in people's attitudes (e.g., supporting or
denying) toward the same category of rumors, the discrepancy between the
source and target domains is reduced. We conduct experiments
on four groups of cross-domain datasets and show that our proposed model
achieves state-of-the-art performance.
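To make the clustering-based pseudo-labeling step concrete, here is a minimal PyTorch sketch of the idea the abstract describes: one center per class is initialized from a batch of labeled source-domain samples, and each unlabeled target sample receives the label of its nearest center. The function names, the cosine-similarity assignment, and the two-class setup are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch: centers initialized from source samples, then
# nearest-center pseudo-labeling of target samples (details assumed).
import torch
import torch.nn.functional as F

def init_centers(src_feats: torch.Tensor, src_labels: torch.Tensor,
                 num_classes: int) -> torch.Tensor:
    """One center per class: the mean of that class's source features."""
    return torch.stack([src_feats[src_labels == c].mean(dim=0)
                        for c in range(num_classes)])

def pseudo_label(tgt_feats: torch.Tensor, centers: torch.Tensor) -> torch.Tensor:
    """Assign each target sample the label of its nearest center (cosine)."""
    sim = F.normalize(tgt_feats, dim=1) @ F.normalize(centers, dim=1).T
    return sim.argmax(dim=1)

# Toy usage: 2 classes (rumor / non-rumor), 8-dim encoder features.
src_feats, src_labels = torch.randn(16, 8), torch.randint(0, 2, (16,))
centers = init_centers(src_feats, src_labels, num_classes=2)
tgt_labels = pseudo_label(torch.randn(10, 8), centers)
```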
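The instance-wise and prototype-wise contrastive objectives can be sketched similarly: an InfoNCE-style loss over matched feature pairs, plus a loss that pulls each target sample toward the source prototype of its pseudo label. The temperature value and the exact pairing scheme are assumptions for illustration only.

```python
# Hedged sketch of the two contrastive objectives (details assumed).
import torch
import torch.nn.functional as F

def instance_loss(anchor, positive, temperature=0.1):
    """Instance-wise InfoNCE: the positive for row i is row i of `positive`."""
    a, p = F.normalize(anchor, dim=1), F.normalize(positive, dim=1)
    logits = a @ p.T / temperature            # (N, N); positives on diagonal
    return F.cross_entropy(logits, torch.arange(a.size(0)))

def prototype_loss(tgt_feats, centers, pseudo_labels, temperature=0.1):
    """Prototype-wise: classify each target sample against source prototypes."""
    logits = F.normalize(tgt_feats, dim=1) @ F.normalize(centers, dim=1).T
    return F.cross_entropy(logits / temperature, pseudo_labels)
```

In training, these two losses would presumably be combined with the supervised detection loss on source data; the weighting is not specified by the abstract.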
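Lastly, a minimal sketch of cross-attention on a source/target pair sharing the same (pseudo) label, using nn.MultiheadAttention as a stand-in for whatever attention module the paper actually implements; the token shapes and the symmetric two-direction application are assumptions.

```python
# Hedged sketch: target tokens attend over source tokens (and vice versa)
# for a same-label pair, encouraging domain-invariant representations.
import torch
import torch.nn as nn

dim = 64
cross_attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)

src_tokens = torch.randn(1, 20, dim)  # one source post, 20 token embeddings
tgt_tokens = torch.randn(1, 24, dim)  # one same-label target post, 24 tokens

# Each direction mixes the other domain's semantics into the representation.
tgt_aligned, _ = cross_attn(query=tgt_tokens, key=src_tokens, value=src_tokens)
src_aligned, _ = cross_attn(query=src_tokens, key=tgt_tokens, value=tgt_tokens)
```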
Related papers
- Multi-modal Instance Refinement for Cross-domain Action Recognition [25.734898762987083]
Unsupervised cross-domain action recognition aims to adapt a model trained on a labeled source domain to a new unlabeled target domain.
We propose a Multi-modal Instance Refinement (MMIR) method, based on reinforcement learning, to alleviate negative transfer.
Our method outperforms several state-of-the-art baselines for cross-domain action recognition on the EPIC-Kitchens benchmark.
arXiv Detail & Related papers (2023-11-24T05:06:28Z)
- Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Cross-Domain Adaptive Clustering for Semi-Supervised Domain Adaptation [85.6961770631173]
In semi-supervised domain adaptation, a few labeled samples per class in the target domain guide features of the remaining target samples to aggregate around them.
We propose a novel approach called Cross-domain Adaptive Clustering to address this problem.
arXiv Detail & Related papers (2021-04-19T16:07:32Z)
- Prototypical Cross-domain Self-supervised Learning for Few-shot Unsupervised Domain Adaptation [91.58443042554903]
We propose an end-to-end Prototypical Cross-domain Self-Supervised Learning (PCS) framework for Few-shot Unsupervised Domain Adaptation (FUDA).
PCS not only performs cross-domain low-level feature alignment, but it also encodes and aligns semantic structures in the shared embedding space across domains.
Compared with state-of-the-art methods, PCS improves the mean classification accuracy over different domain pairs on FUDA by 10.5%, 3.5%, 9.0%, and 13.2% on Office, Office-Home, VisDA-2017, and DomainNet, respectively.
arXiv Detail & Related papers (2021-03-31T02:07:42Z)
- Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent domain adaptation practice extracts effective features by incorporating pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z)
- Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts target-domain accuracy with few source labels.
arXiv Detail & Related papers (2020-03-18T15:11:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.