Detect Rumors in Microblog Posts for Low-Resource Domains via
Adversarial Contrastive Learning
- URL: http://arxiv.org/abs/2204.08143v2
- Date: Tue, 19 Apr 2022 05:50:19 GMT
- Title: Detect Rumors in Microblog Posts for Low-Resource Domains via
Adversarial Contrastive Learning
- Authors: Hongzhan Lin, Jing Ma, Liangliang Chen, Zhiwei Yang, Mingfei Cheng,
Guang Chen
- Abstract summary: We propose an adversarial contrastive learning framework to detect rumors by adapting the features learned from well-resourced rumor data to those of low-resource domains.
Our framework achieves much better performance than state-of-the-art methods and exhibits a superior capacity for detecting rumors at early stages.
- Score: 8.013665071332388
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Massive false rumors emerging along with breaking news or trending topics
severely hinder the truth. Existing rumor detection approaches achieve
promising performance on yesterday's news, since there are sufficient corpora
collected from the same domain for model training. However, they are poor at
detecting rumors about unforeseen events, especially those propagated in
different languages, due to the lack of training data and prior knowledge (i.e.,
low-resource regimes). In this paper, we propose an adversarial contrastive
learning framework to detect rumors by adapting the features learned from
well-resourced rumor data to those of low-resource domains. Our model explicitly
overcomes the restriction of domain and/or language usage via language
alignment and a novel supervised contrastive training paradigm. Moreover, we
develop an adversarial augmentation mechanism to further enhance the robustness
of low-resource rumor representation. Extensive experiments conducted on two
low-resource datasets collected from real-world microblog platforms demonstrate
that our framework achieves much better performance than state-of-the-art
methods and exhibits a superior capacity for detecting rumors at early stages.
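To make the two core ingredients described in the abstract concrete, the following is a minimal PyTorch sketch (an illustration under assumptions, not the authors' released code) of a supervised contrastive loss over rumor representations together with an FGM-style adversarial perturbation used as augmentation; the function names and the hyperparameters tau and eps are hypothetical.

```python
# Minimal sketch of the two ingredients above (assumptions, NOT the authors'
# released code): a supervised contrastive loss over rumor representations,
# plus an FGM-style adversarial perturbation of encoder embeddings.
import torch
import torch.nn.functional as F

def sup_con_loss(features: torch.Tensor, labels: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Pull same-label (rumor / non-rumor) posts together across the
    well-resourced and low-resource domains, push different labels apart."""
    z = F.normalize(features, dim=-1)                        # (N, d) unit vectors
    sim = z @ z.t() / tau                                    # pairwise similarities
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=z.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    logits = sim.masked_fill(self_mask, float("-inf"))       # drop self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_cnt = pos_mask.sum(dim=1).clamp(min=1)
    return -(log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_cnt).mean()

def fgm_perturb(embeds: torch.Tensor, loss: torch.Tensor, eps: float = 1.0) -> torch.Tensor:
    """Adversarial augmentation: shift embeddings a small step along the
    loss gradient (embeds must require grad and feed into loss)."""
    grad, = torch.autograd.grad(loss, embeds, retain_graph=True)
    return embeds + eps * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
```

In a complete pipeline, one plausible wiring is to encode well-resourced and low-resource posts with a shared multilingual encoder (after language alignment), minimize a standard classification loss jointly with sup_con_loss, and re-encode the fgm_perturb-ed embeddings as extra views to improve the robustness of the low-resource representations.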
Related papers
- MoSECroT: Model Stitching with Static Word Embeddings for Crosslingual Zero-shot Transfer [50.40191599304911]
We introduce MoSECroT (Model Stitching with Static Word Embeddings for Crosslingual Zero-shot Transfer).
In this paper, we present the first framework that leverages relative representations to construct a common space for the embeddings of a source language PLM and the static word embeddings of a target language.
We show that although our proposed framework is competitive with weak baselines when addressing MoSECroT, it fails to achieve competitive results compared with some strong baselines.
arXiv Detail & Related papers (2024-01-09T21:09:07Z) - Examining the Limitations of Computational Rumor Detection Models Trained on Static Datasets [30.315424983805087]
This paper presents an in-depth evaluation of the performance gap between content-based and context-based models.
Our empirical findings demonstrate that context-based models are still overly dependent on the information derived from the rumors' source post.
Based on our experimental results, the paper also offers practical suggestions on how to minimize the effects of temporal concept drift in static datasets.
arXiv Detail & Related papers (2023-09-20T18:27:19Z) - A Unified Contrastive Transfer Framework with Propagation Structure for
Boosting Low-Resource Rumor Detection [11.201348902221257]
Existing rumor detection algorithms show promising performance on yesterday's news.
Due to a lack of substantial training data and prior expert knowledge, they are poor at spotting rumors concerning unforeseen events.
We propose a unified contrastive transfer framework to detect rumors by adapting the features learned from well-resourced rumor data to those of the low-resource domain with only few-shot annotations.
arXiv Detail & Related papers (2023-04-04T03:13:03Z) - Zero-Shot Rumor Detection with Propagation Structure via Prompt Learning [24.72097408129496]
Previous studies reveal that due to the lack of annotated resources, rumors presented in minority languages are hard to detect.
We propose a novel framework based on prompt learning to detect rumors falling in different domains or presented in different languages.
Our proposed model achieves much better performance than state-of-the-art methods and exhibits a superior capacity for detecting rumors at early stages.
arXiv Detail & Related papers (2022-12-02T12:04:48Z) - Multiverse: Multilingual Evidence for Fake News Detection [71.51905606492376]
Multiverse is a new feature based on multilingual evidence that can be used for fake news detection.
The hypothesis that cross-lingual evidence can serve as a feature for fake news detection is confirmed.
arXiv Detail & Related papers (2022-11-25T18:24:17Z) - Cross-Lingual Cross-Modal Retrieval with Noise-Robust Learning [25.230786853723203]
We propose a noise-robust cross-lingual cross-modal retrieval method for low-resource languages.
We use Machine Translation to construct pseudo-parallel sentence pairs for low-resource languages.
We introduce a multi-view self-distillation method to learn noise-robust target-language representations.
arXiv Detail & Related papers (2022-08-26T09:32:24Z) - Rumor Detection with Self-supervised Learning on Texts and Social Graph [101.94546286960642]
We propose contrastive self-supervised learning on heterogeneous information sources, so as to reveal their relations and characterize rumors better.
We term this framework Self-supervised Rumor Detection (SRD).
Extensive experiments on three real-world datasets validate the effectiveness of SRD for automatic rumor detection on social media.
arXiv Detail & Related papers (2022-04-19T12:10:03Z) - IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and
Languages [87.5457337866383]
We introduce the Image-Grounded Language Understanding Evaluation benchmark.
IGLUE brings together visual question answering, cross-modal retrieval, grounded reasoning, and grounded entailment tasks across 20 diverse languages.
We find that translate-test transfer is superior to zero-shot transfer and that few-shot learning is hard to harness for many tasks.
arXiv Detail & Related papers (2022-01-27T18:53:22Z) - Unsupervised Domain Adaptation of a Pretrained Cross-Lingual Language
Model [58.27176041092891]
Recent research indicates that pretraining cross-lingual language models on large-scale unlabeled texts yields significant performance improvements.
We propose a novel unsupervised feature decomposition method that can automatically extract domain-specific features from the entangled pretrained cross-lingual representations.
Our proposed model leverages mutual information estimation to decompose the representations computed by a cross-lingual model into domain-invariant and domain-specific parts.
arXiv Detail & Related papers (2020-11-23T16:00:42Z) - RP-DNN: A Tweet level propagation context based deep neural networks for
early rumor detection in Social Media [3.253418861583211]
Early rumor detection (ERD) on social media platforms is very challenging when only limited, incomplete and noisy information is available.
We present a novel hybrid neural network architecture, which combines a character-based bidirectional language model and stacked Long Short-Term Memory (LSTM) networks.
Our models achieve state-of-the-art (SoA) performance for detecting unseen rumors on large augmented data, which covers more than 12 events and 2,967 rumors.
arXiv Detail & Related papers (2020-02-28T12:44:34Z) - Rumor Detection on Social Media with Bi-Directional Graph Convolutional
Networks [89.13567439679709]
We propose a novel bi-directional graph model, named Bi-Directional Graph Convolutional Networks (Bi-GCN), to explore both characteristics by operating on both top-down and bottom-up propagation of rumors.
It leverages a GCN with a top-down directed graph of rumor spreading to learn the patterns of rumor propagation, and a GCN with an opposite directed graph of rumor diffusion to capture the structures of rumor dispersion (see the sketch after this list).
arXiv Detail & Related papers (2020-01-17T15:12:08Z)
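Referring back to the Bi-GCN entry above, here is a hedged, self-contained sketch in plain PyTorch (no torch_geometric; all class and parameter names are assumptions) of the bi-directional idea: one GCN over the top-down propagation adjacency, one over its transpose for bottom-up dispersion, with the two pooled readouts concatenated for classification. The published model includes additional components (e.g., root-feature enhancement) that are omitted here.

```python
# Hedged illustration of the Bi-GCN idea (plain PyTorch; names are assumptions).
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W), with A_hat a
    row-normalized adjacency that includes self-loops."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, adj: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        adj = adj + torch.eye(adj.size(0), device=adj.device)      # self-loops
        adj = adj / adj.sum(dim=1, keepdim=True).clamp(min=1.0)    # row-normalize
        return torch.relu(self.lin(adj @ h))

class BiDirectionalRumorGCN(nn.Module):
    """One GCN over the top-down propagation graph, one over its transpose
    (bottom-up dispersion); the two pooled readouts are concatenated."""
    def __init__(self, in_dim: int, hid_dim: int, num_classes: int):
        super().__init__()
        self.top_down = SimpleGCNLayer(in_dim, hid_dim)
        self.bottom_up = SimpleGCNLayer(in_dim, hid_dim)
        self.classifier = nn.Linear(2 * hid_dim, num_classes)

    def forward(self, adj_top_down: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        h_td = self.top_down(adj_top_down, x).mean(dim=0)          # propagation view
        h_bu = self.bottom_up(adj_top_down.t(), x).mean(dim=0)     # dispersion view
        return self.classifier(torch.cat([h_td, h_bu], dim=-1))
```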
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.