An Emotion-Aware Multi-Task Approach to Fake News and Rumour Detection using Transfer Learning
- URL: http://arxiv.org/abs/2211.12374v1
- Date: Tue, 22 Nov 2022 16:15:25 GMT
- Title: An Emotion-Aware Multi-Task Approach to Fake News and Rumour Detection using Transfer Learning
- Authors: Arjun Choudhry, Inder Khatri, Minni Jain, Dinesh Kumar Vishwakarma
- Abstract summary: We show the correlation between the legitimacy of a text and its intrinsic emotion for fake news and rumour detection.
We propose a multi-task framework for fake news and rumour detection, predicting both the emotion and legitimacy of the text.
- Score: 13.448658162594603
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social networking sites, blogs, and online articles are instant sources of
news for internet users globally. However, in the absence of strict regulations
mandating the genuineness of every text on social media, it is probable that
some of these texts are fake news or rumours. Their deceptive nature and
ability to propagate instantly can have an adverse effect on society. This
necessitates more effective detection of fake news and rumours on
the web. In this work, we annotate four fake news detection and rumour
detection datasets with their emotion class labels using transfer learning. We
show the correlation between the legitimacy of a text and its intrinsic
emotion for fake news and rumour detection, and demonstrate that even within the same
emotion class, fake and real news are often represented differently, which can
be used for improved feature extraction. Based on this, we propose a multi-task
framework for fake news and rumour detection, predicting both the emotion and
legitimacy of the text. We train a variety of deep learning models in
single-task and multi-task settings for a more comprehensive comparison. We
further analyze the performance of our multi-task approach for fake news
detection in cross-domain settings to verify its efficacy for better
generalization across datasets, and to verify that emotions act as a
domain-independent feature. Experimental results verify that our multi-task
models consistently outperform their single-task counterparts in terms of
accuracy, precision, recall, and F1 score, both for in-domain and cross-domain
settings. We also qualitatively analyze the difference in performance between
single-task and multi-task learning models.
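The abstract describes two technical components: labelling the datasets with emotion classes via transfer learning (in practice, running a pretrained emotion classifier over the texts), and a multi-task model that predicts legitimacy and emotion jointly. The paper's own code is not reproduced here, so the following is a minimal sketch of such a multi-task setup, assuming PyTorch and HuggingFace Transformers; the encoder name, number of emotion classes, and loss weighting are illustrative placeholders, not the authors' actual configuration.
```python
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class EmotionAwareFakeNewsModel(nn.Module):
    """Shared-encoder multi-task model: legitimacy (fake/real) + emotion class."""
    def __init__(self, encoder_name="bert-base-uncased",
                 num_emotions=6, num_legitimacy=2):
        super().__init__()
        # Shared encoder: its representation is shaped by both objectives.
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.emotion_head = nn.Linear(hidden, num_emotions)       # auxiliary task
        self.legitimacy_head = nn.Linear(hidden, num_legitimacy)  # main task

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] token representation
        return self.legitimacy_head(cls), self.emotion_head(cls)

def multitask_loss(legit_logits, emo_logits, legit_labels, emo_labels, alpha=0.5):
    # Weighted sum of the two cross-entropy losses; alpha balances the tasks.
    ce = nn.CrossEntropyLoss()
    return ce(legit_logits, legit_labels) + alpha * ce(emo_logits, emo_labels)

# Usage sketch: encode a text and obtain both predictions.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = EmotionAwareFakeNewsModel()
batch = tokenizer(["Example news text ..."], return_tensors="pt",
                  truncation=True, padding=True)
legit_logits, emo_logits = model(batch["input_ids"], batch["attention_mask"])
```
Sharing the encoder lets the auxiliary emotion objective shape the representation used for legitimacy prediction, which matches the intuition behind the in-domain and cross-domain gains reported in the abstract.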
Related papers
- Adapting Fake News Detection to the Era of Large Language Models [48.5847914481222]
We study the interplay between machine-(paraphrased) real news, machine-generated fake news, human-written fake news, and human-written real news.
Our experiments reveal an interesting pattern that detectors trained exclusively on human-written articles can indeed perform well at detecting machine-generated fake news, but not vice versa.
arXiv Detail & Related papers (2023-11-02T08:39:45Z)
- TieFake: Title-Text Similarity and Emotion-Aware Fake News Detection [15.386007761649251]
We propose a novel Title-Text similarity and emotion-aware Fake news detection (TieFake) method by jointly modeling the multi-modal context information and the author sentiment.
Specifically, we employ BERT and ResNeSt to learn the representations for text and images, and utilize a publisher emotion extractor to capture the author's subjective emotion in the news content.
arXiv Detail & Related papers (2023-04-19T04:47:36Z)
- Multiverse: Multilingual Evidence for Fake News Detection [71.51905606492376]
Multiverse is a new feature based on multilingual evidence that can be used for fake news detection.
The hypothesis that cross-lingual evidence can be used as a feature for fake news detection is confirmed.
arXiv Detail & Related papers (2022-11-25T18:24:17Z)
- Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on PropaNews are better at detecting human-written disinformation by 3.62 - 7.69% F1 score on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z)
- Supporting verification of news articles with automated search for semantically similar articles [0.0]
We propose an evidence retrieval approach to handle fake news.
The learning task is formulated as an unsupervised machine learning problem.
We find that our approach is agnostic to concept drifts, i.e. the machine learning task is independent of the hypotheses in a text.
arXiv Detail & Related papers (2021-03-29T12:56:59Z)
- Embracing Domain Differences in Fake News: Cross-domain Fake News Detection using Multi-modal Data [18.66426327152407]
We propose a novel framework that jointly preserves domain-specific and cross-domain knowledge in news records to detect fake news from different domains.
Our experiments show that the integration of the proposed fake news model and the selective annotation approach achieves state-of-the-art performance for cross-domain news datasets.
arXiv Detail & Related papers (2021-02-11T23:31:14Z)
- Cross-Domain Learning for Classifying Propaganda in Online Contents [67.10699378370752]
We present an approach to leverage cross-domain learning, based on labeled documents and sentences from news and tweets, as well as political speeches with a clear difference in their degrees of being propagandistic.
Our experiments demonstrate the usefulness of this approach, and identify difficulties and limitations in various configurations of sources and targets for the transfer step.
arXiv Detail & Related papers (2020-11-13T10:19:13Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
- A Deep Learning Approach for Automatic Detection of Fake News [47.00462375817434]
We propose two deep learning models for fake news detection in online news content from multiple domains.
We evaluate our techniques on two recently released fake news detection datasets, namely FakeNews AMT and Celebrity.
arXiv Detail & Related papers (2020-05-11T09:07:46Z)
- SAFE: Similarity-Aware Multi-Modal Fake News Detection [8.572654816871873]
We propose a new method to detect fake news based on its text, images, or their "mismatches".
Such representations of news textual and visual information along with their relationship are jointly learned and used to predict fake news.
We conduct extensive experiments on large-scale real-world data, which demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2020-02-19T02:51:04Z)