FNR: A Similarity and Transformer-Based Approach to Detect Multi-Modal
Fake News in Social Media
- URL: http://arxiv.org/abs/2112.01131v1
- Date: Thu, 2 Dec 2021 11:12:09 GMT
- Title: FNR: A Similarity and Transformer-Based Approach to Detect Multi-Modal
Fake News in Social Media
- Authors: Faeze Ghorbanpour, Maryam Ramezani, Mohammad A. Fazli and Hamid R.
Rabiee
- Abstract summary: This work aims to analyze multi-modal features from texts and images in social media for detecting fake news.
We propose a Fake News Revealer (FNR) method that utilizes transfer learning to extract contextual and semantic features.
The results show the proposed method achieves higher accuracies in detecting fake news compared to the previous works.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The availability and interactive nature of social media have made them the
primary source of news around the globe. The popularity of social media tempts
criminals to pursue their immoral intentions by producing and disseminating
fake news using seductive text and misleading images. Therefore, verifying
social media news and spotting fakes is crucial. This work aims to analyze
multi-modal features from texts and images in social media for detecting fake
news. We propose a Fake News Revealer (FNR) method that utilizes transfer
learning to extract contextual and semantic features and contrastive loss to
determine the similarity between image and text. We applied FNR on two real
social media datasets. The results show the proposed method achieves higher
accuracies in detecting fake news compared to the previous works.
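The abstract pairs transformer-derived text and image features with a contrastive loss that scores image-text similarity. As an illustrative sketch only (the paper's exact loss and architecture are not given here), a CLIP-style symmetric contrastive loss over a batch of paired text/image embeddings could look like:

```python
import numpy as np

def contrastive_loss(text_emb, img_emb, temperature=0.07):
    """Symmetric (CLIP-style) contrastive loss over a batch of
    paired text/image embeddings: row i of each matrix is a pair."""
    # L2-normalise so dot products become cosine similarities
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    logits = (t @ v.T) / temperature      # (batch, batch) similarity matrix
    n = logits.shape[0]
    idx = np.arange(n)                    # the i-th text matches the i-th image

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)       # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()         # diagonal = true pairs

    # average the text-to-image and image-to-text directions
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2.0
```

Training pushes the diagonal of the similarity matrix up and the off-diagonal entries down, so a mismatched text-image pair yields a higher loss; that inconsistency signal is what a multi-modal detector can exploit.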
Related papers
- Adapting Fake News Detection to the Era of Large Language Models [48.5847914481222]
We study the interplay between machine-(paraphrased) real news, machine-generated fake news, human-written fake news, and human-written real news.
Our experiments reveal an interesting pattern that detectors trained exclusively on human-written articles can indeed perform well at detecting machine-generated fake news, but not vice versa.
arXiv Detail & Related papers (2023-11-02T08:39:45Z)
- TieFake: Title-Text Similarity and Emotion-Aware Fake News Detection [15.386007761649251]
We propose a novel Title-Text similarity and emotion-aware Fake news detection (TieFake) method by jointly modeling the multi-modal context information and the author sentiment.
Specifically, we employ BERT and ResNeSt to learn the representations for text and images, and utilize a publisher emotion extractor to capture the author's subjective emotion in the news content.
arXiv Detail & Related papers (2023-04-19T04:47:36Z)
- Nothing Stands Alone: Relational Fake News Detection with Hypergraph Neural Networks [49.29141811578359]
We propose to leverage a hypergraph to represent group-wise interaction among news, while focusing on important news relations with its dual-level attention mechanism.
Our approach yields remarkable performance and maintains the high performance even with a small subset of labeled news data.
arXiv Detail & Related papers (2022-12-24T00:19:32Z)
- Multiverse: Multilingual Evidence for Fake News Detection [71.51905606492376]
Multiverse is a new feature based on multilingual evidence that can be used for fake news detection.
The hypothesis that cross-lingual evidence can serve as a feature for fake news detection is confirmed.
arXiv Detail & Related papers (2022-11-25T18:24:17Z)
- Modelling Social Context for Fake News Detection: A Graph Neural Network Based Approach [0.39146761527401425]
Detection of fake news is crucial to ensure the authenticity of information and maintain the news ecosystem's reliability.
This paper analyzes the social context of fake news detection with a hybrid graph neural network based approach.
arXiv Detail & Related papers (2022-07-27T12:58:33Z)
- Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on the generated PropaNews data detect human-written disinformation 3.62-7.69% better in F1 score on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z)
- Multimodal Fake News Detection [1.929039244357139]
We perform a fine-grained classification of fake news on the Fakeddit dataset using both unimodal and multimodal approaches.
Some fake news categories such as Manipulated content, Satire or False connection strongly benefit from the use of images.
Using images also improves the results of the other categories, but with less impact.
arXiv Detail & Related papers (2021-12-09T10:57:18Z)
- Stance Detection with BERT Embeddings for Credibility Analysis of Information on Social Media [1.7616042687330642]
We propose a model for detecting fake news using stance as one of the features along with the content of the article.
Our work interprets the content through automatic feature extraction and the relevance of its text pieces.
The experiment conducted on the real-world dataset indicates that our model outperforms the previous work and enables fake news detection with an accuracy of 95.32%.
arXiv Detail & Related papers (2021-05-21T10:46:43Z)
- User Preference-aware Fake News Detection [61.86175081368782]
Existing fake news detection algorithms focus on mining news content for deceptive signals.
We propose a new framework, UPFD, which simultaneously captures various signals from user preferences by joint content and graph modeling.
arXiv Detail & Related papers (2021-04-25T21:19:24Z)
- Multimodal Fusion with BERT and Attention Mechanism for Fake News Detection [0.0]
We present a novel method for detecting fake news by fusing multimodal features derived from textual and visual data.
Experimental results showed that our approach outperforms the current state-of-the-art method on a public Twitter dataset by 3.1% accuracy.
arXiv Detail & Related papers (2021-04-23T08:47:54Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.