Fake News Detection: Experiments and Approaches beyond Linguistic
Features
- URL: http://arxiv.org/abs/2109.12914v1
- Date: Mon, 27 Sep 2021 10:00:44 GMT
- Title: Fake News Detection: Experiments and Approaches beyond Linguistic
Features
- Authors: Shaily Bhatt, Sakshi Kalra, Naman Goenka, Yashvardhan Sharma
- Abstract summary: Credibility information and metadata associated with the news article have been used for improved results.
The experiments also show how modelling justification or evidence can lead to improved results.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Easier access to the internet and social media has made disseminating
information through online sources very easy. Sources like Facebook, Twitter,
online news sites and personal blogs of self-proclaimed journalists have become
significant players in providing news content. The sheer amount of information
and the speed at which it is generated online makes it practically beyond the
scope of human verification. There is, hence, a pressing need to develop
technologies that can assist humans with automatic fact-checking and reliable
identification of fake news. This paper summarizes the multiple approaches that
were undertaken and the experiments that were carried out for the task.
Credibility information and metadata associated with the news article have been
used for improved results. The experiments also show how modelling
justification or evidence can lead to improved results. Additionally, the use
of visual features in addition to linguistic features is demonstrated. A
detailed comparison of the results is presented, showing that our models
perform well against both robust baselines and state-of-the-art models.
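The abstract's idea of augmenting linguistic features with credibility metadata can be sketched as simple feature concatenation. The metadata field names and scaling below are hypothetical illustrations, not taken from the paper:

```python
# Hedged sketch: combine text-derived features with article metadata
# before classification. Field names and scaling are illustrative only.

def combine_features(linguistic: list[float], metadata: dict) -> list[float]:
    """Concatenate linguistic features with a small metadata vector."""
    meta_vec = [
        float(metadata.get("source_credibility", 0.0)),  # assumed 0..1 trust score
        float(metadata.get("author_verified", False)),   # bool -> 0.0 / 1.0
        float(metadata.get("share_count", 0)) / 1000.0,  # crude scaling
    ]
    return linguistic + meta_vec

features = combine_features(
    [0.2, 0.7],
    {"source_credibility": 0.9, "author_verified": True, "share_count": 1500},
)
print(features)  # [0.2, 0.7, 0.9, 1.0, 1.5]
```

A downstream classifier would then be trained on these combined vectors.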
Related papers
- Adapting Fake News Detection to the Era of Large Language Models [48.5847914481222]
We study the interplay between machine-(paraphrased) real news, machine-generated fake news, human-written fake news, and human-written real news.
Our experiments reveal an interesting pattern that detectors trained exclusively on human-written articles can indeed perform well at detecting machine-generated fake news, but not vice versa.
arXiv Detail & Related papers (2023-11-02T08:39:45Z)
- fakenewsbr: A Fake News Detection Platform for Brazilian Portuguese [0.6775616141339018]
This paper presents a comprehensive study on detecting fake news in Brazilian Portuguese.
We propose a machine learning-based approach that leverages natural language processing techniques, including TF-IDF and Word2Vec.
We develop a user-friendly web platform, fakenewsbr.com, to facilitate the verification of news articles' veracity.
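The TF-IDF weighting this paper leverages can be sketched in a few lines; a real pipeline would use a library vectorizer with a trained classifier on top:

```python
import math
from collections import Counter

# Minimal TF-IDF sketch for illustration: weight = term frequency in the
# document times log of inverse document frequency across the corpus.
def tfidf(docs: list[list[str]]) -> list[dict[str, float]]:
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

docs = [["fake", "news", "alert"], ["local", "news", "report"]]
vecs = tfidf(docs)
# "news" appears in every document, so its idf (and weight) is 0.
print(vecs[0]["news"])  # 0.0
```

Terms that occur in every document get zero weight, while distinctive terms dominate the vectors fed to the classifier.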
arXiv Detail & Related papers (2023-09-20T04:10:03Z)
- Harnessing the Power of Text-image Contrastive Models for Automatic Detection of Online Misinformation [50.46219766161111]
We develop a self-learning model to explore contrastive learning for misinformation identification.
Our model shows superior performance in detecting non-matched image-text pairs when training data is insufficient.
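The core check behind contrastive image-text matching can be illustrated with a toy cosine-similarity test. The embeddings and threshold below are invented for illustration, not real model outputs:

```python
import math

# Illustrative sketch: a caption whose embedding is far from the image
# embedding is flagged as a possible mismatch. Vectors are toy values.
def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

image_emb = [0.9, 0.1, 0.0]
matched_caption = [0.8, 0.2, 0.1]
mismatched_caption = [0.0, 0.1, 0.9]

THRESHOLD = 0.5  # hypothetical decision boundary
print(cosine(image_emb, matched_caption) > THRESHOLD)     # True
print(cosine(image_emb, mismatched_caption) > THRESHOLD)  # False
```

In a contrastive model, training pulls matched pairs together and pushes mismatched pairs apart, which is what makes a similarity threshold like this workable.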
arXiv Detail & Related papers (2023-04-19T02:53:59Z)
- It's All in the Embedding! Fake News Detection Using Document Embeddings [0.6091702876917281]
We propose a new approach that uses document embeddings to build multiple models that accurately label news articles as reliable or fake.
We also present a benchmark on different architectures that detect fake news using binary or multi-labeled classification.
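One common way to obtain a document embedding, averaging word vectors, can illustrate the general idea; the toy vectors below are assumptions for the sketch, not the paper's specific method:

```python
# Hedged sketch: a document embedding as the mean of its word vectors.
def doc_embedding(tokens: list[str],
                  word_vectors: dict[str, list[float]],
                  dim: int = 3) -> list[float]:
    """Average the vectors of known tokens; zero vector if none are known."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

toy_vectors = {"fake": [1.0, 0.0, 0.0], "news": [0.0, 1.0, 0.0]}
print(doc_embedding(["fake", "news"], toy_vectors))  # [0.5, 0.5, 0.0]
```

The resulting fixed-length vectors are what the classifiers in such benchmarks consume, regardless of article length.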
arXiv Detail & Related papers (2023-04-16T13:30:06Z)
- Multiverse: Multilingual Evidence for Fake News Detection [71.51905606492376]
Multiverse is a new feature based on multilingual evidence that can be used for fake news detection.
The hypothesis that cross-lingual evidence is a useful feature for fake news detection is confirmed.
arXiv Detail & Related papers (2022-11-25T18:24:17Z)
- Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on PropaNews are better at detecting human-written disinformation by 3.62 - 7.69% F1 score on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z)
- A Study of Fake News Reading and Annotating in Social Media Context [1.0499611180329804]
We present an eye-tracking study in which we let 44 lay participants casually read through a social media feed containing posts with news articles, some of which were fake.
In a second run, we asked the participants to decide on the truthfulness of these articles.
We also describe a follow-up qualitative study with a similar scenario but this time with 7 expert fake news annotators.
arXiv Detail & Related papers (2021-09-26T08:11:17Z)
- Stance Detection with BERT Embeddings for Credibility Analysis of Information on Social Media [1.7616042687330642]
We propose a model for detecting fake news using stance as one of the features along with the content of the article.
Our work interprets the content through automatic feature extraction and the relevance of text pieces.
The experiment conducted on the real-world dataset indicates that our model outperforms the previous work and enables fake news detection with an accuracy of 95.32%.
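Using stance as one feature alongside the article content can be sketched as concatenating a stance encoding onto the content features. The stance label set and one-hot encoding below are illustrative assumptions, not the paper's exact design:

```python
# Hedged sketch: encode a predicted stance label and append it to
# content-derived features before classification.
STANCES = ["agree", "disagree", "discuss", "unrelated"]  # assumed label set

def stance_one_hot(stance: str) -> list[float]:
    """One-hot encode a stance label over the assumed label set."""
    return [1.0 if stance == s else 0.0 for s in STANCES]

def build_feature_vector(content_features: list[float],
                         stance: str) -> list[float]:
    return content_features + stance_one_hot(stance)

vec = build_feature_vector([0.3, 0.8], "disagree")
print(vec)  # [0.3, 0.8, 0.0, 1.0, 0.0, 0.0]
```

A headline that disagrees with its own article body is a useful credibility signal, which is why stance complements purely content-based features.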
arXiv Detail & Related papers (2021-05-21T10:46:43Z)
- Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create a NeuralNews dataset composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.